When Shashi Upadhyay, Kent McCormick and Andrew Schwartz co-founded Lattice Engines in 2006, their plan was to offer data integration solutions. But based on market feedback, they soon changed their business to offer sales intelligence software. Upadhyay, who holds a doctorate in physics from Cornell University and worked with management consulting firm McKinsey for eight years before becoming an entrepreneur, says that the Lattice Engines software integrates internal and external data and then uses predictive analytics to present the most relevant information to its customers’ sales teams. Companies, he notes, are increasingly realizing that better data and better information lead to better insights into what consumers want; this in turn enables businesses to use their sales and marketing investments more effectively. In a discussion with India Knowledge at Wharton, Upadhyay talks about how the growth of the Internet has transformed the way data can be used, future trends in data and what Lattice Engines hopes to achieve.
India Knowledge at Wharton: Why are so many people looking at “big data” as the defining trend of the next decade?
Shashi Upadhyay: The term “big data” is an umbrella term that describes a lot of different things that are going on in the world today. It started with the advent of the Internet, where we had a platform that could track user activity in great detail. You could track every click, every site that people went to. When that became possible, a lot of things that people used to dream about, like targeted marketing promotions and measuring marketing ROI on the fly — all the things that you really couldn’t do [using] television or other forms of media — became possible. As mobile devices come into wider use, there are all kinds of new data that you can capture.
But there’s another story that has not received as much attention — big data inside the enterprise, which is where we’re playing. And that itself has two parts. One, over time, companies have been implementing systems that capture data. They put in a CRM system, then a marketing automation system, a service order system and so on. It was very clear even 10 years ago that the companies doing a good job of putting in these systems, capturing the data, analyzing it and forecasting correctly were more productive than others. So these systems have become much more mature inside companies.
At the same time, outside the firewall, massive amounts of data are being generated about the consumers and users of [different] products — [including] product reviews, analyst reports and surveys. All kinds of information that you really couldn’t know earlier about your customers and your users have now become widely available. As that data has become available, there is much more of an appetite to bring it all together. More data means more information; more information means better insights into what people want, which means you’re less likely to waste your sales, marketing and other investments.
India Knowledge at Wharton: On the enterprise side, is it that the software just wasn’t there 10 years ago, or that companies were developing their own proprietary software, or that the math hadn’t advanced enough to handle that level of data?
Upadhyay: The math matters, but it’s not the most important thing. There were two things that needed to happen. One, the technology to collect and store data needed to become cheaper. Two, the mindset that data actually matters and that you can’t just run a large business on gut feel needed to take hold. It took a while for both of these trends to converge.
India Knowledge at Wharton: So the first part of what you’re saying is that the cost of storing a byte of data is almost zero?
Upadhyay: Yes. The marginal cost of capturing, storing and retrieving data is going toward zero.
India Knowledge at Wharton: So people now are more willing to invest in that capturing and storing?
Upadhyay: Right. And that means you can focus your efforts on analyzing the data and inferring its meaning rather than just moving it from one place to another.
India Knowledge at Wharton: How has the mindset changed?
Upadhyay: In the 1960s and 1970s, there was a big wave of artificial intelligence. People thought, “Hey, if I have data, I can create expert systems and do interesting things with it.” But that didn’t work out very well because you didn’t have these massive volumes of data that you could capture. So in a sense, the first trend, the fact that it has become cheaper and easier to capture, store and retrieve data, enabled the shift from a mindset of “math and algorithms are nice to have” to “math and algorithms are actually a must-have.” Once you have that kind of data, your algorithms perform a lot better. A great algorithm working on poor data is still a poor algorithm, whereas with huge amounts of data even a very average algorithm can do a pretty good job.

The Internet created that shift. As more data became available, it became obvious that humans just couldn’t analyze these large volumes and you had to automate the inference process. With all that data in the cloud, it became obvious that analyzing it yields machine-generated insights you couldn’t get before. So why not do the same with non-Internet data: with your CRM data, your internal transaction data, stuff that’s not in the cloud? That mindset of “let’s measure everything” has come into play more recently.
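As a toy illustration of that data-versus-algorithm point (this is not Lattice Engines code; the synthetic dataset and the use of scikit-learn are assumptions made purely for demonstration), a very ordinary classifier tends to improve substantially just by being given more training rows:

```python
# Toy illustration: even an "average" model improves markedly as the
# volume of training data grows.  Synthetic data, not Lattice Engines code.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Synthetic "customer" data: 20 noisy features, binary buy/no-buy label.
X, y = make_classification(n_samples=100_000, n_features=20,
                           n_informative=5, flip_y=0.05, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

for n in (100, 1_000, 10_000, 80_000):
    model = LogisticRegression(max_iter=1000)   # a deliberately plain algorithm
    model.fit(X_train[:n], y_train[:n])
    print(f"trained on {n:>6} rows -> accuracy {model.score(X_test, y_test):.3f}")
```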
India Knowledge at Wharton: Are there going to be more and bigger data companies that use hardcore mathematics to actually discover things that even the best analysts cannot find on their own?
Upadhyay: Yeah, it’s a natural progression. Wherever it is possible to automate insight-generation, it makes sense to do so because machines can do this a lot faster and in a more reliable way than humans.
India Knowledge at Wharton: Would you call this moving over into machine learning?
Upadhyay: Well, I’ll call it … predictive analytics or predictive learning. So in a sense, the machine is trying to guess something about the future or about an instance, just the way a human being would. When we see something, we try to predict what’s going to happen next, and we adjust our models of reality based on what happened.
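A minimal sketch of that guess-then-adjust loop, in the spirit of simple online learning; the function and variable names here are hypothetical and are not drawn from any Lattice Engines product:

```python
# Minimal guess-then-adjust loop: predict an outcome, observe what actually
# happened, and nudge the estimate toward reality (simple online learning).
def update(prediction: float, outcome: int, learning_rate: float = 0.1) -> float:
    """Move the predicted probability a small step toward the observed outcome."""
    return prediction + learning_rate * (outcome - prediction)

prob_will_buy = 0.5            # initial guess: no information yet
observed = [1, 0, 1, 1, 0, 1]  # what actually happened, interaction by interaction

for outcome in observed:
    print(f"predicted {prob_will_buy:.2f}, observed {outcome}")
    prob_will_buy = update(prob_will_buy, outcome)

print(f"adjusted estimate after the observations: {prob_will_buy:.2f}")
```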
India Knowledge at Wharton: Is there a lot of IP that can be generated from that knowledge?
Upadhyay: It depends on what you mean by IP. If you mean patentable information, I am not sure we are there yet, although there are some examples of complex non-linear circuits that were “discovered” by genetic algorithms.
India Knowledge at Wharton: I can see companies that have proprietary technologies in data analysis and discovery actually becoming engines of economic growth and being very profitable in the process because they’ll hold the keys to discoveries that would form the foundations of new companies or new devices or new ideas. Is that something you see happening?
Upadhyay: Well, anyone who is collecting and capturing proprietary data will enable future productivity improvements. For example, we work in the sales world and we are basically trying to capture everything that’s notable about all the things that salespeople do through the day and then correlate that to their performance. Sales teams are incredibly expensive. Say you can make them 19% more productive because you’ve aggregated all this data about what they were doing and made it available to people who can run experiments on it, asking, “I’m pitching this kind of product. What if I organize my sales activity this way instead of that way?” If someone uses the Lattice data, or whoever is holding that data, to come up with a better model that leads to a more productive sales force, that leads to more value creation for the economy as a whole.
India Knowledge at Wharton: How would you describe your company Lattice Engines?
Upadhyay: We provide B2B [business-to-business] sales intelligence software that helps Fortune 500 companies sell smarter. Our sales intelligence software integrates both internal and external data about customers and prospects, finds patterns and trigger events, and uses predictive analytics to present the most relevant information to sales reps at the time of interaction. The software works by integrating with existing CRM, marketing automation and transactional systems to deliver B2B sales intelligence directly to sales reps.
India Knowledge at Wharton: Could you walk me through an example of how your clients use Lattice Engines?
Upadhyay: Let’s take a large computer hardware company that has a direct sales force of several thousand reps selling around 20 different products. A typical B2B rep covers 400 to 500 accounts, so basically they need to keep track of about 8,000 different things. Every quarter they would go down the list, call everyone and say, “Do you want to buy this? Do you want to buy this? Do you want to buy this?” And they would spend the same amount of time every quarter.
What the Lattice Engines solution does is take a look at your transaction data, your purchase and usage data, all these different data sources inside the company, combine that with what’s happening with your customers outside the company, and say, “You have a list of 400 [customers], but you don’t need to call all 400 because half of them just bought last month. Instead you should focus on these 100 and make sure you make contact with them. And there are maybe 100 that you don’t pitch a product to at all and instead spend the time building a relationship.”
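A highly simplified sketch of that kind of prioritization is below; the field names, thresholds and bucket labels are hypothetical illustrations, not the actual Lattice Engines rules:

```python
# Simplified account-prioritization sketch (hypothetical fields, thresholds
# and bucket names): split a rep's accounts into "skip", "pitch" and
# "nurture" buckets, mirroring the example described above.
from dataclasses import dataclass

@dataclass
class Account:
    name: str
    days_since_last_purchase: int
    buy_score: float  # model-estimated likelihood of buying this quarter

def prioritize(accounts):
    buckets = {"skip": [], "pitch": [], "nurture": []}
    for acct in accounts:
        if acct.days_since_last_purchase < 30:
            buckets["skip"].append(acct)      # just bought; no call needed
        elif acct.buy_score >= 0.7:
            buckets["pitch"].append(acct)     # make sure you contact these
        else:
            buckets["nurture"].append(acct)   # low touch: build the relationship
    return buckets

accounts = [
    Account("Acme Corp", days_since_last_purchase=12, buy_score=0.9),
    Account("Globex", days_since_last_purchase=200, buy_score=0.8),
    Account("Initech", days_since_last_purchase=400, buy_score=0.2),
]
for bucket, members in prioritize(accounts).items():
    print(bucket, [a.name for a in members])
```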
India Knowledge at Wharton: How does Lattice Engines actually predict and shortlist from the 400-plus customers?
Upadhyay: Behind that is a lot of data. Step one is to create a single view of the customer across all of your different interaction points and channels. Everything that the customer has done with you, and that you have done for the customer, gets built into a consistent timeline. Next, you have a notion of similarity. If two customers are identified as similar and one of them is doing something, the other is more likely to do those kinds of things too. Third, you have triggers. You look at what each of these customers is doing — [for example] one just had a change of CIO, another just opened up a branch office, another is hiring a lot of people. All that information gets put together in one place, and then, based on that and our past data, we can model which of the hundreds of variables we track are good predictors for each of your products or solutions. By comparing and contrasting [the different variables], the algorithms come out with, “Here is a set of people that we really think you should call and spend a lot of time on, and here is the set of people that you can probably keep in a low-touch mode.”
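Here is a compressed sketch of that pipeline: merge events into one timeline per customer, turn trigger events into features and fit a model that scores each account. Everything in it, from the feature names to the choice of logistic regression and scikit-learn, is an illustrative assumption rather than the actual Lattice Engines implementation:

```python
# Compressed sketch of the pipeline described above (illustrative only):
# 1) unify events into one timeline per customer, 2) turn trigger events
# into features, 3) fit a model that scores accounts for a given product.
from collections import defaultdict
from sklearn.linear_model import LogisticRegression

# Step 1: one consistent timeline per customer, merged from many systems.
events = [
    {"customer": "Acme",   "source": "crm",  "type": "cio_change",   "day": 10},
    {"customer": "Acme",   "source": "news", "type": "branch_open",  "day": 40},
    {"customer": "Globex", "source": "jobs", "type": "hiring_spike", "day": 35},
]
timelines = defaultdict(list)
for e in sorted(events, key=lambda e: e["day"]):
    timelines[e["customer"]].append(e)

# Step 2: turn trigger events into per-customer features.
TRIGGERS = ["cio_change", "branch_open", "hiring_spike"]
def features(timeline):
    kinds = {e["type"] for e in timeline}
    return [1.0 if t in kinds else 0.0 for t in TRIGGERS]

# Step 3: learn which triggers predicted purchases in the past,
# then score current accounts for this product.
X_hist = [[1, 0, 0], [1, 1, 0], [0, 0, 1], [0, 0, 0]]   # past trigger patterns
y_hist = [1, 1, 0, 0]                                   # did they buy?
model = LogisticRegression().fit(X_hist, y_hist)

for customer, timeline in timelines.items():
    score = model.predict_proba([features(timeline)])[0, 1]
    print(f"{customer}: propensity to buy ~ {score:.2f}")
```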
India Knowledge at Wharton: How did you and your co-founders get the idea for Lattice Engines?
Upadhyay: When we started the company, the original idea was quite different. My co-founders, Kent McCormick and Andrew Schwartz, and I started the company in 2006. The three of us had been working on building a low-cost data integration tool. We knew that there was a lot of data siloed inside companies and that if you could get a handle on that data, you could really solve customers’ problems. But just getting data out of these systems, putting it together and doing anything analytical with it was very hard. So we had an approach to data integration called the Lattice model — which is what the company is named after — engines that are built on top of a lattice. It was a very promising direction, but pretty early on, what we consistently heard from the people we were trying to convince to participate in an alpha was, “You guys are solving an interesting problem, but you’re not taking it all the way to the end. I do data integration because I want to know everything there is to know about my customer and I want to know how they’re going to behave in the future. So why don’t you actually focus on that problem instead of just combining all this data and then leaving the problem up to me?”
India Knowledge at Wharton: How did you raise the money for Lattice Engines?
Upadhyay: We raised a seed round of a few hundred thousand dollars from friends and family. We took that prototype to a bunch of people and got some consulting engagements. People were willing to pay us to help them integrate data and perform analyses, but they didn’t want to buy the tool. We ran the company in that mode for almost a year and a half, just doing services, pure services — not selling software at all. Then, at the beginning of 2008, almost two years into the company, we raised about a million and a half dollars; most of it came from angels and a little bit came from Battery Ventures. We used that to build the current product suite. So we took almost two years, learning the market, to decide what problem people really wanted solved and were willing to pay us to solve…. We had our first release at the beginning of 2009, and all the success that the company has had … on the product side has been in the past year and a half.
India Knowledge at Wharton: Does Lattice Engines have a base model for each customer and then customize it or is it up to the customer to fine-tune it?
Upadhyay: Our model doesn’t require company-by-company customization. It does require transforming each customer’s data into the model that’s inside the application. If you have an SAP CRM versus a Salesforce CRM or a Siebel CRM, the data models are different, but in the end you’re still talking about a customer, a product and the geographies they’re in. We have a way of transforming those things so that they actually make sense. Similarly, transaction systems vary a lot from place to place, but ultimately you are selling products to people, and those products carry a price and a discount. We take that information and transform it into something that’s meaningful to a sales or marketing executive. That’s part of what the integration technology does. On top of that, we have a second layer, which is the prediction engine that looks for patterns.
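A bare-bones sketch of that transformation layer is below, mapping two hypothetical CRM export shapes into one common customer/product/geography/price/discount record; the field names are invented for illustration, and real SAP, Salesforce and Siebel schemas are far richer:

```python
# Bare-bones schema-normalization sketch (invented field names): map records
# from two differently shaped CRM exports into one common shape that a
# downstream prediction layer could consume.

def from_crm_a(rec: dict) -> dict:
    """Hypothetical flat export, e.g. a Salesforce-style report row."""
    return {"customer": rec["AccountName"], "product": rec["SKU"],
            "geography": rec["Region"], "price": float(rec["ListPrice"]),
            "discount": float(rec.get("Disc", 0.0))}

def from_crm_b(rec: dict) -> dict:
    """Hypothetical nested export, e.g. an SAP- or Siebel-style extract."""
    return {"customer": rec["client"]["name"], "product": rec["item_code"],
            "geography": rec["client"]["country"],
            "price": rec["net_price"], "discount": rec["discount_pct"]}

crm_a_rows = [{"AccountName": "Acme", "SKU": "SRV-100",
               "Region": "EMEA", "ListPrice": "4200", "Disc": "0.10"}]
crm_b_rows = [{"client": {"name": "Globex", "country": "US"},
               "item_code": "SRV-100", "net_price": 3990.0, "discount_pct": 0.05}]

# One common shape, ready for the prediction layer that runs on top.
unified = [from_crm_a(r) for r in crm_a_rows] + [from_crm_b(r) for r in crm_b_rows]
print(unified)
```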
India Knowledge at Wharton: How is Lattice Engines different from the other data analysis companies that are operating in the same space?
Upadhyay: If you look at the analytics world, there are basically two approaches. The first is the horizontal approach, in which companies give their customers a set of tools. That, by the way, was our original strategy, the data integration strategy. Then there’s a second approach, in which companies take one industry and try to solve all the problems for that industry. For us, that’s B2B. We observed that B2B has very specific problems and needs very specific types of data. You need to take all that domain knowledge about B2B and make sure your software is aware of it. That’s what we have done differently; the algorithms we had to develop are all our own.
India Knowledge at Wharton: Which industries are your strongest customer base right now, and how do you expect that to change five years from now?
Upadhyay: Within the B2B space, we are horizontal. Our solution applies to all major B2B verticals.