In February 2011, IBM’s Watson computing system competed on Jeopardy! against two formidable opponents: Ken Jennings, who holds the record for the longest winning streak in Jeopardy! with 74 appearances, and Brad Rutter, who has earned the largest amount of money on the show at $3.25 million. Watson bested its human competitors, coming out roundly on top at the end of the two-game contest, broadcast over three days. Despite uncertainty about whether the human/computer contest was on a level playing field (much of the success in Jeopardy! depends on buzzer timing, and Watson may have had an advantage in knowing when to buzz in), it was an impressive showing.

And it caused many to wonder what this technology might be able to do beyond the realm of a TV game show. Starting in 2012, IBM began to pilot uses of Watson in health care and other fields. The company has since launched a series of products based on the technology, which it calls “cognitive computing.”

Knowledge at Wharton spoke with Brad Becker, chief design officer for IBM Watson, about current and future applications of cognitive computing and how he hopes to make computers “more humane.” An edited version of the conversation follows.

Knowledge at Wharton: Your background is in user experience design. How does that play a role in IBM’s Watson Project?

Becker: [It’s based on] the idea that technology should work for people, not the other way around. For a long time, people have worked to better understand technology. Watson is technology that works to understand us. It’s more humane, it’s helpful to humans, it speaks our language, it can deal with ambiguity, it can create hypotheses, it can learn from us. And, of course, since it’s a computer, it can scale as much as needed and has recall far beyond what humans have.

You take the traditional strength of computers, but in a way that’s more comfortable and efficient for people — more humane, I like to say. And it allows experts, or even non-experts, to do much more than they could otherwise.

Knowledge at Wharton: When you speak of a “more humane” computer, what does that mean?

Becker: Technology, traditionally, is created by technologists. That sounds like it’s a tautology, but the people who are usually creating technology love the technology and accept it as it is. Alan Cooper wrote a book called The Inmates Are Running the Asylum talking about this problem. What’s the solution?

Part of the solution is to take time to focus on who is going to be using the technology, what their needs are, how humans work, what’s the ethnography and the cognitive psychology of the people who are actually using the technology. How do we better fit the technology for humans? It’s sort of like ergonomics with furniture.

Here we use IBM Design Thinking, and we look at the business problem — both for IBM and their clients — as well as using hands-on research to understand the end users and what their specific needs are in context. We also look at things that are more horizontal: How can this technology, in general, work better for people and be at the service of people? Have you ever struggled with technology and thought, “Who came up with this?” Or felt like maybe you were dumb, because you couldn’t understand how to use this tool that was supposedly meant for you?

That is what we’re really after. We’re trying to come up with this idea of cognitive computing future today. The whole focus of this is that technology should work for people, and not the other way around. It starts with what we need and what we think is helpful for humans. How do we help augment humans? The bicycle didn’t replace legs, it augmented what they could do. That’s our goal: to take what humans are good at and then supplement the things humans aren’t good at, such as reading 50 million passages and remembering every word; making it possible for a human, with the help of Watson, to be able to do much more than they could without Watson.

Knowledge at Wharton: Can you explain in basic terms how Watson does what it does?

Becker: [Watson] is not a copy of the human brain, but it takes a similar approach [to solving problems] in that there are multiple, completely separate approaches running in parallel. We handle different kinds of queries differently, depending on the nature of that question.
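
As a loose illustration of “multiple, completely separate approaches running in parallel,” here is a minimal sketch in Python. It is not IBM’s actual DeepQA pipeline; the scorers, the candidate structure and the example data are all invented for illustration. Several independent scorers each rate candidate answers, and their scores are merged into a single confidence before the top candidate is returned.

```python
# Toy parallel-hypothesis sketch: independent scorers rate each candidate
# answer, and their scores are averaged into one confidence. All names and
# data here are invented; this is not IBM's DeepQA code.

def keyword_scorer(question, candidate):
    # Fraction of the question's words that also appear in the
    # candidate's supporting evidence text.
    q_words = set(question.lower().split())
    e_words = set(candidate["evidence"].lower().split())
    return len(q_words & e_words) / max(len(q_words), 1)

def type_scorer(question, candidate):
    # Crude answer-type check: "Who...?" questions expect a person.
    expected = "person" if question.lower().startswith("who") else "thing"
    return 1.0 if candidate["type"] == expected else 0.0

SCORERS = [keyword_scorer, type_scorer]

def answer(question, candidates):
    # Each scorer runs independently on every candidate; the merged
    # (averaged) confidence decides which candidate wins.
    def confidence(c):
        return sum(score(question, c) for score in SCORERS) / len(SCORERS)
    return max(candidates, key=confidence)["text"]

candidates = [
    {"text": "Ken Jennings", "type": "person",
     "evidence": "Ken Jennings holds the longest Jeopardy winning streak"},
    {"text": "1984", "type": "thing",
     "evidence": "the year the Macintosh launched"},
]
print(answer("Who holds the longest Jeopardy winning streak?", candidates))
```

In a real system each scorer would be far more sophisticated, but the shape is the same: many independent lines of evidence, merged into one ranked answer list.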

We’ve also moved beyond just question and answer to discovery, where you’re looking not just for “the answer.” The answer today, if one exists, is not necessarily the same as what the answer will be tomorrow. Things change quickly; there’s ambiguity. Watson is good at dealing with that.

Discovery is an interesting application because you’re looking for weak signals in the noise. It’s a big data problem, but [one] where you’re not just looking for the most obvious things. You’re not just running a linear regression or doing typical shallow machine learning. It’s a great example of a human expert and Watson working together to sift through all of this and find the needle in the haystack.

There are some examples in the press recently [describing how] Baylor [College of Medicine] made quite a few discoveries by plugging Watson [into] all the material that’s available, just to test it. They applied it against older material to see if Watson would come up with the same discoveries that the scientific community had come up with in the last decade, and Watson found several of them within a matter of weeks.

Knowledge at Wharton: So you’re regression testing against things that have already been discovered by humans, and then seeing whether Watson can come to the same conclusion?

Becker: Right. It takes 18 years to train a human to the level we call “adult.” Watson, by contrast, can be trained in weeks or months to provide value in a particular area or domain. In this particular case, Baylor did this to check out the Watson technology for themselves by applying it to their own data. And, sure enough, Watson was able to find those hidden connections that were already out there.

Knowledge at Wharton: You said that the kind of cognitive computing that Watson does is not quite the same as the way humans think. How are they similar and how are they different?

Becker: Well, for one thing, the human mind is interesting in that it’s a very low-power, small, portable computer attached to a stomach that can run on berries and nuts. There are all these physiological aspects of the human brain that are both limitations and strengths. There are specializations of the brain.

We’re not looking to reproduce the human brain, because quite frankly, nature’s done that just fine. We’re looking at the way the human brain works and the way people like to work, then looking at what traditional computers do and saying: it’s not a great fit. By learning from some of the things the human brain does, we can make computers more useful to people. We don’t want to lose the perfect recall, the ability to scale and the speed at which computers can do rote tasks, but we do want to mix in some of what we’ve learned from the human brain and the way all its different pieces work together in unison to arrive at understanding.

We’re also doing a lot of work on natural language processing, so that computers can speak the same language we do. Since the story of the Tower of Babel, we’ve known that being able to speak a common language is important in understanding or working with someone. Until now, computers have only spoken code, and you had to program them. Now we’re moving to a world where you can actually use human language to work with a computer. Instead of programming a computer, you can train it.
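
The “train, don’t program” idea can be sketched with a toy text classifier. Instead of hand-coding rules, we show the program labeled examples and let it learn word statistics (a simplified naive-Bayes scheme). This is an invented miniature, not Watson’s NLP stack; the labels and example sentences are made up.

```python
# Toy trainable text classifier: no hand-written rules, just labeled
# examples. A simplified naive-Bayes scheme, invented for illustration.
from collections import Counter, defaultdict
import math

class TinyTextClassifier:
    def __init__(self):
        self.word_counts = defaultdict(Counter)  # label -> word frequencies
        self.label_counts = Counter()            # label -> training examples

    def train(self, text, label):
        # "Training" is just showing the classifier one labeled example.
        self.label_counts[label] += 1
        self.word_counts[label].update(text.lower().split())

    def classify(self, text):
        words = text.lower().split()
        def log_score(label):
            total = sum(self.word_counts[label].values())
            # +1 smoothing so unseen words don't zero out a label.
            return math.log(self.label_counts[label]) + sum(
                math.log((self.word_counts[label][w] + 1) / (total + 1))
                for w in words
            )
        return max(self.label_counts, key=log_score)

clf = TinyTextClassifier()
clf.train("my order has not arrived yet", "shipping")
clf.train("where is my package", "shipping")
clf.train("i was charged twice", "billing")
clf.train("refund my payment", "billing")
print(clf.classify("has my package arrived"))
```

The point is the workflow, not the math: to change the classifier’s behavior, you add examples rather than rewrite code.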

Knowledge at Wharton: You mentioned the project IBM did with Baylor College of Medicine. Tell us more about that.

Becker: The Baylor College of Medicine and IBM joint project used a particular toolkit with Watson to identify proteins that modify p53, a protein related to a lot of different cancers. They looked at 70,000 scientific articles on p53, predicting proteins that turn the activity of that protein on or off. They found six potential proteins to target for new research. In general, the industry discovers one protein a year that might be interesting. By looking at 70,000 articles, Watson was able to handle all of that information at scale and came up with six promising directions. Humans will go and explore those.
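
As a rough sketch of what literature-scale candidate ranking can look like (this is not the actual Baylor/IBM method, and the abstracts and protein names below are invented placeholders), one can count how often each candidate protein is mentioned in the same abstract as p53 and surface the most frequent co-mentions as leads for human researchers:

```python
# Toy literature-mining sketch: rank candidate proteins by how often they
# co-occur with p53 in the same abstract. Abstracts and protein names are
# invented placeholders; this is not the Baylor/IBM method.
from collections import Counter

abstracts = [
    "kinase A phosphorylates p53 in response to DNA damage",
    "p53 activity is modulated by kinase A",
    "kinase C regulates the cell cycle through separate pathways",
    "binding of kinase B alters p53 stability",
]
candidates = ["kinase a", "kinase b", "kinase c"]

def rank_candidates(abstracts, candidates, target="p53"):
    # Count co-mentions only in abstracts that discuss the target at all.
    co_mentions = Counter()
    for text in abstracts:
        text = text.lower()
        if target in text:
            for name in candidates:
                if name in text:
                    co_mentions[name] += 1
    # Most frequently co-mentioned candidates first; never-co-mentioned
    # candidates drop out entirely.
    return [name for name, _ in co_mentions.most_common()]

print(rank_candidates(abstracts, candidates))
```

Real systems use far richer signals than raw co-occurrence, but the output has the same shape: a short, ranked list of leads for human experts to investigate.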

Knowledge at Wharton: There are a number of different Watson-based products: the medical applications you’ve mentioned, a product called Chef Watson, etc. Can you give an overview of the products that are being spun off of the core technology?

Becker: Yes. We have Watson Engagement Advisor, [which is] a solution that allows you to have a better relationship with your customers. It takes your unstructured data and makes it available to your customers; it assists them to go in for themselves and find the information they’re looking for. You also have the ability as a business, then, to see the questions that are being asked, etc.

The second thing, that we talked about already, is Watson Discovery Advisor, which is about finding those connections and relationships – [for example,] using Watson to look through law enforcement databases and unstructured data to find potential connections between suspects, between events, etc.

We’ve talked about drug discovery where you can look for: What are the connections between different elements that have not been adequately explored? It’s one thing to find an obvious connection, but it’s another to go find a weaker, less obvious or more indirect connection that warrants exploration.
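
A toy way to picture these “indirect” connections (again, not Watson’s algorithm; the entities below are invented): two entities that never appear in the same document may still be linked through a shared intermediate, and those two-hop links are the nonobvious candidates worth a human expert’s attention.

```python
# Toy indirect-connection finder: flag entity pairs that never co-occur
# directly but share an intermediate. Entities are invented placeholders;
# this is an illustration, not Watson's algorithm.
from itertools import combinations

# Each set is one document's entities.
documents = [
    {"drug X", "protein P"},
    {"protein P", "disease D"},
    {"drug X", "protein Q"},
]

def direct_pairs(documents):
    # Every pair that co-occurs in at least one document.
    pairs = set()
    for doc in documents:
        pairs.update(frozenset(p) for p in combinations(doc, 2))
    return pairs

def indirect_links(documents):
    direct = direct_pairs(documents)
    entities = set().union(*documents)
    links = []
    for a, b in combinations(sorted(entities), 2):
        if frozenset((a, b)) in direct:
            continue  # already an obvious, direct connection
        # Shared intermediates: entities directly linked to both a and b.
        shared = [m for m in entities
                  if frozenset((a, m)) in direct and frozenset((b, m)) in direct]
        if shared:
            links.append((a, b, shared))
    return links

print(indirect_links(documents))
```

Here “drug X” and “disease D” never co-occur, yet both connect to “protein P”, so the pair surfaces as a candidate lead; the direct pairs, being self-evident, are skipped.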

Chef Watson is another Discovery domain, where you’re discovering how there might be interesting connections between different ingredients [in cooking recipes]. The fact that people like ice cream on top of apple pie is not very interesting, but the fact that strawberries and mushrooms have a similar chemical makeup and they might pair well together is interesting — in a sense of discovery — when people are looking for something interesting and new.

Another project is Watson Explorer, which started from Enterprise Search. It’s one thing to explore the bodies of knowledge that already exist, but every institution has their own internal bodies of knowledge, as well. Watson Explorer is a way to collect all the information that’s already in your organization and make it available for people to explore and to find information and answers.

We just announced the Watson Developer Cloud. Now developers can come in, use Watson’s services and IBM Bluemix [IBM’s cloud platform] to create their own cognitive applications based on the services that we expose there. Students, universities, companies and even independent developers can come in and kick the tires and play with cognitive computing.

Knowledge at Wharton: Are all of these hosted, cloud-based services?

Becker: No. The Watson Developer Cloud is a hosted, scalable SaaS [software as a service] solution. There are also solutions that are based on cloud technologies but are hosted locally at the company.

Watson Explorer, for instance, works locally and finds all your information. Cloud technologies, APIs and scalable servers are always part of it, but there are on-premises versions, depending on customer preference.

Knowledge at Wharton: When you talk about Watson working in areas like health care and crime detection, should we be concerned that people will have too much faith in its analysis? For example, we’ve seen cases in which the knowledge that the spouse is the most likely suspect when a husband or wife is killed can short-circuit the exploration of other scenarios. Is there a similar concern with Watson: that we’ll have so much confidence in its analysis that other avenues, which are less likely but could still be correct, may not be pursued?

Becker: That’s actually the main purpose of Discovery Advisor — to look for potential, subtle connections, not necessarily the obvious ones. The obvious connections are self-evident, so you don’t need Watson to find [them]. The fact that a spouse is an obvious suspect in a domestic murder case — you don’t need Watson for that.

The Discovery Advisor is focused on the opposite [problem]: looking for all those subtle, indirect, weak signal connections; finding the nonobvious connections and the fertile ground for investigation and for human expertise to pay attention; helping humans find where the needle in the haystack might be.

Knowledge at Wharton: This notion — that computers should work like people, rather than people working like computers — has been around for quite a while, going back to at least Apple’s Macintosh in 1984. In fact, Steve Jobs also used the bicycle analogy you mentioned. He saw the Macintosh as an “engine for the mind.” What’s taken us so long to make progress in this area?

Becker: This is a really hard, worthy challenge. There are a lot of different aspects of it. Understanding people is one challenge. There are still a lot of things we don’t understand about our own brains and our own behaviors, let alone how technology can help us take them further.

But some of this is just common sense, practical things. At IBM, we’ve hired a lot of people who are focused on building more humane computing. We have not just visual designers who make things look appropriate and refine the visual details, but also people who work through the workflows that customers are trying to do.

What is it like to be a life scientist or drug researcher? What’s it like to be a customer support representative? We work with [financial services and insurance company] USAA; what’s it like to be separated from the military? What are the questions you have? What are the concerns? What’s your state of mind? What’s that like?

We actually do ethnography. We sit with these sorts of people to understand what they care about. There are a lot of common patterns, and we’re identifying some of those, but the fact of the matter is that humans are kind of messy and complicated. We’re trying to create technology that interfaces with something that is dynamic and complicated.

Knowledge at Wharton: It seems like for many companies this issue of user experience design is often an afterthought. Is that a fair assessment?

Becker: Traditionally, I think it was completely an afterthought. Think about a physical space, like a house: if every time you walked in there were a wall two feet in front of you, you’d run into it, or you’d hit your head on something really low. We set up standards for these things. For some reason, virtual technology has been somewhat immune to that. I think it’s catching up now. You see it across the web, across applications: there’s an increased focus in the industry on user experience.

Knowledge at Wharton: If an executive came to you and said, “I want to differentiate my product by its user experience,” what advice would you give?

Becker: The best thing is to get out of the building and watch people use your product. That will tell you what the problems are. For the solutions, there are great ways — like IBM Design Thinking — that help you brainstorm, try out, fail fast, go in and focus and come up with what are the most important solutions for these problems. But it starts with understanding your users and, of course, understanding the capability of your technology. Most tech companies are good at doing that side of it, but understanding your users is the number one thing.

The second thing I would say is to hire professionals: people who have experience in this, who have a passion for it, and who know how to shepherd a culture that promotes it.

Ultimately, your culture has to promote it. You mentioned Apple — it’s not that they necessarily have the most designers, but from Steve Jobs on down there was an appreciation of design and the importance that things work well for people and that you keep in mind why you’re doing it. That same culture has been growing at IBM. You have to inculcate a culture that says, “At the end of the day, we’re trying to solve a problem for somebody or provide some sort of value for someone. We’d better understand and be able to articulate what that is.”

Knowledge at Wharton: How will technologies like Watson reshape the employment landscape in the future? Won’t these technologies eliminate a lot of jobs?

Becker: It’s funny, because I was looking at some material in the history of IBM, talking about computers back in the ’60s. There were all these discussions in the ’50s and ’60s about how these new computers were going to replace humans in the office, and there would be no more office jobs. I don’t know about you, but I work in offices and there are a lot of people there — and lots of computers, too.

Humans are interesting and complex and they are hard to directly replace. But cognitive computing can help us with what our own limitations are, and help us to branch beyond those limitations.

Knowledge at Wharton: Any thoughts on when the singularity will occur? When will computer thought outpace that of human beings?

Becker: I can’t see those lines completely converging. I’m not sure we ever get there 100%. The progress in that direction will be surprisingly helpful to us — especially, as we understand ourselves better, we can make our technology work better for us. But it’s not clear to me that there’s going to be a world [in which computers surpass humans]. Because ultimately, a human has to come up with how a computer or a machine could get to the point of creating itself and creating others. It still has to be devised by humans to do this.

Knowledge at Wharton: Looking out ten or more years, then, what is the future for cognitive computing?

Becker: It’s hard to see all the fruit that this will bear. It’s really exciting already, but I think we’re going to see more of the promises fulfilled. It’s going to get easier, it’s going to be faster, it’s going to be more ubiquitous. You’re going to see the fulfillment of the promise that technology will be more focused on people, more adaptable to people, more useful, more humane.

I think a lot about how the technology can serve us. Sometimes, as we have more and more technology in our lives, it feels like we’re serving the technology. We look at the carbon footprint of things, which is great, but I think there’s a responsibility in the tech industry to make sure that you also look at the cognitive footprints of the technology that you’re creating. Is it more of a burden than a blessing? We want to make sure that we understand the users and what they need, so that we can create a technology that is adapted to that, and is a net positive benefit.

That combination of making sure that technology serves us, and of seeing how Watson and cognitive computing can fit with humans far better than the traditional super calculators we’ve had, is, I think, exciting.

Image credit: “IBM Watson” by Clockready – Own work. Licensed under Creative Commons Attribution-Share Alike 3.0 via Wikimedia Commons.