Wharton’s Elizabeth (Zab) Johnson and Michael Platt join Eric Bradlow, vice dean of Analytics at Wharton, to discuss how AI is being used in neuroscience to better understand the human brain. The conversation covers remarkable research developments in measuring brain activity, replicating speech and mobility, mental health, and more. This interview is part of a special 10-part series called “AI in Focus.”
Watch the video or read the full transcript below.
Transcript
Eric Bradlow: Welcome everyone to the latest edition of the Analytics at Wharton, AI at Wharton podcast series. We’re doing a series here on artificial intelligence, and today’s episode is looking to be extremely exciting. I’m happy to be joined today by two of my colleagues. The first is Elizabeth “Zab” Johnson, who is the executive director and senior fellow of the Wharton Neuroscience Initiative. And my colleague Michael Platt, who — this will take a minute, listeners — is the faculty director of the Wharton Neuroscience Initiative. He’s the James S. Riepe Penn Integrates Knowledge Professor. He’s in my home Department of Marketing. He’s also in the Department of Neuroscience in the Perelman School of Medicine. And he’s also in the psychology department in the School of Arts and Sciences. Michael and Zab, welcome to our podcast.
Zab Johnson: Thanks for having us, Eric.
Michael Platt: Yes, thanks Eric. It’s great to be here.
Bradlow: Why don’t we start with the basics? Zab, I’ll start with you. For our listeners who don’t know, of course they can go to your website and see all about it, but what is the Wharton Neuroscience Initiative? And then we’ll get into what it has to do with our episode today, AI and Neuroscience.
Johnson: Great. The Wharton Neuroscience Initiative is a research center under Analytics at Wharton here at the Wharton School. Really, what we’re doing is helping businesses, individuals, and society writ large understand how neuroscience might impact their lives. We’re working on all different levels. Education, of course — here we are at the University of Pennsylvania, trying to lead the charge in encouraging curiosity and engagement in the neurosciences writ large. We’re not trying to change business students into neuroscientists, but we want them to be aware of the power of looking under the hood, at brain activity itself.
And then we have research, an active research portfolio. A lot of times we do that in conjunction with companies to answer and think about questions that haven’t yet had an academic and practice partnership, to do bigger and better things out in the wild. Of course, engagement is really our third pillar. It’s encouraging people to think very broadly about this 3-pound organ that they have inside their heads and to think about how that might impact their lives, their own individual work, and how they live their lives.
Bradlow: Michael, maybe a question for you. When people ask me what I do for a living, I say, “I’m a professor. I’m a statistician.” But really, I say, “I’m a professional data-chaser. I chase interesting forms of data.” Could you tell us, since you and I are the same age and have been in academia basically the same amount of time: how has technology changed the field of neuroscience? When I was a young researcher, you had to put people in an fMRI machine, if those even existed. You’ll have to tell me if those existed 30 years ago. You had to put people in an fMRI to get brain activity. How has technology changed that aspect of what you do?
Platt: That’s a great question. What’s interesting from the perspective of neuroscience is the kind of data that we get. It’s not directly observable data. It’s not data that people can typically express verbally, because we don’t have good access to what’s going on in our heads, and just being asked changes our appreciation of it. The technology in neuroscience is bewildering now; every year and every day, the myriad ways in which we can measure and actually manipulate brain structure and brain function just keep multiplying. The vast majority of those technologies are not readily applied to humans, because they would involve putting something in your head.
Now that said, there is an active race in the private sector even right now, Neuralink being one amongst many companies creating and building implants, with a vision that someday maybe all of us will be perfectly happy to have some sort of technology within our heads that could allow us to communicate with computers, even communicate with each other. But fMRI still exists. It has been around for some 40 years, and it is still a really great tool for peering deep into the brain. But it’s expensive, it’s cumbersome, it’s not very scalable. You can’t put it on a consumer’s head while they’re walking around shopping at Walmart, or on your employee’s head while they’re at work. So, we reserve it for specific kinds of studies, like testing hypotheses about why somebody made a risk-averse decision versus not. It provides a kind of foundation. What we try to do from there is use that as a springboard for applying other, more scalable technologies that can be done with many more people in a lighter-weight, cheaper fashion that’s more useful for business.
I think that’s why we are in a really exciting place in the last decade, and especially in the last couple of years: the development of wearable neurotechnology with very high signal quality. There’s a whole variety of gizmos that are on the market or coming to market. We’ll talk about this some more, obviously, but combining that with advances in AI and machine learning puts us in a position to really capitalize on the ability to measure brain activity in the real world at scale, with thousands, maybe millions of people, in a much wider array of activities than is possible in the laboratory. So that’s going to give us incredible new insights, I believe.
Bradlow: One of the things, as both of you know, that we asked you to do before this episode was to write a set of questions that I could ask you. And this episode is no different. As a matter of fact, Zab and I joked before you got here, Michael. She joked with me, “Who do you think wrote these questions, us or ChatGPT?” — which is a perfect segue to my first question. Either one of you can answer this. As a matter of fact, this is one of those times when I’m asking a question where I actually really, really don’t know the answer. What is organic intelligence, and how is this different from artificial intelligence? Zab, I’ll start with you. What is organic, artificial? What does that mean to you as someone who’s trained in neuroscience?
Johnson: I think it starts with just thinking about organic matter, right? In general, the term “organic intelligence” is used when the system in question is carbon-based.
Bradlow: That’s what I have.
Johnson: Yes, so that’s what you have, the wet, gooey stuff that’s inside. But all of the animal kingdom has that, too. So, you can argue about intelligence and different metrics of intelligence, and we can talk about that later. But in general, it’s this idea that you have a carbon-based nervous system that produces a certain kind of behavior and integrates signals in a certain kind of chemical and electrical system. In opposition to that, what we classically think of as AI is done in silico, right? That’s the way it has been so far. We’ll see: if we start to grow intelligence artificially in the lab using wet stuff, that might change, and I think some people would argue that there is already carbon involved. But I think the root of this definition is actually having a nervous system, rather than not having one and doing it artificially.
Bradlow: Another interesting question that you guys put down is: What are some common traits shared between AI systems and the human brain? Or let me ask you another question. Should people who are building artificial intelligence systems today have a team of neuroscientists working with them? Because in some ways, if one of the goals of artificial intelligence is to mimic human intelligence, shouldn’t we actually understand how the human brain works before we try to build systems that mimic some aspects of it?
Platt: Well, I think there are several questions in there. What are the goals of AI, or of AI researchers? One might be to mimic the properties of human neural systems, but maybe we can do it in a different way. And I think that’s what we’re seeing now. Maybe we can even go beyond the capabilities of human intelligence.
I think that there are a number of different commonalities between the two, and it’s kind of interesting when you think historically about where some of the basic algorithms of machine learning, like reinforcement learning, came from. They actually had their origin in psychology and in neuroscience. So there has been this really interesting feedback loop.
Bradlow: I never thought about that. Would what Pavlov did with his experiments, would those be considered reinforcement learning?
Platt: Those are the origins of reinforcement learning, and actually the basic reinforcement learning model was written out here by a professor in psychology, Bob Rescorla, back in the 1960s and early ’70s. Penn has a really important part in the history of the development of AI. But for a long time, I think we looked at the human brain, or animal brains in general, and said yes, reinforcement learning is important for learning to navigate and learning what’s good and what’s bad, what to approach, what to avoid. But maybe it didn’t go much deeper than that. And then there are these circuits, and this is certainly true, circuits with prescribed functions that are sort of built in, if you will, kind of hard-wired. And the other thing is that there are constraints on neural function. OK, we’ve got a 3-pound device in our heads, 86 billion neurons, 100 trillion connections, but it’s actually pretty limited, honestly. And it’s limited by energetic constraints, which I think we’ll talk about in a bit.
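As a rough sketch (the function and variable names here are our own illustration, not from the conversation), the Rescorla-Wagner model Platt mentions nudges a cue’s predicted value toward the reward it forecasts by a fraction of each trial’s prediction error:

```python
# Rescorla-Wagner learning rule: the associative strength V of a cue is
# updated on each trial by a learning rate (alpha) times the prediction
# error, i.e., the difference between the reward received and the reward
# the cue currently predicts.
def rescorla_wagner(rewards, alpha=0.1, v0=0.0):
    """Return the trial-by-trial associative strength for one cue."""
    v = v0
    history = []
    for r in rewards:
        v = v + alpha * (r - v)   # prediction error: reward minus expectation
        history.append(v)
    return history

# With a reward of 1 on every trial, V climbs toward 1, mirroring the
# acquisition curves seen in Pavlovian conditioning.
acquisition = rescorla_wagner([1.0] * 50)
```

The same error-driven update, generalized across time steps, is the core of the temporal-difference reinforcement learning used in modern AI systems.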
Whereas AI, what are the limits? How big is the data warehouse? How many servers can you put underneath the hood of ChatGPT? I think the thought was that that brute-force type of approach in AI and machine learning couldn’t deliver the kinds of intelligent, creative thinking that human beings do. But in fact, that is exactly what we’re seeing. And now when we look back at human brains, I think we’re starting to rethink that conceptualization: actually, our brains have a ton of experience under the hood. So evolution, and then from the time you’re an infant, you’re being bombarded with all this information, and that’s a lot of time for that system to learn, using similar principles like gradient descent, which is really at the root of AI.
Now when we go back and look at what you might call neurons in an artificial neural network, and at real neurons in a biological neural network, they often have very similar properties, which they seem to have arrived at through processes like gradient descent.
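Since gradient descent comes up here as the shared principle, a minimal illustration may help (a one-parameter least-squares fit; the setup is ours, not the speakers’):

```python
# Minimal gradient descent: fit a single weight w so that w * x
# approximates y, by repeatedly stepping down the gradient of the squared
# error. Artificial neural networks apply the same idea to millions of
# weights at once.
def fit_weight(xs, ys, lr=0.01, steps=500):
    w = 0.0
    for _ in range(steps):
        # gradient of 0.5 * sum((w*x - y)^2) with respect to w
        grad = sum((w * x - y) * x for x, y in zip(xs, ys))
        w -= lr * grad
    return w

xs = [1.0, 2.0, 3.0]
ys = [2.0, 4.0, 6.0]   # generated by y = 2x, so w should approach 2
w = fit_weight(xs, ys)
```

Each step moves the weight a little in whichever direction reduces the error, which is why long training (or, in Platt’s framing, a lifetime of experience) matters so much.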
Bradlow: Zab, let me ask you, how is artificial intelligence being used in your field of neuroscience today? I always say there are at least two sides to AI. One is the more traditional one, in which images can now be ingested by an engine and data can be output. For example, you could take a voxelized picture of blood flow in the brain, jam it into an AI engine, and out could shoot a big, long vector of stuff. That’s kind of the more traditional.
The other could be more on the generative AI way, so any thoughts to our listeners about how AI is being used in neuroscience?
Johnson: Yes, it’s being used in so many different ways. I’m a visual neuroscientist, and I think that some of the beginnings, actually, of the way that cognitive science, vision science, and visual neuroscience came together with AI and engineering happened really early. It was oftentimes through the neuroscientists who were thinking about vision, how we see, or how we can even train machines to see, that some of the very first of these algorithms emerged. And, actually, I think deep neural nets and CNNs, for example, were an outgrowth of work by people in my discipline.
Bradlow: I remember, “Is that a cat?” You’d have some picture, an image with a bunch of pixels, and then it’s putting it into some neural net, some compression engine, and it’s got an encoder and a decoder. I agree with you. I think vision was probably one of the earliest ones that got people excited.
Johnson: Yes, and I think one of the really interesting things in that dialogue was that you could get to the same answer. You could actually make machines see, but it turned out to be fundamentally different from the way that humans do it, right? So, I think that some of the discoveries happening now are about how AI can actually inform how we understand neural processing. One of the powers of AI, I think, is that it can seek patterns, and even multiple answers to a single question, in a way that seems to push past the limits of single investigators or even teams. I think we’re about to learn much more. Some of the most innovative work right now is happening where you can see communities of both AI researchers and neuroscientists back in dialogue with one another. There’s some recurrence in that conversation.
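For readers unfamiliar with the CNNs mentioned above, the core operation is a small filter slid across an image. A toy sketch (our own illustration, not any specific model) shows how a two-pixel filter can detect a vertical edge:

```python
import numpy as np

def convolve2d(image, kernel):
    """Valid-mode 2D cross-correlation, the core operation in a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.zeros((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A toy image: dark left half, bright right half.
image = np.zeros((5, 6))
image[:, 3:] = 1.0

# A vertical-edge filter responds only where brightness changes left to right.
edge_filter = np.array([[-1.0, 1.0]])
response = convolve2d(image, edge_filter)
```

A trained CNN stacks many layers of such filters, with the filter weights themselves learned by gradient descent rather than hand-designed as here.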
I think we’re in the very beginnings of seeing what the power will be. But to give you some concrete examples: a couple of researchers, Yukiyasu Kamitani, who is at Kyoto University, and Frank Tong, who is at Vanderbilt University, did some of the foundational work a little over 20 years ago to see whether you could decode, using algorithms, the information that people were seeing. That was sort of the beginning, and that was long before we had generative AI. We were just thinking about algorithms and the power of pattern detection.
Then quickly after that, Jack Gallant and Alex Huth — Alex is now at the University of Texas at Austin, and at the time he was in Jack’s lab at UC Berkeley — were thinking about semantic meaning, the kinds of brain activity that give rise to semantic meaning, but also visual and movie decoding. They were starting to use algorithms to look at the patterns of brain activity to help decode, from a researcher’s standpoint, what people had actually seen. And Kamitani then did this really interesting thing where he actually decoded dreams.
One of the interesting aspects of that work is that it can tell you something that people may actually have a really hard time reporting, right? Or that’s impossible to report: another kind of imagination or visualization.
Bradlow: Michael, let me ask you a follow-up question to what Zab said. I tend to be — and maybe this is why I’m not a basic scientist. Well, I’m a scientist. I’m a basic scientist, sort of. I tend to believe in things that have a low-dimensional representation, but maybe the world is really complex. Zab mentioned something about pattern recognition. How much of the future of what we’re going to learn in neuroscience is because we can take this very high-dimensional, or let’s call it three-dimensional, time series data, put it into some AI engine, and notice some 86-dimensional interaction that no human could possibly find? Is the world built that way, with 86-dimensional interactions? Or is it, “No, if you understood these” — I’ll make it up. I’m literally making it up. And you’re going to correct all of my vocabulary.
If these neurons or voxelized areas, if I put them into the right bracket, then it really is just a three-dimensional, four-dimensional thing. What are we going to learn from AI?
Platt: Wow, that’s a deep, big question, Eric.
Bradlow: Well, we have six to eight hours here on this episode, so please, elaborate.
Platt: You know I can talk for a long time.

Bradlow: I know you can.

Platt: I have sort of two answers to that. On the one hand, in the applied sense, maybe it doesn’t even matter. But look at what it allows us to do. There are three striking examples of this, kind of following up on what Zab talked about. This year there have been three major discoveries, publications, whatever you want to call them, where you take this very high-dimensional data, reduce it, and turn it into something useful.
There was one study that was the culmination of decades of work by a group in Lausanne that basically took a gentleman who had a spinal cord injury. He had been paralyzed for a couple of decades. You take the data out of the brain, you feed it through a machine learning algorithm, and rather than trying to actuate a robotic exoskeleton or something like that, you pipe it back into the spinal cord, beneath the site of the injury, and now the guy can walk.
Bradlow: Yes, I read that article.
Platt: He can really walk. Breathtaking. Who knows how it’s working, what is really happening? Does it tell you how the brain works? Not necessarily, but it’s incredibly useful. Similarly, work out of Eddie Chang’s group at UCSF allowed a woman who had been unable to speak for a long time, more than a decade, due to a stroke, to actually have a conversation in real time with her husband.
Speech is generally taken to be the most important aspect of being a human being. Those are the parts of the brain you want to avoid injuring in any kind of surgery. And now she can actually have a conversation that is meaningful. And then another decoding one, kind of building on what Zab talked about, an fMRI study — and fMRI, let’s appreciate, is not a great technology. It’s slow. It’s sluggish. It relies on blood flow. Not very precise, right? But in that study, the scientists were able to decode what a person was reading — not word-for-word, but the gist, which I think is really interesting; and not from language areas, but from all over the brain. And they could then decode what they were thinking. Now, it’s idiosyncratic to each individual. You couldn’t take my brain decoder and put it on you. It probably wouldn’t work, although we share a lot of similarities.
Those are ways in which the dimensionality reduction, dealing with all that complexity, is just helpful, just useful, right? But I think in other ways, consider some of the work that we’ve done. We have a paper in review right now in which we recorded data from thousands of neurons in monkeys. And these monkeys, rather than being engaged in a task, are actually just doing monkey stuff with each other. They’re engaged in totally natural behavior, something like 27 different behaviors. Usually in any kind of experiment, it’s one or two different things. On the face of it, if you looked neuron by neuron, which is what you would typically do, the data looked kind of boring and didn’t tell us that much.
But take all that data together, the whole pattern, which is 1,000-some-dimensional data, and do something with it. We used UMAP, which is one method of dimensionality reduction, to pack that into three dimensions. Suddenly, all of that population data clusters into distinct groups, one for each of those 27 different behaviors. And not just that, but who you’re doing it with, who is next to you, and what’s going on.
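UMAP itself is a nonlinear embedding (available in the umap-learn Python package); the basic move described here, packing thousands of neural dimensions into three, can be illustrated with its simpler linear cousin, PCA, on synthetic data (entirely our own sketch, not the study’s pipeline):

```python
import numpy as np

def reduce_to_3d(activity):
    """Project (trials x neurons) activity onto its top 3 principal
    components, a linear stand-in for the nonlinear embedding UMAP computes."""
    centered = activity - activity.mean(axis=0)
    # SVD of the centered data; rows of vt are the principal axes.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:3].T

# Synthetic "population activity": 200 trials of 1,000 neurons whose
# variance is driven by 3 hidden behavioral factors plus a little noise.
rng = np.random.default_rng(0)
factors = rng.normal(size=(200, 3))
mixing = rng.normal(size=(3, 1000))
activity = factors @ mixing + 0.1 * rng.normal(size=(200, 1000))

embedding = reduce_to_3d(activity)   # shape (200, 3)
```

When a few latent factors (here, hidden “behaviors”) drive the population, almost all of the 1,000-dimensional variance survives the squeeze into three dimensions, which is why the clusters become visible.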
Bradlow: That was exactly my next question. Can you take these things and just smash them down to a small number of dimensions?
Platt: You can. And it’s kind of shocking, but also pretty amazing. I think that maybe what that tells us is that even when dealing with the complexity of the world, which is rich and complex, that the brain finds very efficient answers, right? It has had hundreds of millions of years of opportunity to do that, and I think that that’s what we’re seeing.
Bradlow: Maybe in the last couple of minutes we have, can you tell me about the applications? Michael, I know you teach a course in Brain Science for Business. That may not be the exact title, but it’s probably pretty close. And Zab, I know you teach a course in Visual Marketing. That one I know is exactly right. Could you guys give me a sense of the big application areas of neuroscience/AI in business today that you are seeing? What are the ones that excite you the most?
Platt: Well, my course is really a sort of gateway course. Think of it that way. It’s sort of soup to nuts: everything you need to know, but also somewhat idiosyncratic. What are all the different application areas where I think neuroscience either is already having an impact or will have an impact?
Where it’s already having an impact is in marketing brand strategy. That is, at this point, a no-brainer. If you are not collecting neurodata of some sort, you’re leaving high-quality data on the floor. You would make better ads. You would develop better brands. You’d position them better. We demonstrate that over and over and over. You can turn that crank and just do a much better job, throw away less money.
The areas where I think things start to get really interesting and exciting are in places like HR and management. There, a more precise, scientific, objective understanding of people, their individual talents, traits, and motivators, and of what it takes to be really good at, for example, whatever job they are aspiring to, can help to make that match and identify the training, development, et cetera, that can get you from here to there.
We know companies waste tons of money, tons of time on this. Churn is huge. That’s frustrating for employees. They’re unhappy. I think it’s an opportunity for a win/win. The same thing for teams, right? What you can do for individuals is more complicated than teams, but we can absolutely do that, too.
Bradlow: Zab, I know you guys have an upcoming conference. My guess is this podcast will not necessarily go up before it, but there will be results from it. There will be video from it. There will probably be summaries of it, all on the Wharton Neuroscience Initiative website. What’s happening at the upcoming conference? It’s happening this very Friday.
Johnson: It is. It’s Friday, Nov. 3, and this year’s theme is brain capital: thinking about all of the cognitive skills. We think about that as emotion and creativity and what is classically thought of as cognition. What’s necessary, actually, to equip people to be productive members of society across the entire lifespan? And how can we devise strategies, economic strategies, to really leverage that — sort of like a lunar mission, but now for cognition?
There is one segment of the day’s programming that takes a deep dive into this idea that AI-human interaction is coming fast, and that this is really a moment to seize, to think about both the ethics and the optimization of what those new teaming structures are going to look like. How can we really equip the human agent to do and live better, given this new role of AI that’s coming? But we’re also diving into other aspects of brain development, in children and in aging, thinking about brain capital across the entire lifespan, and how even early childhood shapes better cognitive endpoints and more productivity across the lifespan, which will help economies and businesses thrive, and individuals to thrive. And probably build better protections against mental illness and mental health challenges.
Bradlow: Well, Michael, in the last 30 seconds or so that we have: if we were all sitting here 10 years from now, what are we looking back on and saying? Whether it’s the intersection of AI and neuroscience, or AI and humans, what kinds of problems do you think get solved in the next 10 years that we were just not capable of solving before? Whether it’s, as you said, because of more data or better servers. What are the big frontiers in your world over the next 10 years?
Platt: I think we are hopefully going to see significant advances on a lot of today’s issues where brains go awry, right? Whether that’s depression, whether that’s neurodegenerative disorders, whether that’s the sort of despair that we see across the population. A lot of it is going to be technology-driven. AI is going to be a huge force for good in this, I think, in terms of helping us come up with more creative ideas and helping us select amongst those ideas. And then it’s going to be really important for translating them into real solutions, too.
Bradlow: Well, I’m getting older by the minute, so I’m counting on you. I’m counting on you both. This has been the AI and Neuroscience edition of the AI podcast series here at the Wharton School. Again, I’m Eric Bradlow, professor of marketing and statistics, and vice dean of analytics. I’d like to thank my guests, Michael Platt and Zab Johnson, for our episode today.
Johnson: Thank you.
Platt: Thanks, Eric.