Wharton professor Bob Meyer and Wacai co-founder and president Roger Gu join Eric Bradlow, vice dean of Analytics at Wharton, to discuss how AI is changing industries and organizations, touching on real-life applications like robo-advisors, customer engagement, marketing, and more. This interview is part of a special 10-part series called “AI in Focus.”

Watch the video or read the full transcript below.


Eric Bradlow:  Welcome to the next installment of the Analytics at Wharton series, focused on artificial intelligence. I’m Eric Bradlow, professor of marketing and statistics here at the Wharton School. I’m also vice dean of Analytics at Wharton.

We think one of the important applications and areas that we, as a business school, should focus on is what we’re calling today’s session: AI in Action. I can think of no two better people to speak to us about this than number one, my colleague Bob Meyer, who’ll be joining us. Bob is the Frederick H. Ecker MetLife Insurance Professor. He’s also the co-director of the Wharton Impact of Technology Initiative. So first Bob, welcome to our podcast.

Robert Meyer:  Thank you. It’s good to be here.

Bradlow:  And then second is Mr. Roger Gu. Roger is the co-founder and president of Wacai, a prominent independent mobile-based platform for comprehensive wealth management in China. Roger has a career spanning multiple decades, both in the United States and in China. I’m also very proud to have Roger not only as a friend, but as a valued member of the Analytics at Wharton Advisory Board. So Roger, welcome to our podcast.

Roger Gu:  Thanks for having me, Eric.

Bradlow:  Bob, let me start with you. You do a lot of work on the impact of technology, broadly defined, on human behavior. And of course, employees are humans, too. So could you talk about, from your perspective, some interesting uses of artificial intelligence in companies today, whether by employees themselves or by firms, and the impact you think it’s having?

Meyer:  In many respects, when did it all start? How long have we been using it? I think you have to go back decades: different kinds of artificial intelligence have basically been an integral part of companies forever. I like to say I was on the ground floor. When I first started my career, I was at Carnegie Mellon University, and I used to play poker with some people from the Computer Science Department. One of their complaints was that they had to walk all the way down the hallway to find out what was in the Coke machine, because they would go down there and find it was empty. So they built one of the very first uses of RFID technology, programming the Coke machine so they could sit at their computers and find out whether it was empty, whether it was worth the trip.

Of course today, the number of internet-connected devices is about three times the world’s population, so it has basically been integrated throughout every single function of a business, particularly in manufacturing, consumer use, and so forth. I think one of the things happening today is that we’re shifting from artificial intelligence as a way of looking up information and processing data to actually generating new knowledge. And so one of the challenges for a lot of employees, if you’re in advertising, say, is: am I no longer going to be needed to generate advertising copy when I can just throw it into ChatGPT, and it will generate the advertisement?

Bradlow:  One of the big themes we’ve talked about in this series, and I agree, is that AI has been around for a long time; I was a Ph.D. student almost 30 years ago. Predictive AI and the use of AI as a data source, whether it’s computer vision, sound, et cetera, has been around. As you pointed out, it’s the generative AI part that has people really excited today.

Meyer:  Right.

Bradlow:  So Roger, you have an actual company, Wacai, that does work in this area. Could you elaborate on some specific areas where your organization is implementing AI today?

Gu:  Yes, sure. Initially we started with, as you said, predictive AI, because on our platform, people put their financial account information: their bank accounts, credit cards, insurance, retirement plans. So we have a lot of what’s called “structured data.” We would use the data to improve our user experience and also help our partners sell their products. But as time moved on, we got unstructured data, like voice and images, as you mentioned. And in the last few years, generative AI came along.

It’s quite interesting. We launched a business called the Online Financial Literacy Program about three years ago, when people were locked down during the COVID period and e-commerce took off like crazy. So far, we have about 3 million people who have paid for our wealth management courses.

Large language models and GPT-3 emerged about two years ago, I think, and we took notice. But it was not until last November, when ChatGPT and GPT-3.5 came along, that it hit us at the right time, because back then we were hitting a real bottleneck: we could not hire our teaching assistants fast enough or train them fast enough. So we turned to the models, and generative AI helped quite a bit. For one, it is a kind of helping hand for interactions, making them more customized and reaching deeper, and there is automatic content generation.

So we developed a robo-TA for the traditional instruction, exercise, test cycle. It tracks each individual student, and when a test is done, it tells them what they did wrong and helps them review the content. The content is tailor-made individually for each knowledge point they missed. The slides are generated on the spot, and people can work through them interactively until they click the “I Understand” button. So it has been very helpful.

We also have a daily financial news and analysis program. Today, I think more than 90% of that content is generated by the large language model, by generative AI. Our research analysts do only a very brief review and then click the “Publish” button, so it helps us quite a lot. I think over 80% of interactions are performed by AI today, rather than by human beings.

Bradlow:  Bob, could you talk about this, given that you’re one of the co-directors of AI at Wharton and of the Wharton Impact of Technology Initiative? How do humans, whether learners as in the case Roger was talking about, employees, or respondents in studies, tend to respond when they know something is generated by an AI engine? We’ll get back to Roger in a second. How do consumers tend to respond to the difference between AI-generated and human-generated content?

Meyer:  Yes, that’s very interesting. There has been an increasing amount of work on that, naturally. One of the challenges right now: there was a time when you could tell, for example in a text interaction with a service person, that you were dealing with a robot. And people didn’t like that. Now, it’s very, very difficult to tell. One of the issues in online advertising is deepfakes, where you basically cannot tell the difference. So that represents an ethical issue. And certainly, as you might expect, people don’t like it when they think they’re being fooled; there’s some evidence of that.

Another area that we’re looking into, that a colleague of mine is looking into, is this: one of the things generative AI is doing is synthesizing information and offering summaries and advice in task domains where you used to do it manually. For example, if you needed to know something about a topic, you would go to Google, go through bunches of sites, and you were left to do the synthesis yourself and form your own conclusion.

Now you can just go to Bing or ChatGPT and ask, “How do you do this?” “What’s your best advice for this?” And it will essentially do all that work for you, synthesize it, and give you an answer. What we’re finding, and it’s early in the process, is that people don’t necessarily like that all that much. In some sense, you feel you have less ownership over the answer. So right now one of the big, outstanding questions is: how much intelligence do people want? The reality is, just as you don’t necessarily want to order all your meals out, sometimes you want to cook them yourself, there are going to be some domains where people are more trusting and feel more ownership if they gather the information themselves, rather than having a computer do it, even if the computer’s advice may actually be a little bit better.

Bradlow:  So Roger, Bob’s response is a perfect segue to my next question to you. As the president of a large company that’s impacting millions of not only learners, but also investors, people doing their private wealth management, how do you decide what to assign to an AI engine, whether it’s an AI chatbot or an AI automatic grader or an AI engine that gives feedback, and what do you leave to humans? Is it purely one of scale? Is it one of finance? How do you think about what to assign to whom?

Gu:  In practice, I think it’s an emerging process, a grayscale kind of segmentation. We do have a lot of content generated by AI, and people know it’s AI. For example, in a Q&A session, when general questions are asked of the AI, people know they are dealing with AI. That’s level 1. At level 2, people think they are chatting with a live person, but actually this live person is largely assisted by the AI. For example, a piece of news analysis: our research analysts are not that powerful at gathering so much information on time by themselves; they are really empowered by the AI. And at the highest level, people upgrade to the ultimate tier. They become members, paying a membership fee for our advisory services, and they appreciate not only the assistance of AI, which definitely helps a lot. But I don’t think at this stage AI can completely replace human beings….

I think it’s an emerging, ongoing process, but looking back, in less than a year it has made tremendous progress.

Bradlow:  So Bob, we always try to say, “It’s not humans or AI, it’s humans and AI.” How do you see that partnership evolving? And you could imagine, if I were an employee being strategic, that if I prove to the company that AI can replace me, that may not be great, either. So how do you see those two interacting?

Meyer:  I think Roger is spot-on in saying that the real challenge is figuring out the optimal blend. What are those things? A couple of months ago, we ran a generative AI conference out in San Francisco, and one of the big topics there was trying to figure out how good the best generative AI is at generating creative solutions to problems. The emerging consensus seems to be that if you let ChatGPT work on a creative solution to a problem, it is much better at bringing up the low end of human ability. So if people are not creative and not good problem-solvers, you definitely want the machines stepping in, because they do a much better job.

On the other hand, it tends to make the solutions it comes up with seem very similar. And then there’s the high end: people who are particularly skilled in problem-solving. In that case, the best of the human judgments tend to be better, as judged by outside observers, than ChatGPT’s solutions. So the task, I think consistent with what Roger is talking about, is to identify the people within the organization who really do have these very special creativity skills, and you want to set them free, maybe working with the tools, but you don’t want to replace them.

Bradlow:  It’s actually an interesting theory, which I’m sure people will test over time: does this AI engine actually inhibit creativity on the part of the humans? I always tell Ph.D. students (and then Roger, I have another question for you) that the last thing I do when I’m trying to come up with a creative idea is read someone else’s paper or synopsis, because it does not help me come up with a creative idea at all.

So Roger, let me ask you: robo-advisory services seem to me a much higher-stakes type of decision than someone trying to learn financial literacy. To use the language we use in academia all the time, there’s a very different loss function between an AI engine making a portfolio allocation recommendation to me, after which I go bankrupt, and an AI engine getting a literacy test question wrong, where I say, “All right, that’s not great, but it’s not the end of the world.” Bob was talking about what’s called “employee heterogeneity.” How do you think about task-importance heterogeneity and the role of AI, given high-stakes versus low-stakes types of decisions?

Gu:  If I take robo-advisory, that’s high-stakes. It’s interesting: this new phenomenon called generative AI tends to be very creative, especially GPT; sometimes it’s imaginary. Robo-advisory has two sides. One side is the customer profile. The other side is the assets, understanding asset characteristics. On that side, asset characteristics, we want to be very careful about AI. It’s still classical AI: classical statistics, partial differentiation, some sort of recurrent neural network, just understanding the sigmas, the betas, the gammas, making sure the efficient frontier part is still hooked in. But the other side, the customer needs and life objectives, is where AI can be very suggestive, because in the past, the classical way of doing these sorts of advisory services was to ask people about your personal assets and liabilities, your income, how many kids you have, when they are going to school, when you want to retire, and then come up with tailor-made solutions.

But now with generative AI, this can go deeper. You can have conversations about life objectives. Retirement, of course, but you also have, for example, your wedding anniversary. And the advisory for a wedding anniversary, the risk profile at that horizon, can be very different from a 20-year retirement plan. For a wedding anniversary, you might take on risk, playing with derivatives or leveraged trading. If it works well, you go skiing in Switzerland. If it does not work, you always have Disneyland in Florida.

So I think with the help of AI, this can go a lot deeper and be a lot more creative…. It definitely adds value, and you need to understand where to use it correctly, in a prudent fashion.
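[Editor’s note: Gu’s “sigmas, betas, gammas” and the “efficient frontier” refer to classical mean-variance portfolio theory on the asset side. As a purely illustrative sketch (the three assets and all numbers below are invented, not from Wacai), the global minimum-variance portfolio on that frontier has a simple closed form, w = Σ⁻¹1 / (1ᵀΣ⁻¹1):]

```python
import numpy as np

# Hypothetical annualized return covariance matrix for three asset classes
# (illustrative numbers only).
cov = np.array([
    [0.040, 0.006, 0.002],   # equities
    [0.006, 0.010, 0.001],   # bonds
    [0.002, 0.001, 0.020],   # commodities
])

def min_variance_weights(cov: np.ndarray) -> np.ndarray:
    """Global minimum-variance portfolio: w = inv(cov) @ 1 / (1' inv(cov) 1)."""
    inv = np.linalg.inv(cov)
    ones = np.ones(cov.shape[0])
    w = inv @ ones
    return w / w.sum()

w = min_variance_weights(cov)
print("weights:", w)                 # fully invested: weights sum to 1
print("variance:", w @ cov @ w)      # lowest achievable portfolio variance
```

[This is the “classical statistics” endpoint Gu wants to keep carefully hooked in; the generative-AI layer he describes sits on the other side, eliciting the customer’s objectives that feed such a model.]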

Meyer:  Roger, I have a question for you. Do you ever worry a little bit, particularly in the context of financial advising, that the tool becomes so good that people put too much trust in it? If you’re dealing with a human advisor, you know you’re dealing with a human, and you know that humans are fallible. So if a human advisor tells you this is what you should be doing with your money, this is what you should be doing for retirement, you’ll follow that advice, but you know it’s potentially somewhat fallible.

But on the other hand, if you have a very advanced computer tool, even if it’s sold as being optimized on whatever basis, is there a worry that people will go to the other extreme of being too trusting of it?

Gu:  Well, Bob, I’m not worried. People believe what they believe. They can believe a real person; they can believe an algorithm, which is also humanized and generates content. If the results are good, then they believe in it. And the beauty here is that you don’t have one key opinion leader; you have dozens, because different AIs have different characteristics…. What I do worry about is the regulatory piece, because generative AI, like I said before, tends to be more creative, but there are rules and regulations about what you can say and what you cannot say. For example, with certain kinds of licenses, you’re not allowed to recommend individual stocks.

Or maybe you are not allowed to recommend something to people not at that risk rating. So with this part, you’ve got to be very careful. If you ever cross the line, it is the companies, the people behind the AI, who are held responsible. So I want to be very careful about that. Fortunately, though, the lines are not yet very clear from the regulators, because it’s a new thing for them, as well.

Bradlow:  So Bob, you and I are both obviously marketing professors. You mentioned ad creation as one area. What do you see, and then I’m going to ask Roger about his area in fintech, as the major application areas where AI is going to be used in our home field of marketing, not just the ones we as academics will study?

Meyer:  I think you mentioned that there are two different cases of it. Certainly there’s predictive AI; that’s been around forever, in product design and so forth. Every time you go to Amazon, you’re seeing AI: you’re seeing products recommended for you, and that’s been around for a while. Presumably you’ll see improvements on that, better customization, where you go, “Whoa, how did it know that’s exactly what I wanted?” Some people may find that scary, but other people might say, “That’s exactly the product I want.”

The other side, the creativity part, is something we don’t quite know yet: whether we’ll fully turn over the creative process, advertising design, strategy formulation. The potential is certainly there for large language models and generative AI as a whole to generate all of this. But will that end up producing the out-of-the-box ad campaign that really makes a difference for a company, as opposed to a whole bunch of advertising that is all the same, because effectively it’s discovering the mass rather than the tail? Usually we like to focus more on that breakthrough, creative idea, and I’m not so sure; the verdict is very much out on whether generative AI can produce that kind of breakthrough.

Bradlow:  And Roger, how about in fintech? What do you see as the biggest uses of AI today?

Gu:  Just to follow on from your discussion of marketing: I’m actually very optimistic, just as AI helped discover new DNA patterns and new drugs. And with AlphaGo, AI discovered new ways of playing Go….

So every week, or at every point in time, we usually have what we call marketing plans going out. Which ones are going to stick? You don’t know. But with generative AI, you can create those creatives very easily. It’s like dispersing different patterns and afterwards figuring out the DNA of those marketing plans. Maybe it’s the way you show a pet versus a young boy. Maybe it’s the feeling of retirement age, something like that. But the underlying elements, what I call the DNA of marketing materials, can now be tried, rated, and discovered very well.

And in the whole space of digital marketing, I think it takes things to the next level. That’s on the one hand, because every company needs to acquire customers and manage the customer experience, and AI can help a lot there. Furthermore, with ChatGPT, with generative AI and large language models, you’ve heard that they’re not as good at numerical calculations. But now with plug-ins, you can combine those tools with traditional AI, and on the asset management side, I can see new AI-powered trading algorithms, very different from the traditional ones, competing on a par with traditional hedge fund programs. So I think it’s a big thing that’s coming. Right now, we’re only scratching the surface.

Bradlow:  We only have about a minute left. So Bob, in 30 seconds or so: if we’re sitting here ten years from now, what will we be talking about that has happened over those ten years?

Meyer:  That’s an awesome question. I have no idea whatsoever, and I think from my perspective, as a researcher, this is the most exciting time to be alive because basically what we’re on the precipice of is just really very fundamental transformations as to how people get information, how people generate information. And we’re just beginning to understand how this is affecting society. So to me, there is just so much that we have to learn as researchers going forward, so it’s an awesome time to be here.

Bradlow:  And Roger, from your point of view, how do you think about the business world, investing — what do you think are going to be the big, breakthrough issues in AI in the next few years?

Gu:  Well, I think companies have to guard their position in this big AI game. It’s like a big tree, right? You have OpenAI, Google; those guys are the roots. And you have maybe fintech players like us…. They are like the trunks. And there are many, many leaf applications. So whether you are a young entrepreneur just getting into the game, or an established company, it’s very important to understand the technology trends and figure out your position in your field. Keeping an open mind will be very, very helpful. Things will change, and we have to adapt.