If the technology industry can boast a true renaissance man, it is Jaron Lanier. A polymathic computer scientist, composer, visual artist and author with a mass of dreadlocks tumbling down his back, the 53-year-old Lanier first found fame by popularizing the term “virtual reality” (VR) in the early 1980s and founding VPL Research to develop VR products. He led the team that developed the first multi-person virtual worlds using head-mounted displays, the first VR avatars and the first software platform for immersive VR applications. The company’s patent portfolio was eventually sold to Sun Microsystems.

From 1997 to 2001, Lanier was chief scientist of Advanced Network and Services, and served as lead scientist of the National Tele-immersion Initiative, a coalition of research universities that in 2000 demonstrated the first prototypes of tele-immersion. From 2001 to 2004 he was visiting scientist at Silicon Graphics, where he further worked on telepresence and tele-immersion. He was scholar at large at Microsoft from 2006 to 2009, and has been a partner architect at Microsoft Research since then.

Today, Lanier is best known as the author of two influential books on the future of the digital world in which we live: 2010’s best-selling You Are Not a Gadget, and the recently released Who Owns the Future? Both have garnered rave reviews (You Are Not a Gadget was named one of the ten best books of 2010 by The New York Times) and earned Lanier a reputation as the technology industry’s conscience — a role that has won him friends and foes in equal measure among the digerati. Lanier’s concern, at its simplest, is that we are building a digital future in which a small number of companies and individuals garner great wealth, while the economy as a whole starts to shrink, taking with it jobs, the middle classes and what he terms “economic dignity.” Lanier believes we can avoid that fate, but only if we start remunerating people for the digital assets (from personal data to designs for use by 3D printers) that we currently give away for nothing. Information may want to be free, but Lanier believes the world would be better served if it were affordable instead.

Lanier is far more than a scientist. He owns (and plays) one of the world’s largest collections of rare and ancient musical instruments, and often demonstrates them during his many speeches. His “Symphony for Amelia” premiered in October 2010 with the Bach Festival Orchestra of Winter Park, Florida — one of numerous commissions over the past couple of decades. Lanier also pioneered the use of VR in musical stage performance with his band Chromatophoria, which has played venues such as the Montreux Jazz Festival; in addition, he has performed with artists as diverse as Yoko Ono, Philip Glass, Ornette Coleman, Terry Riley and Funkadelic’s George Clinton. Lanier’s paintings and drawings have been exhibited in museums and galleries in the U.S. and Europe, with his first one-man show taking place at the Danish Museum for Modern Art in Roskilde in 1997. He also helped dream up the gadgets and scenarios for Steven Spielberg’s 2002 science-fiction movie Minority Report.

Knowledge at Wharton spoke with Lanier at Microsoft’s headquarters in Redmond, WA, where he was visiting from his base in Northern California, about who he believes will own the future.

An edited transcript of that conversation follows.

Knowledge at Wharton: You laid the groundwork for Who Owns the Future? in your previous book, You Are Not a Gadget. What persuaded you of the need for a second book?

Lanier: What persuaded me of the need for a second book is the discordance between what I find empirically in the world and the ideas about policy that everyone seems to return to as if there’s no alternative. We’re locked into a continued belief that investing in a particular kind of information-technology venture is good for society, when actually these often seem to be pulling society apart and creating ever more extreme income inequalities. We seem unable to connect the dots between the continued dysfunction of the financial sector, even as it expands profitability, and the rise of information technology.

What I had written about this topic before was a bit more impressionistic, and even the new book is still a relatively early approach to a more complete understanding of these topics, but I felt I needed to at least take another step toward trying to interpret how particular digital trends are impacting our economy, society and politics. But as with the previous book, I think I raised yet more questions that will require yet more work. I don’t think it’s complete.

Knowledge at Wharton: Why is it that we all appear so willing to give away the hardest currency of the information economy — our personal data — either for free or in exchange for digital trivia?

Lanier: People are relatively willing to accept suggestions so long as what they are asked to do is relatively easy and pleasant. I think technologists could just as well have proposed a different model of the digital economy, and it would have been accepted just as well. I don’t think there’s anything particularly significant about people being willing to give something a try.

The challenge is that people still don’t understand the value of their data. They have been infused with the idea that the ubiquitous fashionable arrangement, wherein you obtain free services or so-called bargains in exchange for personal data, is a fair trade. But it isn’t, because you’re not a first-class participant in the transaction. By first-class participant, I mean a party to a negotiation where everyone has roughly the same ability to bargain, so that when they do bargain, the result is a fair transaction in an open market economy. But if you’re in a structurally subordinate position, from which you have to accept whatever is offered, then you give much greater latitude and power to whoever has your data than you get in exchange.

I’m currently planning a research project to better understand the value of data. There are many ways to do this, but one is to look at loyalty cards, frequent flier memberships, and the like. There can be an argument about the difference between gathering intelligence about customers versus locking them in, but I argue the two benefits are deeply similar. The differential between using a loyalty card and not using the card for a given person in the course of a year is a measure of one small portion of the value of that person’s information. I strongly suspect that once we measure the cumulative value of personal data using techniques like this, we’ll see that it’s getting more valuable each year. I don’t know for sure, but that’s what I hypothesize.

If I’m right, then one interesting question is, will the value of information from a typical person ever transcend the poverty line? I think we’re headed towards that point. And if that does happen, then we have the potential for a new kind of society that escapes the bounds of the old debates between “Left” and “Right.” Instead, there could be an entirely new sort of more complete market that actually creates stable social security in an organic way. The possibility fascinates me. It is not irrational to imagine this future.

Knowledge at Wharton: How much blame for data inequities do you ascribe to social media?

Lanier: I don’t think social media per se is to blame — rather it’s the use of the consumer-facing Internet to achieve a kind of extreme income concentration in a way that’s similar to what has happened in finance during the last 20-25 years. I don’t think there was any evil scheme in Silicon Valley to make this happen. For example, Google didn’t have any roadmap from the start that said, wow, if we collect everybody’s personal information we can gradually be in a position to tax the world for access to transactions, or have an ability to manipulate outcomes using the power of statistical calculations on big data.

But in a sense, the basic scheme was foretold at the start of computing, when terms such as cybernetics were used more often, and people like Norbert Wiener were discussing the potential of information systems to let people manipulate each other, and how this could create power imbalances. Silicon Valley rediscovered these old thoughts, but in a way that was too immediately profitable to allow for self-reflection. You see a very similar pattern emerging in Silicon Valley to what happened in finance: the use of large-scale computation to gather enough data to gain an information advantage over whoever has a lesser computer.

I don’t know if you’ve read the early e-mails of [Facebook CEO] Mark Zuckerberg from when he was a student starting out, where he says he just can’t believe that people are giving him all their information. He seems utterly astonished that people are doing it. The reason his fellow students did it was simply that on a first pass most people are pretty trusting and good-natured and pretty game to try things. But what happens with social media is that they quickly become subject to a vaguely “blackmail-like” cycle that keeps them engaged and locked in, because if you don’t play the game intensely on Facebook, your reputation is at stake. I’m astonished at the energy people put into basically addressing this fear that the way other people perceive them, their being in the world, how they might be remembered, will be undermined unless they put all this labor into interacting.

Knowledge at Wharton: The early years of any economic transition are often accompanied by structural discontinuities, but in the case of data-driven economies, they seem particularly profound. Why is that?

Lanier: Because the network-effect aspect overwhelms everything else, and network effects can build very quickly. The tulip mania of the 1600s happened fast and crashed fast, but in a world of digital networking, things happen even faster, yet the data doesn’t wilt away like a tulip. Network lock-in is persistent. When markets are disrupted by modern digital entrepreneurs, they don’t rebalance themselves remotely as quickly or easily as they did in earlier times.

Another problem is that you can’t automate common sense — but if you have a lot of data on a network and you use statistical algorithms to process it, you will create an illusion of having done just that. It’s an easy illusion to create.

We don’t currently have a complete enough scientific understanding of how the brain works or how common sense works. But we talk as if we do. We’re always talking about how we’ve implemented artificial intelligence or how we have created a so-called smart algorithm. But really we’re kidding ourselves. This is very hard for people who run big computers to admit. They always treasure the illusion that they are working with completed science — which they aren’t — and that they have already attained cosmic mastery of all possible cognition.

The mistake that happens again and again in these systems is that somebody believes they have the ultimate crystal ball because their statistical correlation algorithms are predictive, which they are, to some extent, for the very simple reason that the world isn’t entirely chaotic and random. Just as a matter of definition, big data algorithms will be predictive, and the bigger your computer and the more fine-tuned your feedback system, the more predictive they will be.

But, also by definition, you will inevitably hit a stage where statistics fail to predict change, because they don’t represent causal structure. That’s the point at which companies such as Long-Term Capital Management fail, mortgage schemes fail, high-frequency trading fails — and that’s where companies such as Google and Facebook would fail if we gave them a chance. However, if you construct an entire society around supporting the illusion that the falsehood of ultimate cognition is actually true, then you can sustain the illusion even longer. But eventually it will collapse.

The core problem is this idea of automating our own thought, our own responsibility, so we don’t have to take it anymore. That’s the core problem of this way of using computing. It doesn’t mean there’s anything wrong with networks or computing. Fundamentally, it just means that this is a particularly bad way of using them that’s highly seductive in the short term, and it’s a pattern we fall into again and again.

Knowledge at Wharton: If the “siren servers” you describe dash many of our jobs against the rocks, what future do you see for data-driven economies, say, 20 years down the road?

Lanier: I think a data-driven economy could be a wonderful thing. The primary missing ingredient that would make it wonderful is to have more complete and honest accounting. That single change could create a data-driven economy that would be both sustainable and creative.

Right now we don’t have that. For example, when you translate a document automatically using, say, Google or Microsoft, it seems like this magic thing. Somebody can translate my document between languages and it’s free, so isn’t that great? But the truth is it’s never like that. There are no nonhuman sources of value to plug into digital networks. There are no angels or aliens showing up to fill the network with the bits that make it function.

What actually happens is that the companies that do automatic translation have scraped the Internet for preexisting translations. The result is just a statistical pastiche of preexisting translations done by real people. There are all these real people behind the curtain, and they aren’t being paid for their work.

Now, you might say that if you sent each of those people a few pennies for their contribution to the corpus of data, it would just be a negligible amount, and that would be true for any given one. But if you count up all the different auto-translators and all the different times that they happen, there would be many, many thousands of small transactions. And that would add up to something significant that would reflect the actual value human translators had contributed. In a sense, we would initiate a universal royalty scheme.
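The arithmetic behind this argument is easy to sketch. The sketch below is purely illustrative: the per-use royalty, the number of translations and the function name are hypothetical figures chosen to show how negligible individual payments could compound, not numbers from the interview.

```python
# Hypothetical back-of-the-envelope sketch of a universal royalty
# scheme for machine translation. All figures are illustrative.

def yearly_royalty(translations_per_year, royalty_per_use, uses_per_translation):
    """Total royalties one human translator earns in a year.

    translations_per_year: automatic translations that draw on this
        person's past work
    royalty_per_use: payment per use, in dollars (a fraction of a cent)
    uses_per_translation: how many of the person's examples a single
        automatic translation draws on
    """
    return translations_per_year * royalty_per_use * uses_per_translation

# A tenth of a cent for a single use is negligible in isolation...
single_payment = yearly_royalty(1, 0.001, 1)        # $0.001

# ...but summed across millions of automatic translations, the same
# rate yields a meaningful annual income for the contributor.
annual_total = yearly_royalty(5_000_000, 0.001, 1)  # $5,000.0

print(f"one use: ${single_payment}, one year: ${annual_total:,.0f}")
```

The point of the sketch is only that the scheme's viability is a question of transaction volume, not of the size of any single payment.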

There are many, many questions that this opens up, and understandably, many people are skeptical that it would be worth it or even plausible to track all of these chains of value. But in terms of technology, it can absolutely be done. Some of the people who worry it would be too complicated have no idea how bizarrely ad hoc and complex what we already do today is. A complete information economy with honest accounting would actually be a simplification, and it would be more efficient than what we do today. The engineering doesn’t scare me, and the cost doesn’t scare me. What is very difficult to describe, and the puzzle that challenges me most, is this: what’s the scenario that would transition us from the system we already have, where there’s a huge imbalance, to a system along the lines I’ve just described?

I suspect the most likely scenario is that some new platform for network value will come along — and clearly many new ones will come along. Maybe it could be 3D printing, given that the number of companies in that industry is still quite small. Perhaps they could get together and say, hey, let’s see what happens if people are paid for their 3D designs instead of adopting the Linux model, which is what happens today. If that worked out, and a lot of people were doing pretty well from creating designs for use on 3D printers and the value of the industry seemed to be soaring as a result, maybe that model would be emulated by other sectors. In my book I mention other possible future platforms that could present this opportunity, such as networked artificial glands.

There are other mechanisms we can imagine. One of the things I’ve been talking to colleagues like Noam Nisan about lately is trying to figure out what very efficient, constant, incremental, large-scale collective bargaining on a digital network would be like. If there are a bunch of people contributing to a corpus, they would set their price to some sort of median rather than a race to the bottom. I believe mechanisms like that can be brought into existence.

Knowledge at Wharton: Do you think part of the problem is that many of us have already given away the crown jewels, at least in terms of our personal data?

Lanier: The transition issue is by far the hardest one. But, you know, there have been transitions in the past, and I think we can achieve transitions in the future.

I’ll give you one example from American history. Originally the American west was viewed as a place where land was free, although that free land was often sort of scammy in the sense that you would only have access to it through a monopolized railroad system. But it bears some similarity to the situation today, and there was a similar romance about it. Everyone ultimately became content that we moved away from free land and monopolized railroads to more of a real economy where more people function as first-class citizens in transactions with each other. So the maturing of the American west is a useful model.

There’s a movie about this that I liked called The Man Who Shot Liberty Valance, a 1962 Western starring James Stewart and John Wayne. It’s about moving from one regime to another — one in which everybody is sort of free, but which favors gunslingers and barons, to one based on justice under the law that works better for everybody. I think the analogy is reasonable.

Equally, there are many transitions that have gone badly, and the model I really dislike is that of outright revolution, because the problem with it is that it’s kind of random what will come next. You know that you will break a lot of stuff, but you don’t know whether you will end up with a system that’s better or worse.

Among technical people, there’s a lot of talk of revolution — everybody wants to be god-like. Everybody wants to be the one who reforms the world, to be the one who oversees the singularity. It reminds me a lot of the people who once said they would oversee the Marxist revolution, and that everything would turn out alright. The problem is that you imagine you will be in control but you won’t. Probably all that will happen is you will hurt a lot of people. This idea that we are barreling toward a singularity and that we’re going to change everything is inherently foolish, immature and cruel. The better idea is trying to construct, however imperfectly, incremental paths that make things better.

Knowledge at Wharton: Even in a world that fairly compensates us for our data, how do we avoid self-commoditization? How can we differentiate our contributions when we will have so little control over them?

Lanier: What I propose in the book is a brutally mathematical system that I’m sure would under-represent reality, as such systems always do, but nonetheless would be pretty straightforward. It’s based on “what-if” calculations. The question is, if my bits had never existed, what would be the value differential for a particular cloud scheme? If I hadn’t provided a translated book from which a lot of examples were taken, what difference would it make to the market value of new cloud-based translations?

If an entrepreneur can’t approximate that kind of what-if calculation, then that means that the output of her cloud algorithm is kind of random or chaotic anyway, and she shouldn’t be making money from it. The digital economy should be set up in such a way that if you can’t calculate the value of corpus contributors, then you will not make money from your own scheme. To the degree that you can attribute value from contributors, that’s the measure of what you yourself should profit on, because the rest of your calculation is random and chaotic. In other words, the rest of the income you can generate is some sort of network-effect lock-in rent, but not real value creation. Digital entrepreneurs should earn percentages of the wealth they help generate for others.
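The “what-if” calculation Lanier describes is essentially a leave-one-out value attribution. A minimal sketch follows, under loud assumptions: the valuation function, the function names and the toy data are all hypothetical stand-ins, since real cloud schemes would need an actual model of market value rather than a simple count of distinct data.

```python
# Sketch of the leave-one-out "what-if" attribution: a contributor's
# share is the drop in a cloud scheme's value if their bits had never
# existed. The valuation function is a hypothetical stand-in.

def corpus_value(corpus):
    """Hypothetical market value of a scheme built on `corpus`:
    here, simply proportional to the amount of distinct data."""
    return 10.0 * len(set(corpus))

def leave_one_out_value(corpus, contributions):
    """Value attributed to each contributor: the differential between
    the scheme's value with and without that contributor's items."""
    total = corpus_value(corpus)
    shares = {}
    for name, items in contributions.items():
        without = [x for x in corpus if x not in items]
        shares[name] = total - corpus_value(without)
    return shares

contributions = {
    "alice": {"doc1", "doc2"},  # contributed two documents
    "bob": {"doc3"},            # contributed one
}
corpus = ["doc1", "doc2", "doc3"]

print(leave_one_out_value(corpus, contributions))
# Removing alice's two documents costs the scheme 20.0 of value;
# removing bob's one document costs 10.0.
```

Under this design, whatever income the entrepreneur earns beyond the sum of these attributed differentials is, in Lanier’s terms, lock-in rent rather than value creation.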

This might seem like a tricky concept, but it’s straightforward. When you make money from some sort of digital network scheme, there’s a mixture of lock-in or network effect versus the actual value you provide. If you want to ask what the difference is, it’s not that hard to determine, because if ordinary people have the mobility not to be locked in, then you can calculate what happens to your price when they leave. Eventually, digital infrastructure businesses would probably earn more conventional levels of business profit — 7% instead of 7,000% [laughs]. The lock-in effect is independent of the value of the corpus that you’re using to calculate whatever it is you do. So the value of individual contributors would cumulatively be the value of the corpus, and what you would do as a business is a value-add on that.

Without even using a regulator, this design for an information economy would ding you for lock-in rent-taking, and the result would be a more productive society. The thing about digital networks is that they can both increase productivity and dramatically decrease it. The difference is whether people are paying money purely for blackmail network effects or for actual value-adds. Paying corpus contributors would provide an acid test for where that line is. This approach provides a way to reduce the role of regulators but still have a regulated digital economy.

Knowledge at Wharton: You envision a “middle-class–oriented” information economy, one in which information isn’t free, but is at least affordable. Where does the working class fit into such an economy?

Lanier: The key thing for me is whether the outcome of people interacting in a digital system is a power-law distribution or a bell curve distribution. That keeps it simple. What I mean when I talk about a middle class is having outcomes that more often look like bell curves than power laws. A power law is a high tower plus a long tail connected by an emaciated neck. So you have a few winners and everybody else is a wannabe. Instagram and American Idol are both like that. A bell curve is the result of a measurement of a population instead of a sorting.
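The contrast between the two distributions is easy to make concrete. The sketch below, which is illustrative only (the distribution parameters and the seed are arbitrary choices, not derived from the book), compares how much of the total outcome the top 1% capture under a power-law sample versus a bell-curve sample.

```python
import random

random.seed(0)
N = 100_000

# Power-law outcomes: a few huge winners and a long tail of wannabes,
# here sampled from a heavy-tailed Pareto distribution.
power_law = sorted((random.paretovariate(1.2) for _ in range(N)), reverse=True)

# Bell-curve outcomes: a strong middle hump, as from measuring a
# population rather than sorting it into winners and losers.
bell = sorted((max(0.0, random.gauss(50, 10)) for _ in range(N)), reverse=True)

def top_1pct_share(outcomes):
    """Fraction of the total held by the top 1% of participants."""
    k = len(outcomes) // 100
    return sum(outcomes[:k]) / sum(outcomes)

print(f"top 1% share, power law:  {top_1pct_share(power_law):.0%}")
print(f"top 1% share, bell curve: {top_1pct_share(bell):.0%}")
```

Under the power-law sample the top 1% hold a large fraction of the total, while under the bell-curve sample their share stays close to the 1% you would expect from a population with a strong middle.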

There’s a fundamental dilemma intrinsic to the math of economies: to the degree that you give individuals in an economy self-determination, with different individual outcomes and people inventing their own lives and so on, you’ll engender a spread of outcomes, in which, by any particular measure, some people will be left behind. Different measures might find different subpopulations left behind. But you’ll have some kind of a distribution in which some people are at the bottom of the distribution.

What I want to point out is that every single way of thinking about a good society or a better society that I’m aware of depends similarly on a strong middle hump in a bell curve, whether we want to call that a middle class or not. If you are an Ayn Rand fan, you have to admit that you can’t have markets without customers. And the customers have to come from the middle or the market won’t sustain itself. Equally, if you’re a government person, you should also want a strong middle. Otherwise, income concentration will corrupt your democratic process, which I think is an issue in the U.S. right now. If you’re a society person, you have to have that strong middle or the society will break into castes, which has happened repeatedly in societies all over the world.

Everybody should want the same curve. As I point out in the book, different kinds of digital network designs will give you bell curves or power laws. If we choose network designs that give us bell curves, then we can have a sustainable digital society. That’s the core idea of the book.

But you were asking who gets left behind. In my view, if we have a true data society and an honest accounting of it, the amazing thing is that just by living, you’re contributing bits to the network. Even someone who is trying to be as unproductive and uninteresting as possible might actually be contributing at least some value to modern network schemes. Sometimes it seems that those people are more active online than other people who are busy doing real work. So it’s very possible that even people at the low end might still find at least a baseline of reasonable livelihood simply from being in a world that needs a lot of bits from people to calculate all kinds of cloud things.

A full-on digital economy might actually start to capture a little bit more value from people, so that even at the bad end of the bell curve you would see a baseline that’s not quite as horrible as abject poverty. Still, a distribution means a distribution, so there will still be people at the bottom. I don’t think you can have a society that has freedom without some mechanism to help whoever ends up at the bottom of the curve.

Knowledge at Wharton: At the heart of your thinking is the concept of “economic dignity.” Why is that so important to a healthy economy, and how does it differ from the goals of previous economic revolutions? After all, dignity was what the unions were fighting for a century ago.

Lanier: Dignity means that there are many participants in the economy functioning as first-class citizens, able to be both buyers and sellers, with enough mobility to actually have a choice. If you don’t achieve that, then you don’t really have a market economy.

I argue in the book that prior to the rise of digital networks, up until the late 20th century, the way the middle block happened — the way we got something resembling a bell curve in the distribution of outcomes in a society — was mostly through special ratcheting mechanisms, which I call levees in the book. These mechanisms all had a bit of an artificial feeling to them, like a taxi medallion or union membership or something similar. There was always some kind of a hump you had to get over that put you into a protected class so that you joined a collective-bargaining position, whether that was explicitly so or not. College education and middle-class mortgages functioned as leaky levees, to a degree.

What digital networks have been doing is walloping all those positions, crashing them down, and that’s been the primary mechanism by which the Internet has been harming the middle classes.

Take the taxi medallion system, which in itself is far from perfect and in some cases completely corrupt, but nonetheless has been the route to the middle class, especially for generations of immigrants. There are now Internet companies like Uber that connect passengers directly with drivers of vehicles for hire. Anyone can spontaneously compete with taxi drivers. What this does is create a race to the bottom, because these companies are choosing a kind of efficiency that creates a power law where whoever runs the Uber computer does very well, but everybody else pushes down each other’s incomes. So although such attacks on levees create efficiency from a certain very short-term perspective, they do so by creating power-law outcomes that undermine the very customer base that can make the market possible. This idea eventually eats itself and self-destructs.

If we could have an organic way of building a bell-curve outcome instead of traditional ad hoc levee systems, we might get a similar result that’s broader, more honest, fairer, and also more durable and less corrupt. I could be wrong, but just looking at the math it does make sense. There’s still more to work out, but fundamentally there’s some potential there.

Knowledge at Wharton: If you were forced to make a prediction, who do you believe will own the future? Are you optimistic or pessimistic?

Lanier: I want to say something about this whole optimism thing, because sometimes people say, “Oh, after I read your book I was depressed.” I feel exactly the opposite. It’s the people who don’t see room for improvement that are the pessimists. The most pessimistic people in the world are the Panglossians, because those are the people who believe we’ve already achieved perfection. I know tons of those in the tech world. There are lots of people in the tech world who believe that what’s going on with the economy and digital technology is as good as it could possibly be, that we’re creating utopia. I don’t see that.

In fact, what has driven me to write the books I’ve written is that when you’re in the tech industry you’re constantly hearing this rhetoric about how we’re creating so much wellbeing and good in the world. When you actually look out there, you see that it’s true for tiny spikes of people who are doing well, but that overall, the middle classes are sinking, overall there are huge looming problems. I just can’t accept that disconnect. I think the Panglossian approach is the ultimate nihilism. It’s candy-coated nihilism maybe, but in a way it’s more deceptive than just outright nihilism.

Whereas the optimists are those who say, wow, I see problems, but I think I can start to see glimmers of solutions even though they are hard. Those are the real optimists.

I don’t know how long it will take to get from here to there. I don’t know if it will be this century or three centuries from now. I don’t know when it will be, but the math is simple: you have to have strong middle classes if you want to have individuals with agency interacting as the principle of your civilization. That math is unassailable. There’s no other way to have a sustainable civilization.