This week, Facebook CEO Mark Zuckerberg endured hours of grilling from two Congressional committees to answer questions on why his company did not do a better job protecting the privacy of user data from unauthorized access by Cambridge Analytica, a consulting and data analytics company.

In the televised, two-day testimony, viewers saw a grim-faced and often apologetic Zuckerberg being peppered by questions from lawmakers, many of whom struggled with technical terms or did not seem to know how aspects of Facebook worked.

Sen. Orrin Hatch (R-Utah), for example, asked Zuckerberg, “How do you sustain a business model in which users don’t pay for your services?” With a straight face, the Facebook CEO said, “Senator, we run ads.” Social media had a field day lampooning members of Congress with cheeky memes and YouTube video clips.

So, what would have been some better lines of questioning than those pursued by members of Congress? Here are some questions that Wharton professors say they would have asked Zuckerberg in that situation.

Jonah Berger, professor of marketing:

  • How can we use social media to improve well-being, both for individuals and for society more broadly?

Eric Bradlow, chair of the Wharton marketing department and director of the Wharton Customer Analytics Initiative:

  • Does Facebook provide differential access to customer-level data depending on the organization using the information? Does Facebook always know who that end user is?
  • What ability does Facebook have to scrape information from ads and posted feeds on its platform in order to assess the potential for fraudulent information or large-scale hacking?
  • How does Facebook balance open access and its hyper-targeted advertising profit motive with customer privacy?

Eric Clemons, professor of operations, information and decisions:

  • What percentage of your revenues come from ‘paid likes’ in which a user is paid by a seller to ‘like’ a product for his or her friends?
  • You benefited financially from Cambridge Analytica’s clients’ targeting of fake news and inflammatory posts. Why did you wait years to report what Cambridge Analytica was doing?
  • Are you a publisher, with First Amendment rights to express your own opinions? Or are you a platform, with no obligation to oversee content?

Peter Fader, professor of marketing:

  • You mentioned a possible premium-service subscription model in Facebook’s future. This could overcome many of the issues that have created the need for these hearings, and provide other benefits for Facebook’s users and shareholders. Why hasn’t this been a higher priority?

Kartik Hosanagar, professor of operations, information and decisions:

  • How can users better understand what Facebook knows about them? What tools has Facebook built to allow users to understand the behavioral profiles Facebook has about them?

  • What has Facebook done to audit its privacy, security and related practices? Given that Facebook acknowledges it does not own user data, why aren’t the audit reports public?
  • Who owns the data created on Facebook? What rights do users have in deleting it or controlling who gets access to their data?
  • What is a suitable fine for Facebook to pay when it violates its own agreements?

Eric Orts, professor of legal studies and business ethics:

  • Our national intelligence services have confirmed that the Russians (and perhaps other foreign powers who may emulate them) are likely to use ‘active measures’ to affect the midterm elections, as they did the presidential election in 2016. What are you now actually doing to prevent this from happening again?
  • Will you invite users to report potential ‘fake news’ or other activities that may appear to be organized by foreign intelligence services? How are you going to detect and stop this sort of interference in our democratic process from ever happening again?
  • You sell personal data for advertising with an incentive to please the businesses that advertise on your platform. Why should we not replace Facebook with either a fee-based Alt-Facebook (where users pay a small annual fee) or a nonprofit PBS-Facebook (supported by the government and charitable donors)? Why should relevant algorithms not be primarily designed for the users rather than for the advertisers and data-harvesters?

Kent Smetters, professor of business economics and public policy:

  • Would you be willing to unilaterally implement the “right to be forgotten” policy that exists in the European Union and Argentina for users who close their accounts?
  • Would you support making privacy settings more obvious on the Facebook website, especially ones that allow your friends to share your information with third parties? And would you also disclose to users the potential impact of this sharing?

Kevin Werbach, professor of legal studies and business ethics:

  • Dominant platforms for communication have long been regulated in America and throughout the world to promote societal values and foster a more robust competitive market. Why should Facebook, with its current scale and influence, be treated any differently?
  • Innovative companies such as Facebook were able to develop because the networks they operate on are open. Today, Facebook is the dominant network infrastructure for online identity. Shouldn’t it provide open application programming interfaces (APIs) to allow the next generation of innovators to develop?
  • With far more Americans using Facebook now as a primary means of communication than landline telephones, why should privacy regulations be more lax on online platforms?
  • Are Americans entitled to control what’s done with their information online? What could Facebook do to provide a truly meaningful opportunity to exercise that control, especially when it involves third parties?

  • When you suggest that artificial intelligence will solve content moderation challenges in a few years, what about all the examples of machine learning systems that are biased, manipulative, or exclusionary toward certain groups, even unintentionally?
  • Time and time again you’ve apologized for privacy violations and pledged to do better. Isn’t it time to recognize that when your business model is built on relentlessly promoting engagement and generating advertising revenues, the incentives to sacrifice privacy and other social values will inevitably be too strong?

Pinar Yildirim, professor of marketing:

  • The data that Facebook has collected over the years have tremendous potential to help scientists understand human behavior and dynamics. Given that the originator of the leak was also a scientist, and that privacy regulations may tighten third-party access to data, how will you ensure that Facebook data can be utilized to advance science?
  • So much of the future of Facebook relies on artificial intelligence and machine algorithms that are replacing active human involvement. While they provide immense benefits, it is also feared that algorithms make biases in human judgement systematic and more widespread. What is Facebook doing to remove bias from its algorithms?