The following op-ed was written by Sarah Hammer, an executive director leading financial technology initiatives at the Wharton School and adjunct professor at the University of Pennsylvania Carey Law School. It was originally published on MarketWatch.

The financial services industry has the opportunity to use a new technology that could revolutionize financial advice: generative AI.

Traditionally, financial advice has been crafted by human advisers through data gathering, goal setting, and personalized analysis, often resulting in long-term client relationships. Now, generative AI tools promise to handle everything from financial planning to retirement savings, becoming smarter over time by integrating personal data, user preferences, and extensive economic insights.

These tools, used either by investors directly or in partnership with advisers, can enhance predictions, optimize efficiency, and improve client communication. Yet as the industry considers adopting AI-generated financial advice, it must carefully weigh the implications.

Generative AI is expensive. And while Alphabet CEO Sundar Pichai argues that the risk of underinvesting in AI far outweighs the risk of overinvesting, not all investors are convinced. The recent fall in technology stocks demonstrates the pressure on AI-related companies to prove greater revenues, reduced costs, and increased productivity from AI spending. Businesses must evaluate and project return on investment for AI initiatives, ensuring that AI investments translate into value for both the business and its clients.

As for AI safety, hallucinations remain a fundamental problem for generative AI. AI hallucinations occur when generative AI tools confidently deliver false or misleading information. In other words, not all generative AI financial advice gets smarter over time. Some tools will produce minor inaccuracies, such as misstating an inconsequential historical fact. Others will result in seriously misleading information, such as recommending an undue amount of risk in a retiree’s investment portfolio. Consider this cautionary tale from the New York court case, Mata v. Avianca: an attorney used ChatGPT to conduct legal research. The judge found that the attorney’s work product contained bogus internal citations and quotes, leading to severe legal and disciplinary consequences.

Another weighty issue in AI-generated advice is bias. A 2023 study of more than 5,000 images generated using Stable Diffusion concluded that the model amplified gender and racial stereotypes. Because generative AI models must inherently train on a set of data, there is a risk that bias in the training data will lead to bias in the model’s outputs. More specifically, the risk is that the advice offered by the model will discriminate in its treatment of individuals or groups.

In the context of financial advice, an example would be a model that fails to recommend attractive and appropriate investment products to a historically disadvantaged population, perpetuating that population’s disadvantage. This problem is akin to a familiar challenge in retail lending: bias in historical training data can lead credit models to repeatedly deny credit to people who have the ability to repay and who need it most.

“The potential benefits of AI in financial advice are too great to ignore.”

Despite these challenges, the potential benefits of AI in financial advice are too great to ignore. There are steps that both individuals and advisers can take to navigate this new landscape responsibly.

First, address the issue of cost versus value. Individual investors should consider whether AI-based advice provides value beyond existing online tools or their human financial adviser. Advisers should project the cost reduction and added benefit of using a generative AI tool, then prioritize the AI efforts that promise the most significant and tangible benefits.
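As a back-of-the-envelope illustration of that projection, a practice might weigh estimated annual savings and added revenue against the tool’s licensing and integration costs over a multiyear horizon. The following minimal sketch uses purely hypothetical figures; none of the numbers or names come from any real product:

```python
# Hypothetical back-of-the-envelope ROI projection for a generative AI tool.
# All figures are illustrative assumptions, not benchmarks.

def project_roi(annual_cost_savings: float,
                annual_added_revenue: float,
                annual_tool_cost: float,
                one_time_integration_cost: float,
                years: int = 3) -> float:
    """Return projected ROI over the given horizon as a fraction."""
    total_benefit = (annual_cost_savings + annual_added_revenue) * years
    total_cost = annual_tool_cost * years + one_time_integration_cost
    return (total_benefit - total_cost) / total_cost

# Example: an advisory practice weighing a generative AI assistant.
roi = project_roi(annual_cost_savings=40_000,   # e.g., hours saved on drafting
                  annual_added_revenue=25_000,  # e.g., capacity for new clients
                  annual_tool_cost=30_000,
                  one_time_integration_cost=20_000,
                  years=3)
print(f"Projected 3-year ROI: {roi:.0%}")  # -> Projected 3-year ROI: 77%
```

Even a rough projection like this forces the key question: does the tool’s benefit clear its full cost, including integration, over a realistic horizon?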

Second, ensure there’s a human involved. Individuals should never mindlessly accept recommendations from an AI-based financial tool. For instance, ChatGPT can elaborate on financial concepts, but it is not currently suitable for offering financial advice. Hallucinations are real, and investors should be vigilant about them. Advisers using AI must ask questions about how the advice model works, watch for problems, and continue educating themselves about how generative AI is developing.
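One lightweight way an adviser’s firm might operationalize human involvement is a review gate: AI-generated recommendations that breach a client’s risk tolerance, or that the model reports low confidence in, get routed to a human before anything reaches the client. The field names and thresholds below are illustrative assumptions, not an industry standard:

```python
# Minimal sketch of a human-in-the-loop review gate for AI-generated advice.
# Field names and thresholds are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Recommendation:
    client_id: str
    equity_allocation: float   # fraction of portfolio in equities, 0.0-1.0
    model_confidence: float    # model's self-reported confidence, 0.0-1.0

def needs_human_review(rec: Recommendation,
                       client_risk_limit: float,
                       min_confidence: float = 0.8) -> bool:
    """Route to an adviser if the advice breaches the client's risk
    tolerance or the model is not confident in its own output."""
    return (rec.equity_allocation > client_risk_limit
            or rec.model_confidence < min_confidence)

# Example: a retiree whose plan caps equities at 40% of the portfolio.
rec = Recommendation(client_id="C-1042", equity_allocation=0.65,
                     model_confidence=0.91)
if needs_human_review(rec, client_risk_limit=0.40):
    print("Escalating to a human adviser before delivery.")
```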

Third, for both individual investors and advisers, check whether your advice application has guardrails in place to address bias. While no easy solution will eliminate bias in AI-based advice, strong AI governance, data governance, and proper regulatory compliance can help.

AI governance ensures that the application or organization has a set of policies to guide the development of its AI, such as methods to assess whether bias exists, ethics and privacy practices, and procedures to maintain the integrity of the information presented by the model. Data governance can improve the data’s accuracy and consistency, control who has access to data, and work to optimize the data and system architecture to facilitate accessibility.
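As one concrete illustration of a method to assess whether bias exists, a governance team might measure demographic parity: whether the rate at which the model recommends a given product differs materially across groups. The records, group labels, and the 10-percentage-point threshold in this sketch are hypothetical:

```python
# Sketch of a simple demographic-parity check on AI-generated recommendations.
# Group labels, records, and the 10-percentage-point threshold are
# hypothetical assumptions for illustration.

from collections import defaultdict

def recommendation_rates(records):
    """Rate at which each group received the product recommendation."""
    totals, hits = defaultdict(int), defaultdict(int)
    for group, recommended in records:
        totals[group] += 1
        hits[group] += recommended
    return {g: hits[g] / totals[g] for g in totals}

# Each record: (demographic group, was product X recommended?)
records = [("A", 1), ("A", 1), ("A", 0), ("A", 1),
           ("B", 0), ("B", 1), ("B", 0), ("B", 0)]

rates = recommendation_rates(records)
gap = max(rates.values()) - min(rates.values())
print(rates, f"parity gap: {gap:.0%}")
if gap > 0.10:  # flag gaps above 10 percentage points for review
    print("Potential disparate treatment -- escalate for bias review.")
```

A large gap does not by itself prove discrimination, but it is exactly the kind of signal that an AI governance policy should require teams to investigate.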

Finally, if advisers use an AI-based model to enhance their practice, they ought to confirm that the application is appropriately licensed, registered, and compliant. Consider the requirements for robo-advisers. Although the services they provide are automated, robo-advisers must comply with the securities laws applicable to SEC- or state-registered investment advisers. If a robo-adviser is registered as an investment adviser with the SEC, it is subject to both the substantive requirements and the fiduciary obligations of the Investment Advisers Act of 1940. The SEC has proposed similar rules for AI-based advice models and their users, including rules requiring broker-dealers and investment advisers to address conflicts of interest associated with using predictive data analytics.

It’s clear that AI-powered financial advice is not just a technological novelty but a powerful force in the investment industry. Implementing robust AI and data governance, ensuring human oversight, and adhering to regulatory compliance can help mitigate the risks of AI-driven advice while harnessing its benefits. By blending algorithmic prowess with human insight and oversight, we can unlock the potential of AI-driven financial advice and forge a path toward successful and responsible adoption.