Navigating the Ethical Frontier: AI in Financial Planning

The opportunities brought about by AI must be balanced with a purposeful and ongoing approach to ethics

Journal of Financial Planning: April 2024


Dani Fava is group head of product innovation at Envestnet, which supports the growth of wealth managers and integrated advice through connected technology, advanced insights, a comprehensive range of solutions, and industry-leading service and support.


Artificial intelligence (AI) has already transformed functions and industries ranging from administration, inventory, and research to healthcare, transportation, and consumer electronics. Given its widespread potential, it's not surprising that we're starting to see glimmers of AI in the financial planning and financial advice sector. After all, what is financial planning and advice if not predictive analytics built on troves of underlying data? Some experts think AI holds the potential to revolutionize financial advice, making it more accessible, affordable, accurate, and efficient. But just as other implementations of the technology have exhibited flaws that serve as stark reminders of bias, we will grapple with the same concerns and potential biases in financial planning.

In a revealing moment that underscored the deep-seated biases within artificial intelligence systems, a major tech company's facial recognition software failed in a profound way. The technology, designed to categorize images, mistakenly labeled Black individuals as gorillas. The incident not only highlighted a glaring oversight in the AI's training data, which was predominantly composed of White faces, but also raised critical questions about the ethical implications of AI development.

The fallout was immediate and widespread, sparking a conversation about the need for diversity and inclusivity in AI training sets. Tech companies were called to account, prompting them to re-evaluate their data collection and algorithm development processes. This event served as a sobering reminder that technology, in its quest to mirror human intelligence, also risks replicating human biases. It underscored the importance of embedding ethical considerations into AI development, ensuring that technology advances with fairness and respect for all.

One could imagine a scenario in financial planning akin to this one. Just as the facial recognition software was flawed and inherently biased because its training data was heavily populated with White faces, the data used to train financial planning and advice models could be dominated by wealthy, White families. After all, according to the Federal Reserve's 2019 Survey of Consumer Finances, the median net worth of White families was $188,200, significantly higher than that of Black families at $24,100 and Hispanic families at $36,100. This stark disparity illustrates the profound racial wealth gap, with White families holding nearly eight times the wealth of Black families and over five times that of Hispanic families.

If you can avoid thinking about the potential biases, it's easy to get excited about the technological possibilities that AI can bring to the forefront of our industry. AI can streamline financial planning processes, offering personalized, efficient, and scalable solutions. We've already seen the power of scalable advice delivery by automated investment platforms (robo-advisers); in 2023, robo-advisers managed almost $3 trillion worldwide (Timoshenko 2023).

Through sheer automation, these technologies have brought investors lower costs, increased accessibility, and the ability to handle complex data analysis. Now layer AI on top of that, and we can see a future where the everyday American has unprecedented access to personalized financial advice and strategies previously available only to the wealthy. This democratization of financial services could lead to more informed decision-making, helping individuals maximize their savings, invest wisely, and plan for retirement more effectively. Moreover, AI's potential to streamline and automate tax filing processes could alleviate the stress and complexity of tax season, ensuring that every American can optimize their financial obligations with ease and confidence. Imagine your own college tuition planning concierge at your fingertips, potentially for free.

But when thoughts of bias creep in, bias we have already seen profound evidence of, the glowing future of democratized access to financial wellness begins to devolve. There is an ethical risk of AI systems making generalized recommendations that do not account for the unique needs of individuals. For example, a generalized recommendation for saving toward a child's education would be a 529 plan. But suppose that child is diagnosed with a severe form of non-verbal autism or another educationally debilitating condition, where traditional higher-education paths may not be the most appropriate or beneficial option. In such cases, a more suitable recommendation might be an ABLE account, which offers tax-advantaged savings for individuals with disabilities without affecting their eligibility for public benefits. This example underscores the importance of incorporating a nuanced understanding of individual circumstances into AI-driven financial advice. Without the capacity to recognize and adapt to such specific needs, AI could inadvertently guide families toward financial decisions that are not in their best interest. That possibility highlights the critical need for ethical considerations and human oversight in the development and application of AI in financial planning.
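To make the 529-versus-ABLE nuance concrete, here is a minimal, purely illustrative Python sketch. The profile fields and function names are hypothetical, not drawn from any real advice platform:

```python
# Purely illustrative sketch: how a generalized advice engine can miss the
# 529-versus-ABLE nuance. ClientProfile and both functions are hypothetical
# names, not drawn from any real planning platform.
from dataclasses import dataclass

@dataclass
class ClientProfile:
    has_child: bool
    child_has_qualifying_disability: bool  # e.g., eligible under ABLE Act rules

def generalized_recommendation(profile: ClientProfile) -> str:
    # A model trained on majority data stops at the most common answer.
    return "529 plan" if profile.has_child else "taxable brokerage account"

def nuanced_recommendation(profile: ClientProfile) -> str:
    # Logic that accounts for individual circumstances before defaulting.
    if profile.has_child and profile.child_has_qualifying_disability:
        # An ABLE account offers tax-advantaged savings without affecting
        # eligibility for public benefits.
        return "ABLE account (review with a planner)"
    return generalized_recommendation(profile)

client = ClientProfile(has_child=True, child_has_qualifying_disability=True)
print(generalized_recommendation(client))  # 529 plan (generalized, possibly unsuitable)
print(nuanced_recommendation(client))      # ABLE account (review with a planner)
```

The point of the sketch is not the rule itself but where it lives: the nuance has to be encoded, or caught by a human reviewer, before the generalized default reaches the client.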

To find real scenarios of biased AI creeping closer to financial services, we need look no further than the credit scoring market. Research has demonstrated that AI credit scoring models can inadvertently perpetuate and even exacerbate biases against underrepresented groups. A study highlighted by Stanford's Graduate School of Business revealed that credit scoring models are 5 to 10 percent less accurate for lower-income and minority borrowers than for their higher-income and non-minority counterparts, regardless of their payment track record. This discrepancy is largely due to the "flawed" nature of the underlying data, which often does not accurately predict creditworthiness for these groups. The study points out that limited credit histories or "thin" credit files can disproportionately lower scores for individuals within these demographics, as traditional scoring models fail to account for the nuances in their financial situations.
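An accuracy gap like that is straightforward to measure once you compare model performance group by group. The sketch below uses simulated data, not the study's, and every name and rate in it is hypothetical; it simply shows the mechanics of a per-group accuracy audit for a scoring model:

```python
# Hypothetical sketch of a subgroup accuracy audit for a credit scoring model.
# The data are simulated, not the study's; the point is the per-group comparison.
import numpy as np

rng = np.random.default_rng(0)
n = 10_000
group = rng.choice(["thick_file", "thin_file"], size=n, p=[0.7, 0.3])
repaid = rng.random(n) < 0.85  # true repayment outcomes (simulated)

# Simulate a model that is noisier for thin-file borrowers: its prediction
# agrees with the true outcome less often when credit history is sparse.
agree_rate = np.where(group == "thick_file", 0.90, 0.82)
predicted = np.where(rng.random(n) < agree_rate, repaid, ~repaid)

for g in ("thick_file", "thin_file"):
    mask = group == g
    accuracy = (predicted[mask] == repaid[mask]).mean()
    print(f"{g}: accuracy = {accuracy:.3f} (n = {mask.sum()})")
```

In this simulation the thin-file group lands roughly eight points lower, the kind of gap the research describes; with real lending data, the same side-by-side comparison applies.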

Taking it a step further, if AI were to recommend a path that wasn't in the family's best interest, determining liability becomes complex and multifaceted. Liability could potentially fall on several parties, or on no one at all, depending on the context and legal frameworks in place.

Financial advisers or institutions that rely on such AI technology for providing advice might also face liability. If they failed to perform due diligence in ensuring the AI's recommendations were suitable for their clients' specific circumstances, they could be seen as neglecting their fiduciary duty. Could we see a future where relying on these tools becomes so second nature that we skip the due diligence process, much as most of us skip the terms and conditions when signing electronic agreements? We've already put our trust in these tools in an unprecedented way: 40 percent of full-time employees already use AI at work, and 75 percent find it effective, only a short time after the technology became broadly available (Iskiev and Forsey 2023).

Because of this evidence, and because we know the underlying financial planning data that would be used to train a model is heavily tilted toward a homogeneous group of people, careful steps need to be taken to ensure the fair and ethical proliferation of AI within financial services. The obvious idea is to improve diversity in training data, although that seems like an uphill battle that won't be won before AI takes hold. At the very least, our industry needs to implement transparent AI models whose decisions can be explained, and it needs to ensure ongoing oversight. That oversight can be technical in nature: running outcomes from one AI system through a second system specifically trained to spot inequities.
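As a hedged illustration of that technical oversight layer, the sketch below implements the simplest possible version: a transparent statistical check that flags any recommendation delivered to one group at a markedly higher rate than another. A production auditor could itself be a trained model; the function name, threshold, and sample data here are all hypothetical:

```python
# Hedged sketch of the "second system" oversight idea: a transparent
# statistical check over an advice engine's output. The function name,
# threshold, and sample data are all hypothetical.
from collections import Counter

def disparity_report(recommendations, groups, flag_ratio=1.25):
    """Flag any recommendation delivered to one group at >= flag_ratio
    times the rate it is delivered to another group."""
    by_group = {}
    for rec, g in zip(recommendations, groups):
        by_group.setdefault(g, Counter())[rec] += 1
    rates = {g: {rec: cnt / sum(c.values()) for rec, cnt in c.items()}
             for g, c in by_group.items()}
    flags = []
    for rec in {r for c in by_group.values() for r in c}:
        per_group = [(g, rates[g].get(rec, 0.0)) for g in rates]
        hi = max(per_group, key=lambda x: x[1])
        lo = min(per_group, key=lambda x: x[1])
        if lo[1] > 0 and hi[1] / lo[1] >= flag_ratio:
            flags.append((rec, hi, lo))
    return flags

recs = ["aggressive", "aggressive", "conservative", "aggressive",
        "conservative", "conservative", "conservative", "conservative"]
demo = ["A", "A", "A", "B", "B", "B", "B", "B"]
for rec, hi, lo in disparity_report(recs, demo):
    print(f"FLAG '{rec}': group {hi[0]} at {hi[1]:.0%} vs group {lo[0]} at {lo[1]:.0%}")
```

A check this simple cannot establish discrimination on its own, but run over every batch of model output it gives human reviewers a tripwire that demands an explanation, which is exactly the role the second system should play.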

We also need industry regulators to fully understand AI and to leap forward to certifying AI in a rigorous, consistent manner. AI used in financial services should be put to the test daily, ensuring it gives sound outcomes that are considerate of every possible nuance. Human oversight isn't the only answer; humans are, after all, responsible for creating the biased data sets in the first place. We need ethical guidelines and regulatory frameworks tailored to AI in financial planning, ensuring that AI serves to enhance, not undermine, personalized financial advice. Outlining and applying such guidelines could help strike a harmonious balance between leveraging AI's potential and safeguarding against its ethical pitfalls. A collaborative approach among technologists, ethicists, and financial professionals is required to navigate these challenges, ensuring AI tools are used responsibly and inclusively to truly benefit clients.

References

Iskiev, Maxwell, and Caroline Forsey. 2023, August 8. “The State of Consumer Trends in 2023.” HubSpot. https://blog.hubspot.com/marketing/state-of-consumer-trends-report.

Timoshenko, Anastasia. 2023, August 8. “Exploring Robo-Advisors: A Popular Trend In FinTech 2023.” Elinext. www.elinext.com/industries/financial/trends/financial-robo-advisors/.

