Soft Skills for the Augmented Adviser

Planners who are testing the waters of AI shouldn’t lose sight of what makes them human

Journal of Financial Planning: February 2024

 

Danielle Andrus is the editor of the Journal of Financial Planning.

 

One of the leading arguments for businesses to adopt generative AI is to capture efficiencies in their processes. A 2023 survey by Fortune and Deloitte found that 80 percent of CEOs agree generative AI will drive those efficiencies, yet only 37 percent said they were implementing the technology to any degree (Bechtel and Briggs 2024b). There’s a gap between what we think it can do and what we’re ready for it to do. Amid the enthusiasm and trepidation surrounding this technology, financial planners need to be open-minded about the future, but they should also rededicate themselves to the softer skills that have been deprioritized in favor of quantitative and technical ones.

Erica Orange, executive vice president and chief operating officer of The Future Hunters, warns that generative AI will force leaders to double down on human-centric traits like imagination, intuition, and creativity.

“They are smart,” she said of AI platforms, “but we have to be intelligent.”

As AI is integrated into more processes in both personal and professional spheres, planners will find that some rote tasks can be offloaded, making room for deeper, more thoughtful analysis.

“It’s not the end of work. It’s just the end of boring work,” Orange said in an interview. “It’s eliminating many of the aspects of work that perhaps, in a more existential way, humans weren’t meant to be doing in the first place.”

‘Major in Being Human’

“David Brooks from the New York Times I think said it best, and it has spurred a lot of my thinking,” Orange continued. “He said, ‘In an age of AI, major in being human.’” Planners have a leg up on AI because it “has no idea of history, context; has no sense of human-to-human relationship building. All of those skills are going to become much more important.”

Increasing focus on STEM in education has put too much emphasis on hard skills, Orange believes. Students were encouraged to pursue computer science or finance because that’s where the jobs were. “In a prior economy, we didn’t want [a population] of philosophers and ethicists and anthropologists—but now those types of thinkers are going to be much more in demand,” she said.

STEM jobs are still valuable opportunities, but AI is showing us that some balance is needed between left-brain and right-brain styles of thinking.

“We don’t have enough people who know how to code, but at the same time, we have even fewer people who understand the ethical second- and third-order consequences and what this is going to mean in a world of AI,” she said.

Now, she continued, “We need a full economic recalibration around rewarding people for a lot of the skills that were deprioritized.”

Job Requirement: Imagination

Deloitte made the case for imagination in its Tech Trends 2024 report (Bechtel and Briggs 2024a).

“With generative AI as a force multiplier for imagination, the future belongs to those who ask better questions and have more exciting ideas to amplify,” according to the organization’s chief futurist, Mike Bechtel.

In the report, he describes a demonstration of a generative AI platform that produced unique images from text prompts. Each executive participating in the demonstration got what they asked for, but what they asked for varied strikingly. One requested an image of a sunset and got a perfectly adequate picture. Another prompted the platform for an absurd illustration of warring Martian snack foods.

“I couldn’t help but quietly acknowledge the clever human with the magical mix of mind and moxie to even ask for such a thing,” Bechtel wrote.

Orange believes this imaginative sort of thinking is needed at all levels of a firm. She noted that some organizations that are experimenting with generative AI are designing roles for prompt engineers, but she suggests that’s a skill that everyone needs.

“I think we all have to become prompt engineers. Within the profession, how do we train people and bring in the new minds and thinkers who can ask the questions that get us to better outputs?”

Opportunities for New Thinking

Open-mindedness is another skill that planners need to nurture in themselves and their teams. The changes wrought by artificial intelligence in other fields will ripple out to financial planning, and healthcare offers a clear example of how changes in one field affect others.

Gurpreet Dhaliwal, M.D., professor of medicine at the University of California San Francisco, sees a parallel between tech trends in healthcare and those in financial planning. Medicine and financial planning are both professions that apply significant domain expertise in heavily regulated, data-rich environments, he said. While managing those challenges, doctors and financial planners also have to serve the humans who trust them.

“When the patients’ lives are changed by technology, then everything from their goals to their time horizons, including their perceptions of information and risk, are all going to change,” he said.

Patients have long been told to be their own advocates when interacting with healthcare providers, and Dhaliwal noted that lately they’ve been empowered to take even more active roles in diagnoses and treatments. For example, wearable devices and genetic testing services can give patients more insight into trends in their health. Financial planners could conceivably find themselves in a position where their clients trust technology as much as or more than their financial professional, Dhaliwal noted.

He shared an example of a patient who wanted antibiotics for a sore throat. Dhaliwal tried to explain why they weren’t an appropriate treatment. The patient put his symptoms into ChatGPT, which suggested potential causes and treatments, none of which included antibiotics.

“The remarkable thing to me was that sometimes we worry about apps giving clients or patients misinformation, but I think this was a great example of how they can partner with us,” Dhaliwal said.

Critical Thinking: Never More Critical

Orange cited a 2016 study by the Stanford History Education Group that succinctly described the digital-native generation’s ability to evaluate the veracity or reliability of online information as “bleak.”

Researchers presented middle school, high school, and college students with digital content such as web pages and social media posts and asked them to discern articles from advertisements and sponsored content. Three-quarters of students could correctly identify traditional articles and advertising, but “sponsored content” or “native advertising” confused 80 percent of students, even when it was labeled as sponsored.

“Never have we had so much information at our fingertips. Whether this bounty will make us smarter and better informed or more ignorant and narrow-minded will depend on our awareness of this problem and our educational response to it,” the researchers wrote.

Orange believes that discerning real, reliable information from information shaped by a conflict of interest, such as ads or propaganda, is one of the most important skill sets for financial planners of the future. She described the online environment as a carnival funhouse with multiple mirrors distorting reality; planners must learn to parse those distorted realities and teach the next generation to do the same.

“We don’t think enough about how those are translatable skills for the workplace or for our professions. We just think of it as coloring our personal lives,” she said, “but the ripple effects . . . of being able to discern between a lot of that and to mitigate the risks for clients is going to be really important.”

This is a challenge with immediate implications for firm leaders who need to develop a competent team and identify potential successors for leadership roles.

“If you bring a young person into the firm who has no idea if they’re looking at a deepfake or not, that also can expose you to a tremendous amount of risk,” Orange said.

Clients Crave ‘Analog Experience’

Advances in health technology help doctors get much more accurate information, Dhaliwal said, but they raise an entirely new series of questions when doctors take that information to their patients.

“As AI tools give better projections, better prognostication, maybe foreseeing scenarios that the human can’t see—that’s all useful information, [but] it will induce as many questions as it does answers,” he said. “The patient or customer still needs an analog experience to work through uncertainty. Computers have not yet done a great job of helping humans manage that.”

However, he said, planners shouldn’t dismiss AI as devoid of emotive qualities. A study published in JAMA Internal Medicine found that chatbot-generated responses to patient questions were preferred over physicians’ responses and rated as more empathetic (Ayers et al. 2023).

“I think we’re going to find we may have things to learn from chatbots,” Dhaliwal said.

Ethics and AI

Some firm leaders may be tempted to think they need to invest in AI-powered solutions just to keep up with their competitors. “You do, but you have to think of what problem it’s really solving and what that human-centered impact is going to look like,” Orange said.

She argues that companies need a CEEO: a chief ethics executive officer to help visualize the impact of advanced technology investments on stakeholders. “What is the impact that it’s going to have on talent and talent management? What is it going to do to upskilling and reskilling? How is it going to change the HR function? What is it going to do to risk and reputation?”

An as-yet-unsolved problem with AI models is bias, which can be introduced through the training data, the algorithm itself, or the model’s predictions, according to IBM (2018). The organization developed an open-source project, AI Fairness 360, to help developers identify and mitigate unintended biases. Microsoft’s Fairlearn open-source toolkit similarly aims to manage “trade-offs between fairness and model performance” (Bird et al. 2020).
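For a firm’s technology team curious what such an audit looks like in practice, here is a minimal sketch using Fairlearn’s MetricFrame. The loan-approval data and column names are hypothetical; in practice, the labels and predictions would come from a firm’s own model.

    # Minimal fairness audit with Fairlearn (hypothetical data)
    import pandas as pd
    from sklearn.metrics import accuracy_score
    from fairlearn.metrics import MetricFrame, selection_rate

    # Hypothetical outcomes: 1 = loan approved, 0 = denied
    data = pd.DataFrame({
        "group":  ["A", "A", "A", "A", "B", "B", "B", "B"],
        "y_true": [1, 1, 0, 1, 1, 0, 1, 1],
        "y_pred": [1, 1, 0, 1, 0, 0, 1, 0],
    })

    # Break accuracy and approval rate out by demographic group
    audit = MetricFrame(
        metrics={"accuracy": accuracy_score, "approval_rate": selection_rate},
        y_true=data["y_true"],
        y_pred=data["y_pred"],
        sensitive_features=data["group"],
    )

    print(audit.by_group)      # per-group metrics
    print(audit.difference())  # largest gap between groups, per metric

A large gap in approval rates between groups is exactly the kind of second- and third-order consequence Orange argues someone in the firm must be asking about.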

Orange is also concerned about “data inbreeding,” when synthetic data (data generated by machines rather than collected from real-world events) is used to train AI models. This can break models, which degrade over successive iterations (Shumailov et al. 2023).
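The mechanism is easy to see in a toy simulation (a sketch, not the paper’s experiment): a model refit only on data sampled from its own previous generation gradually loses the spread of the original distribution, so rare “tail” events vanish first.

    # Toy illustration of model degradation on self-generated data
    import numpy as np

    rng = np.random.default_rng(0)
    mu, sigma = 0.0, 1.0   # the "real world" distribution
    n = 100                # samples per generation

    for generation in range(1, 51):
        synthetic = rng.normal(mu, sigma, n)           # sample from current model
        mu, sigma = synthetic.mean(), synthetic.std()  # refit on synthetic data only
        if generation % 10 == 0:
            print(f"generation {generation}: sigma = {sigma:.3f}")

    # sigma tends to drift downward: each refit slightly underestimates
    # the spread, and the errors compound across generations.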

Dhaliwal wonders about liability associated with using or ignoring advice provided by AI if it leads to unintended or undesirable consequences.

“If the AI gives advice to a financial planner and then the professional doesn’t follow it, where is the responsibility if it’s a bad outcome? Is the financial planner on the hook for that advice if they follow the AI; are they on the hook if they didn’t follow the AI and the AI statistically was a better path?”

Conclusion

While ChatGPT spurred a lot of discussion about generative AI last year, chances are you’ve been using AI tools for some time. Voice assistants like Siri and Alexa, productivity tools like Otter.ai, and AI functions embedded in software like Salesforce and MailChimp are familiar tools that many of us use every day.

As AI technology evolves beyond these familiar tools, planners need to start thinking about how their work will look different in the future. As with the displacement fears of the early robo-adviser days, disruptive technology is not necessarily destructive, but it does require planners to embrace new ways of thinking about how they serve their clients and what it is they provide. A planner is more than a platform where clients can upload some numbers and get an answer to a question. A planner is someone who helps clients envision a future and draw a map to get there.

References

Ayers, John W., Adam Poliak, Mark Dredze, Eric C. Leas, Zechariah Zhu, Jessica B. Kelley, Dennis J. Faix, Aaron M. Goodman, Christopher A. Longhurst, Michael Hogarth, and Davey M. Smith. 2023, April 28. “Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum.” JAMA Internal Medicine 183 (6): 589–596. doi:10.1001/jamainternmed.2023.1838.

Bechtel, Mike, and Bill Briggs. 2024a. “Generative AI: Force Multiplier for Human Ambitions.” Deloitte Insights. https://www2.deloitte.com/us/en/insights/focus/tech-trends.html#read-the-introduction.

Bechtel, Mike, and Bill Briggs. 2024b. “Genie Out of the Bottle: Generative AI as Growth Catalyst.” Deloitte Insights. www2.deloitte.com/us/en/insights/focus/tech-trends.html#genie-out-of-bottle.

Bird, Sarah, Miroslav Dudík, Richard Edgar, Brandon Horn, Roman Lutz, Vanessa Milan, Mehrnoosh Sameki, Hanna Wallach, and Kathleen Walker. 2020, September 22. “Fairlearn: A Toolkit for Assessing and Improving Fairness in AI.” Microsoft. www.microsoft.com/en-us/research/uploads/prod/2020/05/Fairlearn_WhitePaper-2020-09-22.pdf.

IBM. 2018, November 14. “AI Fairness 360.” www.ibm.com/opensource/open/projects/ai-fairness-360/.

Shumailov, Ilia, Zakhar Shumaylov, Yiren Zhao, Yarin Gal, Nicolas Papernot, and Ross Anderson. 2023, May 31. “The Curse of Recursion: Training on Generated Data Makes Models Forget.” arXiv. https://doi.org/10.48550/arXiv.2305.17493.

Stanford History Education Group. 2016, November 22. “Evaluating Information: The Cornerstone of Civic Online Reasoning.” https://stacks.stanford.edu/file/druid:fv751yt5934/SHEG%20Evaluating%20Information%20Online.pdf.

 

Read More: Learn how AI adoption in health and medicine will affect the planning you do for your clients in Chris Heye’s column, “What AI Means for Clients’ Health,” in this issue of the Journal of Financial Planning.

 

 
