Beyond Privacy: Three Ethics Challenges Advisers Will Confront in the Next Era of AI

Emerging AI technologies are reshaping adviser responsibilities, exposing fresh challenges around transparency, conflicts of interest, and sound professional judgment

Journal of Financial Planning: April 2026

 


 

Azish Filabi, J.D., M.A., is the managing director of The American College Maguire Center for Ethics in Financial Services (www.theamericancollege.edu), where she also serves as the Charles Lamont Post Chair of Business Ethics. The Center convenes a Responsible AI Executive Forum, providing a dedicated space for leaders to develop a shared understanding and AI fluency—strengthening institutional judgment, governance confidence, and ethical leadership.

 

Artificial intelligence (AI) in financial services is entering a new era of ethics by design. Workflows, products, and service offerings increasingly bring automation and ease, while regulatory guidance remains technology-agnostic, affirming the centrality of client best interest and fiduciary practices irrespective of which tools are integrated into the business.

Financial professionals must navigate consequential system-design factors while simultaneously confronting emerging gray areas in practice. The most imminent issues concern systems where (1) better AI performance demands more personal data than clients realize they are sharing; (2) the capabilities that make AI systems powerful also obscure conflicts of interest; and (3) sophisticated algorithms quietly erode professional judgment while generating an illusion of certainty.

When it comes to trust in financial services, clients indicate that keeping their data safe and secure is their number-one expectation, according to research published by The American College Maguire Center for Ethics in Financial Services.1 Yet meeting minimum client expectations is the starting point for managing AI, not the finish line; considerations of AI ethics can’t begin and end with data security and client confidentiality.

The next phase of integration is a pressure test for whether AI systems will elevate the financial profession toward client best interest or undermine hard-earned trust. The ethical questions won’t be resolved by compliance checklists or data security protocols alone. Thinking beyond those checklists about ethical duties has never been more important; your clients depend on it.

The Privacy Paradox Requires Advisers to Promote AI Literacy

People indicate that they value privacy yet regularly provide their personal data to companies in order to get access to goods and services. This phenomenon is called the privacy paradox: the willingness to exchange personal information for access to digital services even when people say they highly value their privacy.2 As digital-first services become embedded in daily life, navigating privacy becomes increasingly challenging for both consumers and companies.

Various psychological and practical factors fuel these dynamics. People may be more comfortable sharing information with computer systems than with other humans, for instance. Clients may be willing to put information into a free budgeting app that subsequently sells their data, yet still hesitate to discuss their debt with a human adviser. People may also underestimate the effects of privacy violations or assume they have stronger legal protections than in fact exist.

Financial professionals will confront tradeoffs for their clients because the reality of AI systems is that more data generates more personalized outcomes. According to the AI and Trust Index Study, published by The American College Maguire Center for Ethics in Financial Services, consumers have a nuanced view of AI-enabled tools. The study finds that “currently, more consumers trust rather than distrust financial tools that utilize AI, but more consumers are unlikely rather than likely to use them. This gap suggests that in addition to trust, other factors drive the consumer’s intention to use AI tools.”3

Perceived security of their information is a key factor, as is the consumer’s anxiety about interacting with a human adviser. Anxiety was particularly relevant to trust in agentic AI systems, but not in a straightforward way: the data suggest that some clients may find it easier to rely on automated systems, which helps them overcome the fear of talking about their financial situation.

Clients’ nuanced views of trust make promoting AI literacy part of the adviser’s mandate. Financial professionals need to be proficient in AI tools: not just how to use them, but also how those tools could violate client expectations. As client advocates, advisers can help educate clients on how and where their data may be used in the financial ecosystem, sometimes with surprising results.

Furthermore, advisers will need to navigate AI-enabled tools in their own workflows, particularly where client data is used to provide value-add products or services. This reveals a core tension in balancing duties of confidentiality and transparency to clients. For instance, some technologies scan social media to provide personalized products matched to real-time events in a client’s life, which may exceed clients’ data-minimization expectations.

To serve as trusted advisers, professionals must ask vendors and firms the relevant questions about AI systems, particularly in the absence of specific laws and regulations. At a minimum, understand how AI systems use and source data, which use cases are appropriate for AI integration, and what opportunities clients will have to remediate mistakes or poor results from data processing.

Advanced AI Systems Can Obscure Conflicts of Interest

The financial sector has much interest in further integrating AI into products and services. Yet according to reports from FINRA and other regulators, securities firms are presently testing generative AI tools primarily for internal use (summarizing information across firm data, analyzing employee policies) and for limited external communications with clients via chatbots.4

Despite the nascency of these tools, their potential for conflicts of interest is a topic of concern. One reason is that advanced AI systems can be “black box” systems, making it difficult for users, and even developers, to interpret how they reach their predictions or recommendations. If these systems steer consumers toward particular recommendations or products, they risk violating the duty of loyalty to serve clients’ best interests.

Advanced AI systems often rely on “deep learning” techniques, a term describing one common data processing approach. Deep learning enables computers to learn statistical trends from existing data, identifying patterns through artificial “neural networks” that apply a series of algebraic computations to their input. Large language models (LLMs), popularized by ChatGPT, are one form of deep neural network.
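To make the mechanics concrete, below is a minimal Python sketch of a tiny neural network computing an output from an input. The weights here are random placeholders invented purely for illustration, not any vendor’s actual model; real LLMs chain billions of learned weights through the same kind of layered algebra.

```python
import numpy as np

rng = np.random.default_rng(0)

# Weights are normally learned from training data; random placeholders
# here illustrate only the mechanics of the computation.
W1 = rng.normal(size=(4, 3))   # layer 1: 3 inputs -> 4 hidden units
W2 = rng.normal(size=(1, 4))   # layer 2: 4 hidden units -> 1 output

def forward(x: np.ndarray) -> float:
    hidden = np.maximum(0, W1 @ x)   # matrix multiply plus a simple nonlinearity
    return (W2 @ hidden).item()      # second matrix multiply -> one number

x = np.array([0.2, -1.0, 0.5])       # a toy input vector
print(forward(x))
# Nothing in W1 or W2 "explains" the output in human terms; scale this
# to hundreds of billions of weights and you have the black-box problem.
```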

Deep learning methods are popular because they enable systems to learn from data without relying on a human to validate their outputs. This powerful technology can create efficiencies beyond the information-processing limits we have as humans. Despite their prevalence, these methods have a critical shortcoming: advanced AI systems are inscrutable black boxes. “Contemporary models have hundreds of billions of weights—the numbers internal to a neural network that it uses to generate outputs or prediction . . . [thus] the user cannot peer into the black box to understand how the model computed its output from the user’s input (e.g., how a chatbot computed its text response from the user’s text prompt),” my coauthors and I write in the Journal of AI Policy and Complex Systems.5

This inability to interpret system behaviors is problematic in financial services, where accuracy is paramount and consumers have rights to information about product approvals or pricing decisions. Indeed, the European Union’s AI Act identifies financial services such as algorithmic credit scoring and insurance pricing as “high risk.”6 Similarly, in the United States, the Colorado AI Act identifies financial, lending, and insurance services as high-risk AI systems because they affect consequential decisions for consumers.7

These tools range from systems that run in the background of your daily workflow to help summarize notes and conversations, to more active ones that support decisions by helping anticipate portfolio performance based on market conditions.8 Even with highly automated systems, developers encourage a “human in the loop” for decision-making purposes. Yet human oversight is effective only with proper training in the risks and ethical shortcomings of the technologies.

Advisers should scrutinize the tools designed to support them. One question to ask is about the data the models were trained on. While a technology vendor may not be willing to share the sources of data, which they may deem proprietary, advisers should obtain enough information to be satisfied that the data is representative of the scenarios relating to the client in question. Indeed, identifying common data transparency protocols should be a topic for professional associations, strengthening members’ ability to access the information needed to serve clients’ best interests.

Consider, for example, an AI tool whose algorithm can help forecast the performance of a portfolio by analyzing market conditions. Advisers will want to understand the timeframes of the historical data used as an input for the software (Does it go back to the 1800s? What is the product mix?) to determine if it relates to their client’s circumstances. Furthermore, it will be critical to know whether the historical data includes fees. Given that the portfolio being offered to clients will include fees, determining how they factor into automated decisions is paramount to meeting expectations for a client’s portfolio payout.9
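The arithmetic behind that concern is simple to demonstrate. The Python sketch below uses invented figures (a hypothetical $500,000 portfolio and an assumed 7 percent gross annual return) to show how much an expense ratio compounds over 30 years; if a forecasting tool’s historical inputs ignore fees, its projections will overstate what the client actually keeps.

```python
def future_value(principal: float, gross_return: float,
                 expense_ratio: float, years: int) -> float:
    """Compound annual growth, net of an annual expense ratio."""
    net_return = gross_return - expense_ratio
    return principal * (1 + net_return) ** years

principal = 500_000      # hypothetical starting portfolio
gross = 0.07             # assumed 7% gross annual return (illustration only)
years = 30

low_fee = future_value(principal, gross, 0.0003, years)   # 0.03% expense ratio
high_fee = future_value(principal, gross, 0.0075, years)  # 0.75% expense ratio

print(f"0.03% expense ratio: ${low_fee:,.0f}")
print(f"0.75% expense ratio: ${high_fee:,.0f}")
print(f"Difference after {years} years: ${low_fee - high_fee:,.0f}")
# A forecast built on fee-free historical data silently embeds the
# low-fee trajectory, whatever product the client actually holds.
```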

A complex yet related question professionals should ask technology developers concerns the optimization functions of the algorithm. AI systems are designed to optimize specific outputs. For example, a portfolio management system that provides product recommendations may optimize for a firm’s revenue through fees on the one hand, or for the client’s risk-adjusted investment returns on the other. Understanding what weight is given to revenue optimization versus client return optimization is critical. An algorithm whose objective steers it toward a low-cost index fund with a 0.03 percent expense ratio, versus one steered toward a proprietary managed fund with a 0.75 percent expense ratio, will have clear implications for clients. Asking about the system’s optimization features can help you navigate this complex area.
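As a thought experiment, the sketch below shows how such a blended objective could flip a recommendation. The funds, returns, and weights are all invented for illustration; no actual vendor objective is implied.

```python
from dataclasses import dataclass

@dataclass
class Fund:
    name: str
    expected_net_return: float  # client's expected annual return, net of fees
    expense_ratio: float        # the firm's fee-revenue driver

def score(fund: Fund, w_client: float, w_revenue: float) -> float:
    """Toy objective: a weighted blend of client return and firm fee revenue."""
    return w_client * fund.expected_net_return + w_revenue * fund.expense_ratio

index_fund = Fund("low-cost index fund", 0.0697, 0.0003)       # hypothetical
prop_fund = Fund("proprietary managed fund", 0.0625, 0.0075)   # hypothetical

for w_client, w_revenue in [(1.0, 0.0), (0.4, 0.6)]:
    best = max([index_fund, prop_fund],
               key=lambda f: score(f, w_client, w_revenue))
    print(f"weights (client={w_client}, revenue={w_revenue}) -> {best.name}")
# Shifting weight toward revenue flips the recommendation even though the
# client's expected return is lower: exactly the conflict worth probing.
```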

There are few regulatory enforcement actions to date that can help put these emerging topics into perspective, but a 2021 SEC action demonstrates why AI literacy is important with respect to conflicts of interest. In that case, a robo-advisory firm allocated assets to proprietary ETFs owned by its parent company, moving client funds out of third-party ETFs and into proprietary products while triggering capital gains tax consequences that were not in the clients’ best interest.10 The platform moved funds into the proprietary ETFs to support the parent company. Although the firm’s investment committee exercised human-in-the-loop oversight of the decision to move client funds, the case demonstrates how a conflict of interest can occur in technology-driven processes, putting professionals on notice of the complexity of interactions with AI systems in high-stakes decisions.

Develop Safeguards to Confront Harmful AI Overreliance

The “hallucination” problem of LLMs is widely recognized among practitioners today. The CFP Board’s AI ethics guidelines require that professionals “account for AI limitations and risks, including inaccuracies or hallucinations” when providing advice. Furthermore, they emphasize that “professionals are responsible for the final work product generated” by AI systems.11

The guidelines highlight the unique nature of generative AI systems: they can create an illusion of accuracy even when the system has no correct answer to the prompt. This is because generative AI systems are probabilistic, providing different answers based on small changes in the request, backend updates to the models, or the data used to train them. These design features can lead to misinformation and mistakes for people who rely on them.
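A small sketch can demonstrate the probabilistic design the guidelines are pointing at. The candidate “answers” and preference scores below are invented; the point is that the same prompt, sampled twice, need not yield the same response.

```python
import numpy as np

tokens = ["$1.2M", "$1.3M", "$1.5M"]   # hypothetical candidate answers
logits = np.array([2.0, 1.8, 0.5])      # invented raw preference scores

def sample(temperature: float, seed: int) -> str:
    """Draw one answer from a softmax distribution over the scores."""
    rng = np.random.default_rng(seed)
    scaled = logits / temperature
    probs = np.exp(scaled - scaled.max())
    probs /= probs.sum()                # softmax -> probabilities
    return rng.choice(tokens, p=probs)

for seed in range(5):
    print(sample(temperature=1.0, seed=seed))
# Different random draws (or backend changes to the scores) yield
# different answers to the identical prompt: variation by design.
```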

The hallucination issue is particularly problematic in light of research from Microsoft showing that people tend to over-rely on AI systems, even when they are aware that the systems make mistakes.12 Researchers studied behavioral patterns when people work collaboratively with AI systems, called human–AI teaming. They focused on measures such as how often people accept an incorrect recommendation made by an AI system (among all incorrect recommendations), how frequently people change their answers to match AI-generated outputs, and the degree to which those answers shift to align with the system’s responses.

Alarmingly, overreliance is high even in instances when people are familiar with the task for which they are using AI.13 This is troubling because human-in-the-loop oversight requires that people be able to evaluate the quality of system outputs, yet the findings show that competence in a task may make people overconfident about their ability to manage incorrect system outputs. Overconfidence bias is a documented heuristic in decision-making, rendering those who are competent in a topic or skill excessively optimistic about their ability to navigate the task.14

This raises ethical concerns not only because errors can directly harm clients, but also because blind spots in decision-making are a persistent challenge in business ethics.15 When individuals feel confident in how a situation will unfold, they may discount warnings raised by others—or fail to pause long enough to consider the potential negative consequences of their decisions.

One safeguard against overconfidence bias in this domain is to increase AI literacy. Research among medical professionals has shown that when literacy is low, practitioners are more likely to accept recommendations from an AI system.16 By understanding what data processing approaches the software uses, and, ideally, its data sources, advisers can be better equipped to make informed choices about AI outputs.

Another way to protect your practice is to discern when it is appropriate to use a generative AI system and when to avoid it. A recent New York court case provides a good starting point. An adviser hired as an expert witness used a generative AI system to support his financial calculations in an estates dispute. He disclosed his AI use and arguably used it prudently, employing the system only to check his calculations. The court nevertheless declined to admit the report into evidence, in part because the court could not recreate the same answer using the system and the adviser could not explain what sources the AI system relied upon to reach its conclusion.17

In short, generative AI works differently from a calculator, even though it can assist with calculations. If you plan to use a generative system, consider running your own experiments, testing the limits of accuracy and trying out multiple systems for comparison. Generative AI is known to be bad at math; experiencing those limitations for yourself can help you understand when it is inappropriate to rely on the system.18 Recognizing that AI is more like a smart colleague who can help you brainstorm, rather than an authoritative source of information, can be instrumental in combating overreliance.
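As one way to run such an experiment, here is a hypothetical Python harness for probing a system’s arithmetic. The ask_model function is a placeholder, not a real API; wire it to whatever chat interface or vendor SDK you actually use before running.

```python
import random

def ask_model(prompt: str) -> str:
    """Placeholder: route the prompt to the generative system under test."""
    raise NotImplementedError("Connect this to your chosen AI system.")

def run_arithmetic_probe(trials: int = 20) -> None:
    random.seed(42)                      # reproducible test cases
    failures = 0
    for _ in range(trials):
        a = random.randint(1_000, 9_999)
        b = random.randint(1_000, 9_999)
        expected = a * b                 # ground truth computed directly
        answer = ask_model(f"What is {a} * {b}? Reply with only the number.")
        if answer.strip().replace(",", "") != str(expected):
            failures += 1
    print(f"{failures}/{trials} incorrect")  # your measured error rate

# Repeat across systems and prompt styles; any nonzero failure rate on
# four-digit multiplication tells you when not to treat the model as a
# calculator.
```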

Navigating a Path Toward Ethics in the Next Phase of AI Integration

The future of AI integration comes with promises of agentic systems that seamlessly support an adviser’s workflow, and digital technologies that help provide just-in-time personalized products addressing critical client needs. As speed-to-market pressures drive some business practices, financial professionals are navigating AI deployments at a time when specific regulatory guidance is still under development.

Nevertheless, financial regulators have made clear that the existing rules are technology neutral, applying to firms and professionals irrespective of what type of technology is used.19 With increased sophistication comes new ethical challenges that advisers will confront in this next phase of AI integration.

Clients and regulators alike expect that fiduciary practices continue despite increasingly complex emerging market practices. Navigating a path toward ethical behavior in the next phase of AI integration can only happen with an expanded lens on ethics that promotes system design as integral to client best interest. 

Endnotes

  1. The State of Trust in Financial Services. 2021. The American College Maguire Center for Ethics in Financial Services. https://insights.theamericancollege.edu/ethic-trust-study-2022/
  2. John, Leslie K. 2015, October 16. “We Say We Want Privacy Online, But Our Actions Say Otherwise.” Harvard Business Review. https://hbr.org/2015/10/we-say-we-want-privacy-online-but-our-actions-say-otherwise.
  3. Pattit, J.M. Forthcoming 2026. “Trust and the Client’s Intent to Use Financial Tools that Utilize AI.” The American College Cary M. Maguire Center for Ethics in Financial Services. www.theamericancollege.edu/centers-of-excellence/center-for-ethics-in-financial-services/news-research.
  4. FINRA. 2025. FINRA Annual Regulatory Oversight Report. www.finra.org/rules-guidance/guidance/reports/2025-finra-annual-regulatory-oversight-report/third-party-risk#_ai-trends; external chatbot examples: FINRA. 2020. “AI Applications in the Securities Industry.” www.finra.org/rules-guidance/key-topics/fintech/report/artificial-intelligence-in-the-securities-industry/ai-apps-in-the-industry.
  5. Filabi, Azish, Nick Masi, Ellie Pavlick, and A. R. Picone. 2024, Winter. “Adaptable Artificial Intelligence.” Journal on AI Policy and Complex Systems 9 (1). www.policyjournal.net/adaptable-artificial-intelligence.html.
  6. E.U. Artificial Intelligence Act. n.d. “Annex III: High-Risk AI Systems Referred to in Article 6(2).” https://artificialintelligenceact.eu/annex/3/
  7. Colorado Senate Bill 24-205. https://leg.colorado.gov/bill_files/47770/download
  8. Filabi, A., S. Duffy, and S. Parrish. 2026. “AI for Financial Advice: Mitigating Conflicts of Interest When Steering Consumers.” Journal of Financial Regulation and Compliance. https://doi.org/10.1108/JFRC-06-2025-0151
  9. Ibid.
  10. United States of America Before the Securities and Exchange Commission, Administrative Proceeding File No. 3-20466. www.sec.gov/files/litigation/admin/2021/ia-5826.pdf
  11. CFP Board of Standards. n.d. “Generative AI Ethics Guide: A Checklist for Upholding the Code and Standards.” www.cfp.net/-/media/files/cfp-board/standards-and-ethics/compliance-resources/cfp-board-ethics-and-generative-ai-checklist.pdf.
  12. Passi, Samir, and Mihaela Vorvoreanu. n.d. “Overreliance on AI: Literature Review.” Microsoft. www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf.
  13. Green and Chen 2019b, as cited in www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf.
  14. Kahneman, Daniel, Dan Lovallo, and Olivier Sibony. 2011, June. “Before You Make That Big Decision . . .” Harvard Business Review. https://economy4humanity.org/commons/library/biases.pdf.
  15. Read more about navigating blind spots in this article from the author and Caterina Bulgarella, Ph.D., a Leadership Strategy Fellow at the American College Maguire Center for Ethics in Financial Services, at www.theamericancollege.edu/knowledge-hub/insights/build-trust-in-financial-services.
  16. Jacobs et al. 2021, as cited in www.microsoft.com/en-us/research/wp-content/uploads/2022/06/Aether-Overreliance-on-AI-Review-Final-6.21.22.pdf.
  17. In the Matter of the Accounting by Susan F. Weber as the Trustee of the Michael S. Weber Trust for the benefit of Owen K. Weber under Article SEVENTH of the Last Will and Testament of Michael S. Weber, Decedent. File No. 1845-4/B. https://law.justia.com/cases/new-york/other-courts/2024/2024-ny-slip-op-24258.html.
  18. Carbone, Lisa. 2025, December 18. “Advancing Mathematics Research with Generative AI.” https://arxiv.org/html/2511.07420.
  19. See www.finra.org/rules-guidance/notices/24-09; also, Federal Reserve. 2023. “Interagency Guidance on Third-Party Relationships: Risk Management.” www.federalreserve.gov/supervisionreg/srletters/SR2304a1.pdf