Journal of Financial Planning: May 2025
Emma I. Foulkes, CFP®, CRPC, is a partner and chief technology officer at Collective Wealth Partners (www.collectivewealthpartners.com) with over 25 years of experience in the financial services industry. She has always been at the intersection of technology and finance, combining her degree in information systems with a passion for innovation and strategic planning. An early adopter of technology, Emma had a computer in her home as early as 1978, which fueled her deep understanding of data security, systems integration, and digital transformation—insights she seamlessly applies to wealth management. Her practice specializes in comprehensive financial planning and investment management for high-net-worth clients, helping them navigate complex financial landscapes with forward-thinking strategies and cutting-edge solutions.
In my younger adult years, I was a multi-channel communications operator for the U.S. Army. I had a Secret-level security clearance and was authorized to transport confidential documents. I was assigned to a military intelligence brigade as a driver for a full-bird colonel. I take privacy and security seriously. I’m also a modest lover of science fiction, and I have a healthy admiration for technology. We are now in an age where we can merge more technology into our practices. I want to show you how to stay compliant.
My Introduction and Use of Gen AI
I began using generative AI for my personal home projects in early 2023. Gen AI helped with measurements and food recipe conversions and served as our family movie trivia referee. As much as I enjoyed tinkering with gen AI, I did not incorporate it into my financial planning firm. At the time, FINRA and the SEC did not have guidance. I’m not much of a risk taker; my preference is to let the dust settle and wait and see before adopting new technology for work. I’d rather let others try it out first. I find it important to learn from the mistakes of others.
Today, FINRA, the SEC, and the CFP Board have been providing guidance on ways we may want to consider using generative AI. As financial planners, it has been drilled into us to guard our clients’ information. Use generative AI cautiously when starting out. Start by having it take notes during client meetings and provide summarized notes. Generative AI can also assist with marketing data and communications. It’s even permissible to use AI for initial investment or planning research. We can coexist harmoniously with gen AI and have an efficient and safe practice if we, at minimum, avoid the following compliance issues.
Compliance Issue 1: Auto-Generated Communications with Clients
Website chatbots, while convenient, could have unintended consequences. Clients may use a chatbot for what they think is a simple question. As planners, we have enough experience to know “simple questions” could require computations and a complex answer. Generative AI is still in its infancy, and we can’t expect it to know the difference between giving generic information and providing advice. There’s a risk of AI misinterpreting a client request and providing what could be interpreted as advice.
AI can also mistakenly divulge sensitive information to the wrong person. If the AI powering the chatbot collects information such as names, email addresses, or phone numbers, it can mistakenly use this data in responses to other clients’ inquiries. AI can also suggest unrealistic timelines or returns.
Planners and firms are responsible for any output from an AI-powered tool. Chatbots used with clients should be limited to simple tasks like scheduling or basic inquiries. Remember, auto responders should not make statements that are confusing or misleading. Inaccurate messaging can erode clients’ confidence and create potential compliance issues.
Follow FINRA Rule 2210 guidance1 to “provide a sound basis for evaluating the facts ‘in regard to any particular security or type of security, industry, or service’ and include all ‘material fact[s] or qualification[s]’ necessary to ensure such communications are not misleading” (“Regulatory Obligations”).
Compliance Issue 2: Using Gen AI to Replace Financial Planners’ Expertise
The CFP Board wants CFP® professionals to understand that planners are always responsible for all advice and guidance generated by AI. Generative AI is not a replacement for a financial planning professional’s expertise and should not be used as a replacement for financial planning software, Excel spreadsheets, or other tools approved by firms.
Think of generative AI as a 5-year-old genius. AI is brilliant; it can do mathematical computations and many other things faster than we can. However, in 2025, AI still has limitations. It lacks a financial planner’s worldly experience and comprehensive planning expertise. AI is smart, but for now, not as smart as we are. The data used to train AI models can also cause them to unintentionally learn biases, which can produce inaccurate reports and data. Unlike a financial planner, generative AI does not have ethics or experience. AI cannot replace a planner’s critical thinking. It can be used to help but never to replace a financial planner’s expertise.
Compliance Issue 3: Failing to Proofread AI-Generated Meeting Summaries
Using generative AI for taking notes during client meetings is OK, provided the tool has the proper security measures (see issue 7). It’s important to remember that AI can, and will, make mistakes. The financial planner who attended the client meeting should review the AI-generated meeting notes and summary. It’s the planner’s responsibility to make the needed edits. Generative AI’s notes and summaries can contain mistakes or leave out important facts. The summary should accurately reflect the agreements made with the client during the meeting.
The planner must ensure all the important information has been recorded and any jokes or sarcasm omitted. AI may not understand jokes made during meetings, and you don’t want a joke to end up being a serious recommendation. Quitting your job and moving to Bora Bora should not be a recommended action item because the NY Giants won the Super Bowl. AI misinterpretations could lead to an embarrassing conversation with your client or compliance supervisor. Financial planning professionals are responsible for information in client meeting summaries.
Firms and independent planners should develop easy-to-understand policies for reviewing and approving meeting summaries before sending them to clients. Failing to implement and follow a policy exposes both the financial planner and firm to compliance risks and legal liabilities. Inaccurate meeting records can constitute a compliance breach under FINRA Rule 2210, which requires that all client communications be fair, balanced, and not misleading.
Compliance Issue 4: AI Washing and Conflicts of Interest
Use the power of AI for good, not evil. “AI washing” is a new term similar to “greenwashing.” AI washing occurs when a company or planner misrepresents, or pretends to use, AI capabilities.
In everyday practice, financial planning practitioners present charts, graphs, and reports to clients. Generative AI can be used to assist with compiling and presenting data, but financial planners must review all AI-generated data before presenting it to a client. Inherent biases from training the AI model can lead to AI output that may not be accurate. In some cases, AI can be trained to promote a service or position that favors the planner or firm financially. Don’t use AI-generated results to lead a client to a predetermined outcome. It’s unethical and will lead to compliance issues and fines. Incorrect AI results should never be used to boost a firm’s profitability or ignore conflicts of interest.
The SEC has handed down fines for AI washing. Between March 2019 and October 2023, a pair of investment companies “recognized but failed to adequately address vulnerabilities in their computer-based trading models that could materially impact client returns.”2 The firms settled charges that they violated the Investment Company Act, the Investment Advisers Act, and Exchange Act Rule 21F-17(a). The firms agreed to pay a $90 million civil penalty, to cease and desist from any future violations, and to fully cooperate with any additional investigations.
In another case, in March 2024, the SEC fined two investment advisory firms for “making false and misleading statements about their purported use of artificial intelligence (AI).”3 The firms agreed to settle the SEC’s charges and pay a combined $400,000 in civil penalties.
Compliance Issue 5: Ignoring the Firm or Broker–Dealer’s Gen AI Policies
Broker–dealers and RIAs can’t turn a blind eye to generative AI technology. Planners and staff will need guidance on the tools they should or shouldn’t be using for research or for entering and storing client information. AI can’t be left to its own devices; it must be monitored. Protect your firm and clients from value manipulations, deception, and possible impersonation.
Before using gen AI tools, it’s advisable that planners read and comprehend their firms’ policies on generative AI. Ignoring policies will expose the planner and their firms to compliance issues, client complaints, and possible fines from FINRA or the SEC.
If you’re an independent planner, use the SEC’s guidance to develop your own policy or guidelines for using generative AI. Financial planners should keep records of how they are using gen AI tools with client data. The SEC requires registered broker–dealers, investment companies, and investment advisers to “adopt written policies and procedures that address administrative, technical, and physical safeguards for the protection of customer records and information.”
Compliance Issue 6: Ignoring Vendors’ Gen AI Privacy Policies
Planners should use extreme caution with any non-customized generative AI platform. Protect your clients’ data by reading your AI vendors’ privacy policies. If it’s free, let it be: data is a form of currency. Certain gen AI companies may not charge fees; instead, they may accept user inputs as payment. Make sure you have a clear understanding of what a vendor is doing with your data. How are they storing it? Is it secured? Are they sharing your data with a third party? Not knowing these answers can put the future of the firm or a planner’s practice in jeopardy. When working with client data, stay away from free generative AI tools.
It is also advisable to verify the country of origin of the gen AI developer. Be aware of the relationship the United States has with that country, especially regarding cybersecurity. Visit the official websites of government agencies like the State Department and Federal Bureau of Investigation to see if the vendor or vendor’s country is associated with bad actors or cybercrime rings.
When in doubt, refer to the SEC Reg S-P,4 which requires registered broker–dealers, investment companies, and investment advisers to “adopt written policies and procedures that address administrative, technical, and physical safeguards for the protection of customer records and information” (“INTRODUCTION AND BACKGROUND”).
Compliance Issue 7: Using Gen AI Tools that Lack Encryption
Think of 256-bit encryption as a dead bolt on a server’s doors: client information should be kept out of any tool that lacks it. The use of 256-bit encryption is the standard for FINRA, the SEC, HIPAA, and several other government regimes. AI tools, and any other tech solution used for client work, should have at least 256-bit encryption. This is why a tool such as an unsecured chatbot that auto-generates information can be troublesome. Remember, the data must be secured to and from your device, just like an email with sensitive client information: client data must be encrypted both in transit and at rest. The financial planner is responsible for unsecured data. When in doubt, use the “Reasonable Use” guidelines to protect client data from unauthorized users.
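For readers who want to see what “256-bit encryption” means concretely: in practice it usually refers to a symmetric cipher such as AES with a 256-bit key. The sketch below, in Go, is purely illustrative (it is not a vendor-vetting tool, and the sample plaintext is hypothetical); it shows an AES-256-GCM round trip, the kind of encryption a compliant tool should apply to client data at rest and in transit.

```go
package main

import (
	"crypto/aes"
	"crypto/cipher"
	"crypto/rand"
	"fmt"
)

// sealAndOpen encrypts the plaintext with AES-256-GCM using a fresh
// random key and nonce, then immediately decrypts it, returning the
// round-tripped bytes. In a real system the key would be managed by a
// key-management service, never generated and discarded per message.
func sealAndOpen(plaintext []byte) ([]byte, error) {
	key := make([]byte, 32) // 32 bytes = a 256-bit key
	if _, err := rand.Read(key); err != nil {
		return nil, err
	}
	block, err := aes.NewCipher(key) // key length selects AES-256
	if err != nil {
		return nil, err
	}
	gcm, err := cipher.NewGCM(block)
	if err != nil {
		return nil, err
	}
	nonce := make([]byte, gcm.NonceSize()) // unique per encryption
	if _, err := rand.Read(nonce); err != nil {
		return nil, err
	}
	ciphertext := gcm.Seal(nil, nonce, plaintext, nil)
	// GCM also authenticates: tampering makes Open return an error.
	return gcm.Open(nil, nonce, ciphertext, nil)
}

func main() {
	out, err := sealAndOpen([]byte("client meeting note"))
	if err != nil {
		panic(err)
	}
	fmt.Println(string(out))
}
```

The point of the sketch is the vetting question it suggests: a vendor claiming “bank-level security” should be able to say which cipher and key length protect your clients’ data, both on their servers and on the wire.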
Generative AI Is a Tool
We have not yet reached a future where we can rely 100 percent on generative AI’s output. Generative AI will not replace financial planners any time soon. It can’t replace our expertise. Financial planners can use gen AI as a support tool to run an efficient and safe practice, but use AI responsibly and follow the above rules. Don’t use AI for shortcuts. Using AI will require processes and procedures to ensure compliant oversight. Planning professionals can enhance and improve our clients’ experience while keeping their data safe and secure.
Endnotes
1. FINRA. 2021. 2021 Report on FINRA’s Examination and Risk Monitoring Program. www.finra.org/rules-guidance/guidance/reports/2021-finras-examination-and-risk-monitoring-program/communications-with-public.
2. Morrison Foerster. 2025, February 26. “Top 5 SEC Enforcement Developments for January 2025.” www.mofo.com/resources/insights/250226-top-5-sec-enforcement-developments.
3. Securities and Exchange Commission. 2024, March 18. “SEC Charges Two Investment Advisers with Making False and Misleading Statements About Their Use of Artificial Intelligence.” www.sec.gov/newsroom/press-releases/2024-36.
4. Securities and Exchange Commission. Regulation S-P, 17 CFR Parts 240, 248, 270, and 275. www.sec.gov/files/rules/final/2024/34-100155.pdf.