The Truth About AI in Tax Practice
Before enjoying the benefits, it is essential to understand the risks
I’m excited to welcome technology and data security guru Brad Messner, EA, as a guest writer for Tom Talks Taxes. Brad owns a tax firm and is pursuing a PhD in business with a concentration in information systems. His research focuses on cybersecurity, blockchain technology, and accounting information systems.
Some tax professionals purport to be technology experts and provide technology advice and education. However, data security issues are often omitted or minimized in these areas because they aren’t a sexy topic. I admit this is not a topic I am well versed in; Brad is my go-to person for my tech and security questions.
A significant number of tax professionals believe data security best practices are excessive and ignore inconvenient recommendations. Our firms handle sensitive information that, in the wrong hands, could cause severe financial damage to our clients. A data breach could end your business and financially ruin you. I’m not saying this to scare you; this is simply the reality in which our firms operate.
Brad’s message is urgently needed. He created Financial Guardians (which provides both free information as well as paid education and services) to provide accurate and timely information about these issues to the tax and accounting community. I hope you enjoy the article and take action as needed in your firm. -Tom
There has been much discussion lately about using artificial intelligence (AI) in tax preparation, planning, and client communication; however, there has also been some very misleading and misguided commentary around the security implications of its use. AI can benefit a tax firm from many angles, but both extremes, unwarranted fearmongering on one side and overzealous app integrations on the other, pose greater threats to a firm than a well-planned and properly implemented security program does.
An article recently published by a continuing education provider expressed concerns about Microsoft’s recent rollout of Copilot, its AI companion. Similarly, over the last several months, concerns have been raised about other AI platforms, including ChatGPT and Google Gemini.
Are AI platforms secure? Is there a risk for a firm to use AI? Is there a Circular 230 concern with AI security? As with all topics in tax preparation, the truth lies in the middle.
How AI Works
To properly understand AI's security implications, we first need a basic understanding of how it works. Artificial Intelligence platforms operate by consuming large amounts of labeled data and applying complex algorithms to detect patterns and correlations. These patterns then allow the system to make predictions about future events, with the ultimate goal of mimicking human decision-making and actions.
The first point of importance is the labeling of data. These labels provide context for pattern detection. After reviewing hundreds or thousands of labeled data feeds, patterns emerge; this process is called training. It could be compared to a gigantic version of Win, Lose, or Draw: the closer new data matches known patterns, the easier it is to identify. For example, after seeing twenty pictures of dogs, you could quickly identify a dog in a new picture; however, you would need a much larger dataset to differentiate between breeds.
A second important point is the accuracy of the data. If large volumes of accurate data are fed into the system, the detected patterns have a higher chance of being well correlated; if the data is inaccurate, pattern recognition is hampered. This means that if tax return data from thirty firms were imported, the accuracy of the AI model would depend on the accuracy of those firms’ work.
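To make the labeling and accuracy points concrete, here is a minimal sketch in Python using the scikit-learn library and purely synthetic data (not any vendor’s actual model or training pipeline). It trains the same simple model twice, once on accurate labels and once on partially mislabeled data, and shows how label quality drives the quality of the resulting predictions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# 1,000 synthetic "records," each reduced to two numeric features and labeled 0 or 1.
X = rng.normal(size=(1000, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(int)  # accurate labels

# Simulate sloppy source data by flipping 30% of the labels at random.
noisy = y.copy()
flip = rng.random(len(y)) < 0.30
noisy[flip] = 1 - noisy[flip]

# Hold out a test set, then train the same model on each version of the labels.
train_idx, test_idx = train_test_split(np.arange(len(y)), random_state=0)
for name, labels in [("accurate labels", y), ("30% mislabeled", noisy)]:
    model = LogisticRegression().fit(X[train_idx], labels[train_idx])
    score = accuracy_score(y[test_idx], model.predict(X[test_idx]))
    print(f"{name}: {score:.2f} accuracy on clean test data")
```

The same logic applies at scale: a model trained on returns from thirty firms can only be as reliable as the labeling and accuracy of the data those firms supplied.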
Finally, each AI system uses unique and proprietary algorithms for pattern recognition and correlation calculation. This programming is, essentially, what separates many of the existing platforms from each other.
AI Security
While it is not something many people think about, AI platforms are designed around different security configurations. Open AI platforms are those where code, algorithms, and even data can be openly shared from one user to the next or between organizations. This allows for a significant increase in the amount of data collected and, assuming the incoming data is valid, lets the system build more powerful patterns and connections. Closed AI, on the other hand, takes a security-first approach and limits how much data or algorithm design is shared. While closed platforms provide a step up in security, the amount of data imported into them can be limited or stunted.
However, modern AI systems do allow for a hybrid approach that pulls data and algorithms from an open platform and couples them with data that remains locked within the closed system. While this makes a lot of sense from a security standpoint, many platforms view it as ‘selfish’ and typically charge a premium or throttle usage.
One key deciding factor when selecting an AI platform is therefore whether it is open or closed. A final concern, often overlooked when considering the security of AI systems, is the security protocols of the vendor hosting or providing the platform. Microsoft and OpenAI have both openly acknowledged data breaches within the last year. The impact of a breach is no different whether the system is open or closed; data is data to an attacker.
Circular 230 and More
If we look to the IRS for guidance, one of the first documents typically referenced is Circular 230, which requires practitioners to “…exercise due diligence in preparing returns…” However, many fail to recognize that we are given additional guidance through IRC §7216, which addresses the knowing or reckless disclosure of tax return information, and through Publication 4557, Safeguarding Taxpayer Data, which outlines in significant detail the lengths practitioners must go to in order to secure and protect the data entrusted to them. None of these documents, however, takes a direct stance on the use of AI.
Instead, the IRS’s approach is that data must be encrypted and secured, passwords must be strong, backups should be performed, and access should be limited. Beyond those and a few minor operational items, the IRS’s takeaway is that practitioners must exercise caution in selecting the software and platforms they use. This applies to AI systems just as it does to tax preparation software, portals, data storage, and every other tool used by tax professionals.
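As one concrete illustration of the encryption point, here is a minimal Python sketch using the widely available cryptography library to encrypt a client document before it is stored or sent. It is a simplified example for illustration only, not a complete security program, and the file names are hypothetical:

```python
from cryptography.fernet import Fernet

# Generate a key once and store it separately from the data it protects
# (for example, in a password manager or key vault), never alongside the files.
key = Fernet.generate_key()
fernet = Fernet(key)

# Encrypt a client document before saving or transmitting it.
with open("client_return.pdf", "rb") as f:          # hypothetical file name
    encrypted = fernet.encrypt(f.read())
with open("client_return.pdf.enc", "wb") as f:
    f.write(encrypted)

# Later, only someone holding the key can recover the original document.
with open("client_return.pdf.enc", "rb") as f:
    original = fernet.decrypt(f.read())
```

The specific tool matters less than the practice: sensitive data should never sit or travel in plain, readable form.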
FTC Safeguards
Of equal compliance concern to the Internal Revenue Service’s requirements, the Gramm-Leach-Bliley Act and, subsequently, the FTC Safeguards Rule provide greater depth and significantly more security requirements. The FTC Safeguards Rule was first implemented in 2003, obligating financial institutions to protect personally identifiable information (PII). Nearly two decades later, in 2021, a significant overhaul was rolled out with an implementation deadline, after an extension, of June 2023. These changes require financial institutions to implement a comprehensive security plan with specific requirements. Failure to comply could result in thousands of dollars in fines or even the risk of prison time.
One of the FTC Safeguards Rule requirements expects practitioners to monitor their service providers, which include any software, tools, or platforms with access to PII. These providers must be selected for suitability, assessed, and monitored on an ongoing basis. Selecting a service provider requires a thorough review of its security practices, spelled-out expectations, monitoring methods, and a schedule for reassessment. The Designated Qualified Individual (DQI) is required to periodically reassess each service provider for suitability, and each service provider is required to maintain its own security plan.
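One lightweight way to keep that record keeping organized, offered purely as an illustration and not as an official FTC template, is to track each provider in a simple structure capturing the review findings and the next reassessment date:

```python
from dataclasses import dataclass
from datetime import date

@dataclass
class ServiceProviderAssessment:
    """One DQI review record for a provider with access to PII (illustrative only)."""
    provider: str                  # e.g., tax software, portal, or AI platform
    has_pii_access: bool
    security_plan_reviewed: bool   # provider maintains and shared its own plan
    expectations_documented: bool  # contract or agreement spells out requirements
    monitoring_method: str         # how the firm watches for issues
    last_reviewed: date
    next_review: date              # periodic reassessment required by the Rule

# Hypothetical entry showing how one provider might be logged.
example = ServiceProviderAssessment(
    provider="Hypothetical AI drafting tool",
    has_pii_access=True,
    security_plan_reviewed=True,
    expectations_documented=True,
    monitoring_method="Quarterly security report and breach notifications",
    last_reviewed=date(2024, 6, 1),
    next_review=date(2025, 6, 1),
)
```

Whether a firm uses a spreadsheet, a database, or a written log, the point is the same: the assessments must be documented and revisited on a schedule, not performed once and forgotten.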
Any AI platform selected must undergo a rigorous review by the DQI before implementation. This process can take significant time, and the platform should not be implemented until a sufficient passing evaluation can be obtained. Unfortunately, many practitioners get overzealous about new products or features and implement them before an exhaustive review, placing all of their data or processes at risk.
This last point is addressed in the recent continuing education provider article about Microsoft Copilot. The article correctly stated that Microsoft Copilot should not be implemented until a firm’s security team can review and properly assess its security needs. However, the same article did not provide adequate context or corrective steps. The author had ignored multiple notifications sent to their Microsoft 365 administrator with instructions for disabling Copilot for their organization, and, as such, the platform was rolled out on Microsoft’s publicly announced schedule. Had the author stayed current on the settings changes, they would have had ample time to turn the feature off.
Trusting Big Tech
One of the outstanding questions practitioners must address is how much trust to place in technology providers. There is data on both sides of the debate over whether a technology company’s self-declaration of compliance is sufficient or whether practitioners should conduct their own assessments. A plain reading of the FTC Safeguards Rule, however, makes clear that the DQI is expected to conduct an independent assessment and not merely accept a marketing website that claims compliance.
In fact, in many cases, service providers have been shown to understate the risks associated with a breach. We have seen this with Microsoft in recent months, with LastPass in 2022, and even with industry-specific software applications. These organizations may downplay the risks of a breach or put a marketing spin on it, reducing the perceived risk. Firms should assess these risks independently or partner with an organization that can provide unbiased, third-party support.
Beta Testing
Most technology platforms, especially those implementing AI, have active alpha and beta programs. While it can be exciting to be the first to access new features, many of these beta programs have not been sufficiently tested and can place a firm’s data at risk. Unless a practitioner has a sufficient technology background or a strong partnership with an IT provider, they should not participate in beta programs. There have already been data breaches in this industry caused by service providers rushing out expedited, improperly executed beta programs.
AI Worms
Despite AI dominating headlines across nearly all news and media sites, little has been reported about malware designed to attack these systems. Earlier this year, researchers created Morris II, a computer worm that targets AI systems. The worm is designed to ‘burrow’ through AI systems, collect data, and transmit that data to third parties, and it operates on both open and closed AI platforms. While this is the first major push of powerful malware targeting AI, it will surely not be the last. The next round of cybersecurity threats will use AI itself to find and exploit vulnerabilities faster.
The Double-Bind
With the increased opportunities and efficiencies AI can provide, and the potential security risks that come with it, practitioners are left in a double bind. Implemented correctly, with proper safeguards and assessment, AI can be an excellent tool for the industry. However, without the assistance of a technology provider with strong knowledge of this industry, the risks associated with improper configuration can outweigh the benefits. Practitioners who want to implement AI should take sufficient continuing education to understand the technical aspects of these tools, not just an ethics class that covers the legalities, and should invest in software and hardware that properly support artificial intelligence.
Learn More from Brad
Join Financial Guardians today for up-to-date security and technology information. There are both free and paid memberships as well as security-focused education.
Brad presented a 2 CE/CPE overview, Data Security for the Tax Business, for Compass Tax Educators.
Share Your Thoughts!
As a paid subscriber, you can discuss this topic in the comments section. Please keep the discussion related to this edition’s topic.
How secure do you think QBO is?