In a recent webinar I delivered, I asked attendees whether they were using AI tools in their work and/or personal life. Out of more than 550 attendees, 73% indicated they were using some form of AI. That result didn’t surprise me. In many firms I work with, AI adoption isn’t driven by a formal strategy. Instead, it’s happening quietly through everyday shortcuts staff use to meet deadlines. This quick poll reinforced what other studies, and my own experience as a CPA and Practice Advancement Coach at Woodard, have already shown: Artificial Intelligence is firmly embedded in the accounting profession.
That reality creates both opportunity and risk.
Despite widespread use, we still lack cohesive, profession-specific guidelines governing how AI should be used in accounting practices. While several states have begun passing consumer-focused AI legislation, there is no unified framework to guide firms on acceptable use, oversight, or accountability.
As a result, accounting practices are operating in a temporary “AI Wild West.” Adoption is moving faster than regulation, client expectations are evolving, and insurers are actively adjusting coverage terms to reduce their exposure. Firms that emerge strongest through this “AI revolution” will not be those with the most AI tools. They will be the practices with clear governance, documented standards, and consistent review processes.
AI governance refers to the policies, controls, and oversight that determine how AI tools are selected, used, reviewed, and documented within a practice. For accounting practices, governance is not about restricting innovation; it is about protecting quality, confidentiality, and professional judgment.
Without governance, accounting practices face increased exposure in four areas: data privacy, accuracy of outputs, client trust, and defensibility of work product. These risks exist regardless of firm size or specialty.
In the United States, there is no unified “AI law for accounting practices.” Instead, practices must monitor a growing patchwork of state-level legislation alongside emerging federal guidance.
Several states illustrate how uneven the compliance landscape has become:
These are just a few examples and are not intended to turn firm leaders into legal experts. However, they demonstrate a practical reality: even firms located in states without AI-specific laws may be affected through clients, vendors or remote staff operating across state lines. Firms should understand the AI-related rules that apply where they operate, where their staff work, and where clients are located.
At the federal level, Executive Order 14365, issued December 11, 2025, signals an effort to move toward a national AI policy framework. The order emphasizes a “minimally burdensome” approach and directs the U.S. Attorney General to establish an AI Litigation Task Force to challenge certain state laws.
An executive order does not override existing state legislation. Instead, it sets priorities, influences agency action, and increases the likelihood of legal challenges. For accounting practices, this means uncertainty may persist for several years.
AI risk is not limited to compliance. It is a quality and reputation issue that already affects professional services firms.
Generative AI can produce polished, confident language that appears credible while containing fabricated facts, citations, or conclusions, commonly called hallucinations. Without strong review controls, these errors can reach clients.
These incidents reflect what happens when AI use lacks structure: unclear inputs, insufficient review and no documented accountability.
The AICPA does not recommend banning AI or deploying it indiscriminately. Its guidance emphasizes governance, documentation, disclosure, and professional judgment.
Key principles include:
In practical terms, AICPA guidance points firms toward structured decision-making rather than one-size-fits-all rules.
Most practices do not need an extensive AI manual to get started. A concise governance document can significantly reduce risk and improve consistency. The point is to provide direction and guardrails pertaining to acceptable use for team members and other stakeholders.
Below is a concise framework you can use to get started building a policy for your practice.
| Risk Area | Governance Action | Review Standard |
| --- | --- | --- |
| Data privacy | Prohibit confidential client data in public AI tools | Periodic compliance review |
| Tool selection | Maintain an approved AI tools list | Annual reassessment |
| Output quality | Require human verification of AI-generated content | Engagement-level sign-off |
| Documentation | Record AI use in workpapers when material | Consistent audit trail |
CPA.com’s security and risk guidance provides additional structure around privacy, validity, transparency and accountability that firms can adapt to their policies. Keep in mind that this is NOT a one-and-done effort. Because the landscape is changing so quickly, ownership should be assigned to an individual or a committee to keep the policy up to date as laws and regulations evolve.
Insurance considerations when using AI in accounting work
AI governance does not end with internal policy. Practices should also understand how their insurance coverage responds to AI-related risk.
Verisk has described an ISO general liability multistate filing addressing emerging risks, including generative AI, with a proposed effective date of January 1, 2026. This reflects broader changes in how insurers view AI exposure.
Practices should not assume existing policies will respond as expected. Remember the old adage: ignorance of the law is no excuse.
Ask your insurance broker, in writing, whether your professional liability, cyber, and general liability policies include AI-related exclusions or endorsements. Request clarification on how those provisions apply to current services. You should also consider whether to add a clause about the use of AI in your engagement letter. Practitioners are responsible for conducting due diligence to protect themselves and their clients, including seeking appropriate legal advice when drafting these types of clauses.
Implementing AI governance may slow innovation in your practice, but it’s vital to ensure AI use is consistent, reviewable, and defensible.
A practical implementation approach includes:
The current environment may feel like a Strange New World, but firms that establish clear guardrails now will be better positioned as regulation, professional standards and insurance frameworks continue to develop. The June 2026 Scaling New Heights conference in Orlando will provide practical guidance on building a sound AI policy framework, including a dedicated track focused on the responsible and secure implementation of AI in accounting practices.
This article was written with the assistance of AI.