AI Governance for Accounting Firms: Risk & Rules

Posted by Heather Day Satterley on Feb 4, 2026 11:44:10 AM

In a recent webinar I delivered, I asked attendees whether they were using AI tools in their work and/or personal life. Out of more than 550 attendees, 73% indicated they were using some form of AI. That result didn’t surprise me. In many firms I work with, AI adoption isn’t driven by a formal strategy. Instead, it’s happening quietly through everyday shortcuts staff use to meet deadlines. This quick poll reinforced what other studies, and my own experience as a CPA and Practice Advancement Coach at Woodard, have already shown: Artificial Intelligence is firmly embedded in the accounting profession.

That reality creates both opportunity and risk.

Despite widespread use, we still lack cohesive, profession-specific guidelines governing how AI should be used in accounting practices. While several states have begun passing consumer-focused AI legislation, there is no unified framework to guide firms on acceptable use, oversight, or accountability.

As a result, accounting practices are operating in a temporary “AI Wild West.” Adoption is moving faster than regulation, client expectations are evolving, and insurers are actively adjusting coverage terms to reduce their exposure. Firms that emerge strongest through this “AI revolution” will not be those with the most AI tools. They will be the practices with clear governance, documented standards, and consistent review processes.

Why AI governance matters for accounting firms

AI governance refers to the policies, controls, and oversight that determine how AI tools are selected, used, reviewed, and documented within a practice. For accounting practices, governance is not about restricting innovation; it is about protecting quality, confidentiality, and professional judgment.

Without governance, accounting practices face increased exposure in four areas: data privacy, accuracy of outputs, client trust, and defensibility of work product. These risks exist regardless of firm size or specialty.

The current legal landscape: no single AI rulebook

In the United States, there is no unified “AI law for accounting practices.” Instead, practices must monitor a growing patchwork of state-level legislation alongside emerging federal guidance.

State-Level AI laws affecting accounting and bookkeeping firms

Several states illustrate how uneven the compliance landscape has become:

  • Texas: Effective January 1, 2026, the Texas Responsible Artificial Intelligence Governance Act (HB 149) regulates certain AI system uses and introduces potential civil penalties. Firms operating in Texas or serving Texas-based clients should treat this law as part of their risk environment.
  • Utah: Utah amended its AI framework through SB 226, shifting its focus toward high-risk generative AI interactions and clarifying disclosure expectations in defined circumstances.
  • Colorado: Colorado’s high-risk AI law (SB 24-205) had its effective date extended to June 30, 2026, through follow-on legislation SB25B-004.

These are just a few examples, and the point is not to turn firm leaders into legal experts. They do, however, demonstrate a practical reality: even firms located in states without AI-specific laws may be affected through clients, vendors, or remote staff operating across state lines. Firms should understand the AI-related rules that apply where they operate, where their staff work, and where their clients are located.

Federal signals and Executive Order 14365

At the federal level, Executive Order 14365, issued December 11, 2025, signals an effort to move toward a national AI policy framework. The order emphasizes a “minimally burdensome” approach and directs the U.S. Attorney General to establish an AI Litigation Task Force to challenge certain state laws.

An executive order does not override existing state legislation. Instead, it sets priorities, influences agency action, and increases the likelihood of legal challenges. For accounting practices, this means uncertainty may persist for several years.

AI risk is not theoretical

AI risk is not limited to compliance. It is a quality and reputation issue that already affects professional services firms.

Generative AI can produce polished, confident language that appears credible while containing fabricated facts (also called hallucinations), citations, or conclusions. Without strong review controls, these errors can reach clients.

Case examples highlighting AI risk

  • Deloitte Australia: Deloitte agreed to partially refund the Australian government after errors were discovered in a report that included fabricated quotes and references. Public reporting indicated that generative AI tools were involved, raising concerns about review and controls, though Deloitte did not attribute every error to AI.
  • Academic Submissions Using AI: In a separate incident, academics apologized after using Google Bard to generate false case studies alleging misconduct by large accounting firms. The claims were entirely fabricated and required formal correction.
  • Legal filing with AI-generated citations (Mata v. Avianca, U.S.): In a widely cited sanctions decision, attorneys submitted a court filing that included non-existent cases and fabricated quotes generated by ChatGPT. The judge imposed sanctions, underscoring that AI output must be independently verified before submission and that accountability sits with the humans signing off.

These incidents reflect what happens when AI use lacks structure: unclear inputs, insufficient review and no documented accountability.

What AICPA AI guidance is really saying

The AICPA does not recommend banning AI or deploying it indiscriminately. Its guidance emphasizes governance, documentation, disclosure, and professional judgment.

Key principles include:

  1. Treat AI as a risk issue, not a novelty. Generative AI introduces confidentiality, privacy, accuracy, and security risks that require ongoing oversight.
  2. Embed AI rules within existing security policies. Acceptable AI use should be documented alongside IT and information security standards.
  3. Apply disclosure based on context. There is no universal disclosure requirement. Decisions depend on how AI is used, what data is involved, and client expectations.

In practical terms, AICPA guidance points firms toward structured decision-making rather than one-size-fits-all rules.

A simple AI governance framework for accounting firms

Most practices do not need an extensive AI manual to get started. A concise governance document can significantly reduce risk and improve consistency. The point is to give team members and other stakeholders clear direction and guardrails for acceptable use.

Five questions every AI policy should answer

  • Data rules: What information may be entered into AI tools, and what is prohibited? This typically includes client identifiers, tax returns, payroll data, banking information, and credentials.
  • Approved tools: Which AI tools are permitted, and who is responsible for approving new ones?
  • Human review: What work must be reviewed by a qualified professional before being delivered to a client?
  • Documentation: When should AI use be noted internally, and what audit trail is required?
  • Ownership and training: Who maintains the policy, updates it, and ensures staff are trained?

Practical Governance Checklist

Below is a concise framework you can use to get started building a policy for your practice.

  • Data privacy: prohibit confidential client data in public AI tools (review standard: periodic compliance review)
  • Tool selection: maintain an approved AI tools list (review standard: annual reassessment)
  • Output quality: require human verification of AI-generated content (review standard: engagement-level sign-off)
  • Documentation: record AI use in workpapers when material (review standard: consistent audit trail)

CPA.com’s security and risk guidance provides additional structure around privacy, validity, transparency and accountability that firms can adapt to their policies. Keep in mind that this is NOT a one-and-done effort. Because the landscape is changing so quickly, ownership should be assigned to an individual or a committee to keep the policy up to date as laws and regulations evolve.

Insurance considerations when using AI in accounting work

AI governance does not end with internal policy. Practices should also understand how their insurance coverage responds to AI-related risk.

Verisk has described an ISO general liability multistate filing addressing emerging risks, including generative AI, with a proposed effective date of January 1, 2026. This reflects broader changes in how insurers view AI exposure.

Practices should not assume existing policies will respond as expected. Remember the old adage, “ignorance is never an excuse.”

A practical action step

Ask your insurance broker, in writing, whether your professional liability, cyber, and general liability policies include AI-related exclusions or endorsements. Request clarification on how those provisions apply to your current services. You should also consider adding a clause about the use of AI to your engagement letter. It is the practitioner's responsibility to conduct due diligence and to protect the firm and its clients by seeking appropriate legal advice when drafting these clauses.

Moving from policy to practice

Implementing AI governance may feel like it slows innovation in your practice, but it is vital to ensuring AI use is consistent, reviewable, and defensible.

A practical implementation approach includes:

  • Drafting a short AI governance policy
  • Reviewing it with leadership and risk stakeholders
  • Training staff on acceptable use
  • Updating the policy periodically as laws, tools, and client expectations evolve

The current environment may feel like a Strange New World, but firms that establish clear guardrails now will be better positioned as regulation, professional standards and insurance frameworks continue to develop. The June 2026 Scaling New Heights conference in Orlando will provide practical guidance on building a sound AI policy framework, including a dedicated track focused on the responsible and secure implementation of AI in accounting practices.


This article was written with the assistance of AI.

Topics: Finger on the Pulse, Technology Advisory

