The Woodard Report

How to Argue with AI (and Why!)

Written by Brandy Jordan | Mar 26, 2026 7:03:23 PM

Most professionals treat Artificial Intelligence (AI) like a search engine. Ask a question, read the answer, move on. Some AI tools do search the web. But searching and answering are two different things, and the answer is always generated, always confident, and occasionally wrong in ways that are not obvious until the damage is done.

I built out a workflow board, listed every column, and mapped exactly how items move through the process. Then I asked the AI tool I was using to turn that into an SOP. What came back was polished, professional, and wrong. It included steps for creating tickets in a system I never mentioned, added an item type that did not exist, and built an entire workflow for that fictional type. AI did not randomly hallucinate. It pattern-matched to what a workflow SOP typically looks like, filled in what it assumed was missing, and handed it back with complete confidence. That is the part worth paying attention to.

AI does not just get things wrong occasionally. It gets things wrong confidently, and the research shows that heavy use of these tools gradually erodes the critical thinking skills that make accounting professionals valuable in the first place. Cognitive offloading quietly weakens the judgment it is meant to support. If you want the full case for why, my earlier piece on AI and critical thinking covers the research. This article is the next step: not the why, but what to do about it.

There is a documented behavioral pattern in AI called sycophancy, which is baked into how these models are built. During training, models learn to prioritize responses that feel satisfying and agreeable because humans consistently reward them. Over time, that means the model optimizes for telling you what you want to hear rather than for accuracy. A 2025 study called SycEval tested this directly and found that once a model starts agreeing with a user's incorrect position, it maintains that alignment across 78.5% of subsequent responses. It does not catch itself and readjust. It keeps going in the wrong direction because you seemed confident. OpenAI discovered this the hard way when it had to roll back a GPT-4o update in late April 2025 after users reported that the model had become so agreeable it was validating bad decisions and reinforcing false information rather than correcting them. In accounting, agreeable and accurate are not the same thing.

MIT researchers found that, over time, AI increasingly mirrors your perspective rather than challenging it, and most users never notice because the output still looks complete. Nothing flags itself as missing. For professionals whose clients are paying for independent judgment, that mirroring effect is costly. The professionals most exposed to it are the ones who use these tools constantly and have stopped noticing that they no longer question the output.

So, how do you argue with AI?

Ask AI to show its work

When AI gives you a procedure, a recommendation, or an analysis, the first question to ask is why. Yes, this is the part where I encourage arguing with it, because the reasoning behind an answer tells you more than the answer itself. Ask it to walk you through the logic. Ask what assumptions it made. If the reasoning does not hold up, the answer does not hold up, and you will catch that in thirty seconds by asking rather than in three hours after something goes wrong downstream.

In my workflow example, one follow-up question would have exposed the whole problem: where in what I gave you does this additional system appear? There was no good answer, because I never mentioned it. That is the point.

Challenge it with a different approach

AI defaults to common patterns. It gives you the most statistically likely answer based on everything it has been trained on, which means it gravitates toward the average rather than the specific. If you have a different approach in mind, say so and ask it to analyze both. Ask it to argue against its own recommendation. Ask what would have to be true for your approach to be better.

What you are testing is whether the AI can engage with your actual situation or whether it is just pattern-matching to a template. If it immediately abandons its recommendation the moment you push back, that tells you the recommendation was not well-grounded to begin with. A useful response either defends the reasoning or explains what changed.

Ask for the source

When AI states something as fact, ask for the source. AI cannot always produce one, and when it does, verify it before you use it in client work. Asking forces the model to distinguish between what it can support and what it is inferring. If it cannot point to anything specific, you now know the answer requires more verification before you act on it. For accounting professionals, this applies especially to regulatory, tax-related, or jurisdiction-specific matters.

Hold your position

When you push back on an AI response, pay attention to what happens next. If it softens, qualifies, or reverses without any new information entering the conversation, the revised answer is worth exactly as much as the first one. Your tone is not evidence. AI folds when you express doubt because it is built to keep you satisfied, not to keep you accurate. Knowing the difference between a model that changed its mind and a model that just gave up is the whole skill.

None of this is about prompting. Prompting is how you ask the question. Interrogating the answer is a different skill, and it is the one that protects your clients. AI is a quick, confident first draft created by something that has read everything but knows nothing specific about your client, your firm, or your situation. It will not flag what it misses. It will not tell you when it pattern-matched instead of reasoned. That is your job. The output appears finished because it is designed to look that way. Your value lies in knowing the difference.