AI Gives You Volume. Accuracy Is Your Job.

Everyone has an opinion about AI right now.

Some good, some bad. A lot of assumptions and aspirational thinking. But are people asking the right questions about AI?

“Will AI replace my job?”

“Use AI or else!”

“What happens if AI fails?”

I’ve decided that since AI is here to stay, I need to learn about it. And like any good Quality Engineer, I need to know: What good can it do? Where does it fail? Where are its boundaries and limitations?

There are plenty of AI tools to choose from. While there’s some overlap, they all seem to have different purposes, strengths, and weaknesses. I’ve been experimenting with Copilot and Claude. Right now, it’s mostly about exploration and curiosity, not replacing any skill.

Here’s what I’ve seen…

AI Gives You Volume

Feed a prompt into any AI tool and it returns output. Fast! Confident! Comprehensive, at least on the surface. Ask it anything and it produces…something! It has to. I have yet to see AI reply, “I’ve got no response to provide.” Or, “I’m unable to complete that request.”

A single sentence prompt often generates 2-4 pages of output in seconds!

Prompt: “What a beautiful, rainy day!”

Response: A confirming intro. Then six reasons to enjoy a rainy day, why rain is important, or a mirror held up to your own thoughts about rain — heavily formatted with headers, bullets, and spacing. An outro filled with follow-up prompt suggestions.

If you copy the response into Google Docs or MS Word, you’ll see 2-4 pages of content! The examples it gives are plausible, but maybe not accurate for your situation, and maybe outright nonsense.

AI Is Fast!

Anyone who has used an AI tool has seen its response speed. Blazingly fast! That’s the biggest promise of AI tools. 

“You can do anything you want at the speed of thought.”

Graphic design? Here’s an image.

Want to write a book? You give AI the story idea and it outputs a fully formed novel.

Coding a new app? Done. Here’s a launch plan.

Does it matter if you have any of the skills AI is replacing? For many, no. They care about what is created more than how or why. Incorrect output is easily thrown away — you can just start over. Speed of completion is what they care about.

People who don’t know the rules, the idiosyncrasies, the nuances will believe AI is perfect! They won’t know that they need to use Git, or that AI-generated code can push secret keys into a public repository. Is it scalable? Is it missing boundary validations? Do you know the edge cases? Is it performant? Secure? Vibe-coded AI output is rife with bugs!
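To make this concrete, here’s a hypothetical sketch of what vibe-coded output often looks like: confident, runnable on the happy path, and quietly wrong. Every name here (the key, the function, the rules) is invented for illustration.

```python
# Hypothetical AI-generated helper, as it might arrive from a one-line prompt.
# It "works" -- and a Quality Engineer would flag at least two problems on sight.
API_KEY = "sk-live-abc123"  # secret hardcoded in source: leaks the moment this is pushed

def apply_discount(price, percent):
    # No boundary validation: percent=150 or price=-5 "succeed" silently.
    return price - price * (percent / 100)

# The happy path looks fine...
print(apply_discount(100, 10))   # 90.0
# ...but nonsense inputs pass without complaint -- the bugs that ship to production.
print(apply_discount(100, 150))  # -50.0: a discount that pays the customer
```

Nothing about this code announces that it’s broken. Only someone who knows the rules, the idiosyncrasies, the nuances will catch it before production does.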

This is a slippery slope! And it’s why Quality Engineering is more important today than it’s ever been!

Accuracy Is Something Else Entirely

Accuracy isn’t about the amount of output created.

Accuracy is understanding why the output is needed, why it’s being created, what risk is being mitigated, what purpose the creation serves, and what happens when it fails.

Accuracy comes from knowing what actually matters to the business — not what the requirements say, but what breaks in production. It comes from understanding how a confused first-time user behaves, not how a happy-path user behaves. It comes from seeing how a bug in one component cascades through a system.

AI doesn’t know any of this about your product. It can’t. It has no context for how your system fails in the real world.

What AI produces without appropriate context is structure-shaped volume. It has the appearance of accuracy. It’s missing the understanding that makes it real.

It’s like asking a child, “What’s 2+2?”

Response: “Carrot!”

Confident, but wrong.

The Two Mindsets of Using AI

There are two fundamentally different perspectives at play.

AND vs OR

Craft vs Output

The OR perspective: Results OR process. Output OR understanding. Speed OR accuracy. Any output at all costs.

“Why learn a skill if AI can do it?” “Why understand a process if AI can handle it?” “Why develop expertise if the tool produces the output?”

For OR-minded people, AI looks like magic. You type a sentence, you get pages, images, code. The output ships to production. Done.

They’re not wrong that AI produces output. They’re wrong about what output means.

The AND perspective: Results AND craft. Output AND understanding. Speed AND accuracy. The correct output.

“It’s expertise AND efficiency.” “It’s judgment AND experience to reach better decisions.” “It’s the tool AND the understanding of what the tool produces.”

For AND-minded people, AI can be genuinely useful — because they have the foundation to evaluate what it produces. They can see the gaps. They can catch the hallucinations. They know what actually matters and why!

Craft, judgment, expertise — these aren’t obstacles to using AI. They’re prerequisites.

What AI Cannot Produce & How It Applies to Testing

I want to be specific here because it matters. AI cannot produce everything!

AI can produce text, code, images, test cases, summaries, documentation, and a hundred other artifacts. That list is growing.

But, it cannot create or replace humanistic skills, like:

Discernment — the ability to know which thing matters most in a specific context, for a specific person, in a specific moment.

Judgment — the accumulated pattern recognition that comes from watching systems fail in real ways over years of real work.

Intuition — the sense that something is wrong before you can articulate why. The feeling that something feels off even if you can’t see it.

Empathy — the understanding of what a confused, frustrated, or first-time user actually does with your product. Not what they’re supposed to do. What they actually do.

Psychology — the knowledge that a button label can be technically correct and psychologically wrong simultaneously. That friction isn’t always a bug in the code. Sometimes it’s a bug in the design of the interaction.

Original insight — the ability to see multiple divergent ideas and cross-pollinate them across disciplines. Originality isn’t in AI’s programming; sophisticated copying is.

These aren’t soft skills. They’re human skills! They’re the substance of quality engineering. They’re what separates testing that finds bugs from quality engineering that prevents the conditions that produce bugs. They’re what makes the forward-thinking originators who invent new techniques and new tools.

AI amplifies what you already understand. It cannot substitute for knowledge you don’t have.

Working in the AI Era

Own your craft. Deeply.

You can be skeptical. You can be bearish. You can question. These are good traits for Quality Engineers.

But to resist? That’ll keep you stuck. Using AI is part of the craft now. Use your knowledge and experience of your craft as the foundation that makes AI useful.

Understand testing at a structural level. Not just execution. Strategy. Risk modeling. User psychology. System architecture. The business outcomes. The economics of quality. How defects compound. 
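The boundary validations and edge cases mentioned above can be sketched in a few lines. This is a hypothetical example (the function and its business rules are invented), showing the kind of judgment a one-sentence prompt never supplies:

```python
# A minimal sketch of boundaries encoded as code. The limits below come from
# knowing the business -- they are assumptions here, not rules AI could infer.
def apply_discount(price, percent):
    if price < 0:
        raise ValueError("price cannot be negative")
    if not 0 <= percent <= 100:
        raise ValueError("percent must be between 0 and 100")
    return price - price * (percent / 100)

# Edge cases a happy-path prompt never mentions:
assert apply_discount(100, 0) == 100    # zero discount: lower boundary
assert apply_discount(100, 100) == 0    # full discount: upper boundary
try:
    apply_discount(100, 150)            # out-of-range must fail loudly
except ValueError:
    pass
```

The validation is trivial to write. Knowing that it’s needed, and where the boundaries sit, is the craft.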

Let’s not use AI to add to the $2.41 trillion that poor-quality software already costs the US annually.

The testers who survive and thrive in an AI world aren’t the ones who resist. They’re the AND thinkers — the ones who bring the accuracy that gives AI meaning.

The goal shouldn’t be to use AI as a replacement. The goal should be for AI to help you in your craft, to assist you in learning new skills, to find ways to be efficient. Test automation didn’t replace QA Engineers. AI shouldn’t replace us either.

AI gives you a volume of output. Output requires understanding. Understanding requires craft. Craft requires humans who give a damn!

That’s not an argument against AI.

It’s an argument for taking your craft seriously.
