Words matter: Why the term “manual testing” isn’t accurate

Photo: a manual gear shift in a car (Dennis Eusebio / Unsplash)

In software testing and QA, we often hear the phrases “manual testing” or “manual tester.” Why? Are they accurate terms? Is there a better one?

Dualism in testing

In all my experience and education in QA, and in my own research, I can’t find who coined the term “manual” for testing, either as a task (e.g., manual testing) or as a role (e.g., Manual QA Analyst/Engineer).

The most likely explanation is that the term became popular as test automation became standardized over the last 20 years! One is defined by the other; they coexist because of each other. It’s like how you can’t define “good,” or know it exists, unless you also have an opposite, like “bad.” You know what is “good/right” only in relation to what is “bad/wrong,” and vice versa. You know what “manual testing” is only because of “automation testing,” and vice versa. In philosophy, this is known as dualism.

So “manual” and “automation” are inherently tied together. Before test automation, the industry performed only hands-on testing of software, and did so for decades! Outside of software, in other fields of science and engineering, testing has been done hands-on for centuries!

The war of words

Words shape perception. Using “manual” in testing has created a war of words: a division within the role and a division between roles. It’s a term that creates a false sense of what testing is and what a tester does. It can be used as an insult or pejorative, to prop up other people and roles, to gatekeep, to denigrate the skills testing requires, and to undervalue the people who perform it. Over time, a stereotype has been created, and it is used throughout the industry.

Division

What does division look like? When you pit a non-automated tester against one that solely does test automation, you are saying they are not equal, that one is above the other. This creates segregation among the team. It shows that one is more valued, the other is less valued. One is paid more, one is paid less. One is given more opportunity to advance, to learn, the other is not. It gives permission for one group to say they are better than the other. People in these roles can feel that judgment and they know their work is seen as less valued than another. This does not create a positive, healthy work environment – it creates an environment of “other-ism.” If you’ve ever been labeled as an “other,” then you know how detrimental it feels.

This same division can exist between developers, DevOps engineers, or designers and any form of tester. Are any of these roles better than QA/testing? No! Yet testers always seem to be seen as the lowest, least important part of the engineering process. That is a perception, not reality.

Why are we dividing testing, both the role and the skill, in this way when we don’t do it to other roles? Have you ever seen a “manual developer” versus an “automated developer?” Or “manual DevOps” versus “automated DevOps?” A “manual UX designer?” “Manual product managers?” “Manual engineering managers?” Do UX designers, product managers, or engineering managers do any automation? These roles don’t automate anything, so the word “manual” wouldn’t make sense, and we leave it off. This goes back to dualism.

What about a software engineer? If they are using AI coding assistants, doesn’t that mean they are now “automation software engineers?” If the answer is “yes,” are we going to use this as a new title? Are we going to lessen their value when they don’t use a coding assistant for their work? If the answer is “no,” you’re likely thinking that an AI coding assistant is just another skill or tool a software engineer uses. This is the correct way to think about this and it needs to be applied to testers!

Another notable example is driving a car. You need the same skills whether you drive a manual transmission or an automatic; the only difference is how the gears are changed. Everything else about the experience of driving is the same!

Complementary Skills

Unlike in dualism, where two elements are seen as opposing, manual testing and automation testing are not opposing forces! They are complementary skills and tools an engineer uses to complete their work. Complementary is an “and” style of thinking, not an “or” style. We need more “and” style thinking!

When a tester automates test cases, they are using a complementary skill to achieve their goals. They are showing that no single approach will solve every problem or find every bug. Automation is highly useful for repetitive tasks, which is a great use for computers!
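To make that concrete, here is a minimal sketch in Python with pytest (the `validate_username` rule and its inputs are invented for illustration, not taken from any real project): the machine tirelessly repeats the same check over many inputs, while the human tester stays free to ask the questions no parameter list anticipates.

```python
# A minimal, hypothetical sketch: the validation rule and the test data
# are invented for illustration only.
import re

import pytest


def validate_username(name: str) -> bool:
    """Accept 3-20 characters: letters, digits, or underscores."""
    return bool(re.fullmatch(r"\w{3,20}", name))


# Automation shines at grinding through the same check over many inputs.
@pytest.mark.parametrize(
    "candidate, expected",
    [
        ("alice", True),       # typical happy path
        ("ab", False),         # too short
        ("a" * 21, False),     # too long
        ("bob_2024", True),    # digits and underscores allowed
        ("bob smith", False),  # space rejected
        ("", False),           # empty input
    ],
)
def test_validate_username(candidate, expected):
    assert validate_username(candidate) == expected
```

The syntax isn’t the point; the point is that repetition is what computers do best, and the tester’s experiential skills pick up where the parametrized list ends.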

As software gains more features, it becomes more complex. Automation frees a tester to use their skills of intuition, creativity, logic, empathy, observation, critical thinking, and more to analyze, create test plans and scenarios, evaluate risk, and ask questions. You are showing that you understand the context in which to use your skill set. These are skills that require humanness, skills that can’t be handed off to a computer.

A better term: Experiential Testing

If we should stop using the term “manual” in either tasks or roles, what should we replace it with?

For the tasks of testing, other people have used terms like exploratory, hands-on, interactive, transformative, ad-hoc, or V&V (short for Verification & Validation, an older term that is less used today). Or they might refer to a type of testing, like functional, black/white-box, acceptance, regression, etc. There are too many skills (abilities) to list here, and as you can see, they often get confounded with techniques (methods)! None of these terms captures the underlying technique well enough to replace “manual testing” with a single, irreducible term.

For the role of testing, it can be as simple as leaving “manual” off the title: QA Analyst/Engineer, SDET, Quality Engineer, Software Tester, Quality Advocate, or adding a level like QA Analyst III or Lead/Staff/Principal QA Engineer. These promote a real respect for the role!

The term I tend to use is “experiential testing.”  

Experiential Testing: a testing technique based on a person’s knowledge, intuition, observations, perceptions, and perspectives that informs requirements analysis, test case design, and usability of software.

Let’s break this down. 

We all come to the role or task of testing with previous knowledge from living life, using other products and services, formal and informal education, and interactions with other roles and people. This knowledge is invaluable to the role of testing!

We also rely on intuition, gut feeling, emotions, cognition, and comprehension when using software. Our past experience with other software informs how we expect a specific feature (e.g., a login page) to work. We know when something “feels right” or “feels off.” We know when software makes us happy (playing games), provides satisfaction (accomplishing a task, buying from an ecommerce store), or makes us angry (slowness, error messages, paying bills). When it comes to this type of experience during testing, we need to be able to translate the intangible into something understandable.

Using software is a psychological and sensory process. During Experiential Testing, we use our human psychology and senses to observe the interactions! We rely on empathy and compassion to understand how many different types of users interact with the software (accessibility, localization, internationalization). The fields of User Experience (UX) and Human-Computer Interaction (HCI) are rooted in psychology, and there are many theories and techniques from those fields we can use in QA!

Experiential Testing isn’t just about the final software product. It can be used during:

  • Story/requirements writing and analysis (Are there missing requirements? Do you understand their purpose?)
  • Acceptance testing
  • Writing test cases (Have you captured all product states, edge cases, and corner cases?)
  • Usability testing (put yourself into the shoes of another person)
  • Root cause analysis when you find bugs and apply their fixes
  • Reviewing design mockups to guide your test case design. Mockups may not be interactive, but you can play out the scenarios in your mind and see how the puzzle fits together!

Certainly, any phase of software development can benefit from Experiential Testing.

We often only think about the final user’s experience when testing software, but Experiential Testing can also play a part with people/roles who are coding, designing, supporting, and troubleshooting the software.

How does this differ from Exploratory Testing? Typically, Exploratory Testing is unscripted, unguided, and unstructured, time-boxed, with the goal of understanding the product quickly. We can say that Exploratory Testing is a subset of Experiential Testing.

Takeaway

The words we use matter. Words have specific meanings, they have power, and they evoke emotion. As Spider-Man teaches us, “with great power (of words) comes great responsibility (to use them).” Let’s choose our words wisely and ensure we approach our understanding with empathy, compassion, equality, and equity. That is how we encourage respect in our workplaces and industry.
