The Ideal Test Case

Software testers fixate on the difference between the best and the real. The most obsessed testers focus on the difference between the best and the ideal. Regression test cases are the tests that make sure the application behaves according to specification and hasn’t changed since the last time it was checked. I’ve spent my life chasing the ideal regression test case: one that can be run by anyone, on any app, and whose result people actually care about.

The Ideal Manual Test Case

The ideal manual test case is one that is written so anyone can easily read it, that people care about when it fails, and that mixes specificity and ambiguity in a way that makes it robust across insignificant changes in the app.

The ideal manual test case is so clearly written that it doesn’t need the author to interpret how to set up, execute, and validate the test. Almost always, the original author of the test eventually moves on. The best manual tests can be read and executed by anyone on the test team, their test manager, the developer, or even the product manager in a pinch. Even great tests are sometimes written just to add more ‘coverage’, but that isn’t ideal.

If no one cares when a well-written test case fails, it isn’t ideal. Sometimes tests are written for features that aren’t important to the business, users, or future direction.

Ideal manual test cases withstand the test of time. Ideal doesn’t mean fully specified or overly specific; ideal tests focus on the purpose of the test and nothing else. If a setup, execution, or validation step isn’t relevant, it shouldn’t be in there. This perfect level of ambiguity ensures the test doesn’t need to be updated when the product changes in irrelevant or insignificant ways, and it allows other testers to add some variety and interpretation to the test case, delivering additional variation in the same spirit as the author.

The ideal manual test case is clear, important, and balances specificity and ambiguity, but it still isn’t the ideal test case. Ideal manual test cases are expensive and slow to write, execute, and maintain; because they require human time and labor, they will always be expensive and slow. Even ideal manual regression test cases take time away from the most valuable asset testers bring to bear: their ability to explore the app with creativity.

Manual test cases are great for teams to get started with formal testing and add some rigor to the regression testing process, but the ideal manual test case isn’t the ideal test case. The ideal test case should also be automated.

The Ideal Automated Test Case

The ideal automated test case has all the attributes of an ideal manual test case, with the added benefit that it is executed by a machine. Done right, machine automation is far faster, less expensive, and more consistent than human testing. If the ideal manual test case were automated, it could be run on every new build and free up human testers’ time for doing what they do best: exploratory, opinionated, and abstract quality checking that keeps the needs of the business, the customer, and engineering in mind all at the same time.

The ideal automated test case is stable, maintainable, and efficient. Test automation has to be stable or it is quickly ignored. Unstable, flaky, inconsistent automation often consumes more human time investigating false failures and nursing the automation back to health than simply executing the tests manually would. Unstable automated tests are the norm, everywhere. Ideal tests can be run an infinite number of times without failure, which requires a sophisticated dance between the test code and the application under test: the automation must allow for variances in the application’s timing. Ideal automation isn’t flaky.
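To make the timing point concrete, here is a minimal sketch of tolerating timing variance with explicit waits rather than fixed sleeps. It assumes Python and Selenium (the article names neither), and the URL and element IDs are hypothetical.

```python
# Minimal sketch: tolerate timing variance with Selenium explicit waits.
# The URL and element IDs below are hypothetical.
from selenium import webdriver
from selenium.webdriver.common.by import By
from selenium.webdriver.support.ui import WebDriverWait
from selenium.webdriver.support import expected_conditions as EC

driver = webdriver.Chrome()
driver.get("https://example.com/login")

# Poll for up to 10 seconds until the app is ready, instead of sleeping a
# fixed amount: slow builds still pass, fast builds don't waste time.
wait = WebDriverWait(driver, timeout=10)
wait.until(EC.element_to_be_clickable((By.ID, "login-button"))).click()
wait.until(EC.visibility_of_element_located((By.ID, "welcome-banner")))

driver.quit()
```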

Ideal test automation requires zero maintenance. Ideal automation doesn’t break when a new build of the application moves a button to a slightly different location, changes its color, or changes the implementation under the hood; it keeps executing the test case, dealing with ambiguity just like a human would. Ideal automation also deals with changes in the flow or protocol of the application. The ideal test automation would notice these changes, but keep marching on to make sure the new design and implementation still meet the specification, without breaking.
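One illustrative way (a sketch, not any particular product’s approach) to survive cosmetic changes is to look elements up by user-facing intent through a chain of fallback locators. The intent names and locator conventions below are invented.

```python
# Toy sketch: resolve an element by intent, trying locators from most
# human-meaningful to most implementation-specific, so cosmetic changes
# (moved button, new id) are less likely to break the test.
from selenium.webdriver.common.by import By
from selenium.common.exceptions import NoSuchElementException

def find_by_intent(driver, intent):
    # Hypothetical fallback chain for an intent like "Log In".
    candidates = [
        (By.XPATH, f"//button[normalize-space()='{intent}']"),  # visible text
        (By.CSS_SELECTOR, f"[aria-label='{intent}']"),          # accessibility label
        (By.ID, intent.lower().replace(" ", "-")),              # conventional id
    ]
    for how, what in candidates:
        try:
            return driver.find_element(how, what)
        except NoSuchElementException:
            continue
    raise NoSuchElementException(f"No element matching intent: {intent}")

# Usage: find_by_intent(driver, "Log In").click()
```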

The goal of automation is to increase efficiency: efficiency in terms of cost, time, and complexity. Ideal automation frameworks, infrastructure, and tests require minimal work, compute, and configuration to set up and execute. The pass/fail results of ideal automation are delivered in an easy-to-use, relevant, and timely way to the people who need to know, so the human interpretation of results is efficient as well.

Even if test automation succeeds on all fronts of stability, maintainability, and efficiency, it still isn’t the ideal test case. The ideal test case has a few more attributes.

The Ideal Test Case

The ideal test case has all the attributes of the ideal manual and automated test cases, plus some superpowers: it can run on any platform and any app, it is framework-independent, anyone can create it quickly, and most of these tests already exist somewhere so testers don’t have to recreate them. Yes, ideal tests exist in a giant global test case brain that has already tested tens of thousands of similar apps.

The ideal test case runs on any app and any platform. It doesn’t need to be re-written for each platform (web, mobile, etc.). The needs of the user, the business, and often even the application code are platform-independent, so the ideal test is platform-independent too. If you think about it, odds are the test case you are writing right now was already written for another app, if only you knew about it and could re-use it. Every app has login tests, and tests to search for generic products, add products to the shopping cart, and so on. The ideal test case would be discoverable and re-usable on every app, so humans could stop wasting time rebuilding test cases from scratch for every new app, platform, and test.

The ideal test case is framework-independent. Non-ideal test cases are one-off sentences or lists in a spreadsheet, or trapped in the schema of a particular test case management system or XML/JSON document. Not only does this make tests less re-usable, it creates an external dependency that costs time and/or money. If the framework or APIs change, or you want to switch test case management systems, migration can be expensive, inefficient, and painful. The ideal test case would be defined in a portable, independent format.
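As a rough illustration, such a portable test might be expressed as plain data that any engine can interpret; the step vocabulary and field names here are hypothetical, not a real standard.

```python
# Hypothetical portable test case: plain data, with no dependency on a
# particular framework or test case management system.
LOGIN_TEST = {
    "name": "User can log in",
    "applies_to": ["web", "ios", "android"],
    "steps": [
        {"action": "navigate", "target": "login screen"},
        {"action": "enter",    "target": "username field", "value": "test_user"},
        {"action": "enter",    "target": "password field", "value": "secret"},
        {"action": "tap",      "target": "login button"},
        {"action": "verify",   "target": "welcome message", "visible": True},
    ],
}
```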

The ideal test case requires no knowledge of programming languages, no complex syntax, and no complex tooling. A test is ideally created by simply pointing and clicking through a visualization of each important step and validation, or it is created automatically from recordings of real-world user interaction. The actual setup and execution of these test flows isn’t hardcoded; the execution engine figures out dynamically how best to execute the test case. Ideally, no code, complex language, or tool is required to define, execute, and report test results.
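A toy dispatcher, again only a sketch and nothing like a real ML-driven engine, hints at how abstract steps such as the ones above could be resolved at run time. The driver methods (open_screen, type_into, tap, is_visible) are hypothetical abstractions over whatever platform driver the engine uses.

```python
# Toy sketch of an execution engine that decides at run time how to perform
# each abstract step of a portable test case like LOGIN_TEST above.
def run_test(test_case, driver):
    for step in test_case["steps"]:
        action, target = step["action"], step["target"]
        if action == "navigate":
            driver.open_screen(target)               # engine resolves the screen
        elif action == "enter":
            driver.type_into(target, step["value"])  # engine resolves the field
        elif action == "tap":
            driver.tap(target)                       # engine resolves the control
        elif action == "verify":
            assert driver.is_visible(target) == step["visible"]
        else:
            raise ValueError(f"Unknown action: {action}")
```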

The ideal test case is the one you don’t have to write or execute; it is just there, and it knows which apps it applies to. Ideal test cases are written independently of the platform and application so they can be shared and re-used. For an e-commerce application, the ideal test suite would be somewhere in the cloud, just waiting for your application, already knowing how to test your login, search, and cart functionality. The ideal test case is one that testers and developers don’t have to think about: you simply point your application at the reusable test cases, and they explore your application, determine which tests are applicable, and automatically execute and report results. The ideal test is one that is not actually written by the team at all.

An almost magical attribute of the ideal test case is that its execution can be benchmarked against other apps. If the same tests are executed against every other e-commerce application, the team will know whether a 90% pass rate is good or bad. The team will know whether 2.5 seconds for a Facebook login flow is normal and expected. The team will know which tests and features are normally expected to pass for similar apps. The ideal test case isn’t just pass/fail; it is pass/fail with a global context for understanding how bad a failure, or how good a pass, actually is.
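As a small worked example of that global context, a team could compare its own login time against a benchmark distribution gathered from other apps running the same test. Every number below is invented purely for illustration.

```python
# Hypothetical benchmark check: is 2.5 s for a login flow normal compared to
# other apps running the same test? All numbers are illustrative.
from statistics import mean, pstdev

benchmark_login_seconds = [1.8, 2.1, 2.4, 2.6, 3.0, 3.4]  # other apps (made up)
our_login_seconds = 2.5

mu, sigma = mean(benchmark_login_seconds), pstdev(benchmark_login_seconds)
z_score = (our_login_seconds - mu) / sigma

# A z-score near zero means the measured time is typical for this class of app.
print(f"login z-score vs. benchmark: {z_score:.2f}")
```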

The Ideal Test Team

Much like the ideal test case itself, the ideal test team builds toward these ideal test cases, draws on experience across many past projects, and aspires to test every app on the planet. These days, I have the privilege to work with just such a team.

Jason Arbon, CEO test.ai
