Thursday, June 2, 2011

On "The Testing Pyramid"...

I believe James and I are in agreement on a great many things.

I don't agree, however, with the pyramid. It oversimplifies the complex problem of how many tests you need before you can feel satisfied with your test coverage.
By attaching arbitrary numbers and percentages, the model invites people to extrapolate test counts in one layer from those in another- eg. having 700 unit tests could be used to conclude we need n acceptance tests. James may not intend it to be taken literally- the problem is people often do.

As I see things, acceptance tests, integration tests and unit tests all serve completely different purposes. You can't put 100 unit tests and 5 integration tests together and say a feature works.
I understand the argument about diminishing returns. Having a suite of 1000+ acceptance tests is a lot of work to maintain.

I firmly believe that acceptance tests shouldn't just be browser tests. They can also be service level tests, interrogating functionality inside services- with only the minimum effort spent at browser level making sure a subset of results renders ok.
The risk (and it needs to be considered) is that the further you move from the user experience, the greater the void of things that may go wrong unseen.
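To make that concrete, here's a rough sketch of the shape I mean- Python with requests and Selenium purely for illustration; the base URL, endpoints and CSS selector are hypothetical, not from any real project:

    import requests
    from selenium import webdriver
    from selenium.webdriver.common.by import By

    BASE = "https://shop.example.test"  # hypothetical service under test

    # Service-level acceptance test: exercise the search feature through the
    # service's API, where assertions are fast, cheap and precise.
    def test_search_returns_matching_products():
        response = requests.get(BASE + "/products/search", params={"q": "kettle"})
        assert response.status_code == 200
        names = [item["name"] for item in response.json()["results"]]
        assert any("kettle" in name.lower() for name in names)

    # Thin browser-level check: only confirm that a subset of those results
    # actually renders- the behaviour itself is already covered above.
    def test_search_results_render():
        driver = webdriver.Firefox()
        try:
            driver.get(BASE + "/search?q=kettle")
            assert driver.find_elements(By.CSS_SELECTOR, ".search-result")
        finally:
            driver.quit()

The bulk of the assertions live at the service level; the browser test's only job is to catch rendering going wrong.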

I'm sure the diminishing returns argument, taken far enough, reaches a point where some might ask- why automate at all? Why not just do manual testing? The thing is, that's where we started- and that's how we got to where we are now. Do you really want to go back there?
Manual testing is not the answer; using automation more sensibly is.
I don't share James' view that acceptance tests should be limited to a small number.
I believe the number can be larger, provided the suite is well architected and broken up.
I argue that acceptance tests should be relevant to the release you are working on presently. Tests for features that aren't in a state of churn should be relegated to overnight runs. Not out of sight, not out of mind, just not a crippling infrastructure challenge the team must bear daily. Move them further down the pipeline wherever possible.
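As a sketch of how that can look in practice- pytest markers here purely as an example, any framework or runner with test tagging would do, and the marker names are mine:

    import pytest

    # Tag tests by how much churn their feature is in. Tests for the release
    # in progress run on every commit; stable-feature tests are marked to
    # run in the overnight pipeline only.

    @pytest.mark.current_release
    def test_new_checkout_flow_applies_discount():
        ...

    @pytest.mark.overnight
    def test_legacy_reporting_totals_are_stable():
        ...

The per-commit build then runs "pytest -m current_release", while the nightly build runs the whole suite with no marker filter- so nothing is out of sight, it just isn't in the team's way every hour. (You'd also register the markers in pytest.ini so pytest doesn't warn about unknown marks.)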

I am firmly against "kitchen sink" tests- where there are a handful of them but they do everything. It's a false economy- even if they are quick for developers to run. One purpose, one test.
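For contrast, a sketch of the difference- sign_up, check_out and the rest are placeholders for whatever your own test helpers look like:

    # sign_up, log_in, search, check_out, order_history and existing_user
    # are hypothetical helpers, stand-ins for your app's real test support.

    # The "kitchen sink" shape: one test driving sign-up, login, search,
    # checkout and order history in a single run. Quick to execute, but when
    # it fails you don't know which of those five behaviours broke, and a
    # change to any one of them churns this single test.
    def test_whole_shop():
        user = sign_up("dean@example.test")
        log_in(user)
        assert search("kettle")
        order = check_out(user, "kettle")
        assert order in order_history(user)

    # One purpose, one test: each failure now names exactly one behaviour.
    def test_sign_up_creates_an_account():
        assert sign_up("dean@example.test") is not None

    def test_checkout_places_an_order():
        order = check_out(existing_user(), "kettle")
        assert order.status == "placed"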

Epic level stories should be where we create our acceptance tests.
Developers, as far as I'm concerned, can break epics up however they choose to make them manageable and implementable.
The feature isn't complete until they're all implemented and passing- and that means architecting your solution (dev code + test code) with that in mind.
What is important is that when the epic is complete the tests become the documentation of the system. If they're organised by feature, it's very clear. If the feature is removed, so are the tests.
We shouldn't be afraid to delete tests if we're changing a feature. If your tooling doesn't support this, then create better tooling or better ways of solving the problem. Don't burn the house down because you had a bad experience once.

I believe we (dev, test and BA) should brainstorm together at epic level about all the things we'd want to do to test a given feature.
We should be vigorously debating the value of tests. If they add no value, don't write them. "Hunch" tests should be scheduled for exploratory testing instead. If the consequence a test guards against is of no concern to the business- then let it be their call to eat that risk. It's a touchy one, as they tend to side with testers and take our fears seriously.

Perhaps as testers we should be more pragmatic about how we use that trust- eg. "yes, that could happen, but do you really care? Is your brand at risk over that?" I've never heard a fellow tester say that- and I've certainly never said it myself. Perhaps we should treat each added test as a commitment made as a team, not to be incremented frivolously, and with an understanding of its cost.
We should negotiate where (unit, integration, acceptance, end to end) each check is best automated: acknowledging there is value to the left and to the right of that spectrum, but understanding that tests on the left are faster and easier to build and maintain, while the information from the right is rich and matters to our customer, and the information from the left matters to our developers and their ability to be efficient.
I argue both are important and neither should automatically 'win' based on the loudness of the yeller. I'm ok with compromise too- I've had to do a lot of it as a consultant- as long as the compromises are discussed, agreed to, and reflected on from time to time. ;)

3 comments:

Bumble Bee said...

I totally agree with your point about having as many acceptance tests as possible. The mistake we all make when it comes to testing is not thinking the test architecture through. The test architecture must evolve alongside the product architecture, and a substantial amount of effort must be spent on it. Nevertheless, the hardest part is convincing the stakeholders about that investment ;)

Amer Murad said...

This makes more sense than sticking with a certain ratio. We no longer judge an application by its number of lines of code, or by line or method coverage. Testing should be driven by what the user wants, and should also prevent bad usage of the application. Good post.

Dean Cornish said...

Here is James's reply from today- for some reason Blogspot ate it :(

Hey Dean,

Thanks for this post. As often happens, I think we agree on most of the content but are sometimes talking at cross purposes.

I am certainly not advocating exact test ratios and make this explicit in the post: "In the test pyramid visualisation, I've included percentages of the number of tests, but this is just to give an idea of rough test mix breakdown."

Each project is different based on the team, the technology and the quality requirements. In most cases, I am suggesting you start with a small number of GUI tests if you can get sufficient coverage at lower layers (eg, as you generally can in a Rails app). Keep validating your approach - is exploratory testing finding too many bugs? If so, then revisit your approach and testing strategy. Don't start at the other extreme - writing lots of high maintenance, high effort GUI tests from the start. In rare cases, it might even make sense to use completely manual tests (eg, one person quickly developing a small and simple iPhone app).

Agree with you on epic level tests (no more story level GUI tests please!), deleting tests that don't make sense any more, and debating the value of a test. Especially, I agree that there needs to be more trust and communication between testers and developers on a team re coverage so as to avoid covering the same thing over and over at different test layers written by different people.

James