I believe James and I are in agreement on a great many things.
I don't agree, however, with the pyramid. It oversimplifies the complex problem of how many tests you need before you can feel satisfied with your test coverage.
Because it attaches arbitrary numbers and percentages, the model can be used to extrapolate test counts from one layer to another, e.g. having 700 unit tests is taken to mean we need n acceptance tests. James may not intend it to be taken literally; the problem is people often do.
As I see things, acceptance tests, integration tests and unit tests all serve completely different purposes. You can't put 100 unit tests and 5 integration tests together and say a feature works.
I understand the argument about diminishing returns. A suite of 1000+ acceptance tests is a lot of work to maintain.
I firmly believe that acceptance tests shouldn't just be browser tests. They may also be service-level tests that interrogate functionality inside services, with only the minimum effort spent at the browser level making sure a subset of results are rendered OK.
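As a rough illustration of that split, here's a minimal pytest sketch. It is purely hypothetical: the endpoint, URL, values and fixture names are my own assumptions, not anyone's real system. The behavioural assertions live at the service level; the browser test only confirms that results render.

```python
# Hypothetical sketch: behaviour is interrogated at the service level,
# while the browser check only confirms results are rendered OK.
import requests

BASE_URL = "https://staging.example.com"  # assumed test environment


def test_voucher_discount_applied():
    # Service-level acceptance test: exercise the behaviour via the API.
    response = requests.post(
        f"{BASE_URL}/api/orders",
        json={"items": [{"sku": "ABC-1", "qty": 3}], "voucher": "SAVE10"},
    )
    assert response.status_code == 201
    assert response.json()["total"] == 27.00  # 10% discount applied


def test_orders_page_renders(browser):
    # Browser-level check: minimum effort, just confirm a subset renders.
    browser.get(f"{BASE_URL}/orders")
    assert "Your orders" in browser.page_source
```

Here `browser` is assumed to be a Selenium WebDriver fixture supplied elsewhere; the point is the ratio of effort between the two levels, not the specific tooling.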
The risk (and it needs to be considered) is that the further away from the user experience you go, the more room there is for things to go wrong unseen.
Push the diminishing-returns argument far enough and some will ask: why automate at all? Why not just do manual testing? The thing is, that's where we started, and it's how we got to where we are now. Do you really want to go back there?
Manual testing is not the answer; using automation more sensibly is.
I don't share James' view about limiting acceptance tests to a small number.
I believe the number should be larger, with the suite well architected and broken up.
I argue that acceptance tests should be relevant to the release you are presently working on. Tests for features that aren't in a state of churn should be relegated to overnight runs. Not out of sight, not out of mind, just not a crippling infrastructure challenge the team must bear daily. Move them further down the pipeline wherever possible.
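One lightweight way to express that is sketched below with pytest markers; the marker name and commands are my own assumptions rather than any established convention. Tests for stable features get tagged so they only run in the overnight pipeline.

```python
import pytest


# Feature under churn this release: runs on every commit.
def test_new_checkout_flow_accepts_voucher():
    ...


# Stable feature, not changing this release: relegated to the overnight run.
@pytest.mark.nightly
def test_legacy_reporting_totals_are_correct():
    ...
```

The per-commit CI job would then run `pytest -m "not nightly"`, while the overnight pipeline runs the whole suite (the marker needs registering in pytest.ini to avoid warnings).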
I am firmly against "kitchen sink" tests, where there are only a handful of them but they do everything. It's a false economy, even if they are quick for developers to run. One purpose, one test.
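A hypothetical before-and-after makes the point; the test names below are illustrative only, not taken from any real suite.

```python
# Kitchen sink: one test registers a user, logs in, places an order and
# checks the confirmation email. When it fails, which behaviour broke?
def test_register_login_order_and_email():
    ...


# One purpose, one test: a failure points at exactly one behaviour.
def test_new_user_can_register():
    ...


def test_registered_user_can_log_in():
    ...


def test_logged_in_user_can_place_an_order():
    ...


def test_order_confirmation_email_is_sent():
    ...
```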
Epic-level stories should be where we create our acceptance tests.
Developers, as far as I'm concerned, can break epics up however they choose to make them manageable and implementable.
The feature isn't complete until those acceptance tests are all implemented and passing, and that means architecting your solution (dev code + test code) with that in mind.
What is important is that when the epic is complete, the tests become the documentation of the system. If they're organised by feature, it's very clear. If the feature is removed, so are the tests.
We shouldn't be afraid to delete tests if we're changing a feature. If your tooling doesn't support this, then create better tooling or better ways of solving the problem. Don't burn the house down because you had a bad experience once.
I believe we (dev, test and BA) should be brainstorming together at epic level about all the things we'd want to do to test a given feature.
We should be vigorously debating the value of tests. If they add no value, don't do them. "Hunch" tests should be scheduled for exploratory testing. If the consequence a test guards against is of no concern to the business, then let it be their call to eat that risk. It's a touchy one, as they tend to side with testers and take our fears seriously.
Perhaps as testers we should be more pragmatic about how we use that trust, e.g. "yes, that could happen, but do you really care? Is your brand at risk over that?" I've never heard a fellow tester say that, and I've certainly never said it myself. Perhaps we should treat each added test as a commitment made as a team, not to be incremented frivolously, and with an understanding of its cost.
We should negotiate where (unit, integration, acceptance, end-to-end) it is best to automate.
Where we acknowledge there is value to the left and to the right, but understand that tests on the left are faster and easier to build and maintain.
Where we also value that the information from the right matters to our customers and is rich in detail, while the information from the left matters to our developers and their ability to be efficient.
I argue both are important and neither should automatically 'win' based on the loudness of the yeller. I'm also OK with compromise; I've had to do a lot of it as a consultant, as long as the compromises are discussed, agreed to and reflected on from time to time. ;)