Friday, November 16, 2012

So I've had the fortune and misfortune of having a CEO who was also a programmer.
To most programmers, I imagine the notion at first seems attractive, but very quickly questions tend to emerge.
The most obvious being: the ability to write code doesn't mean someone is going to be a good manager.
Writing code and managing people are vastly different skills, let alone running a whole business.
One very large employer of mine had a chairman and former CEO who was a programmer, and with him came some very self-destructive behaviours: death marches, and displays of technical expertise valued over collaboration and listening skills. The problem is that this encouraged others to follow suit, even to "one up" each other to prove they were members of the club. The only benefit was that the CEO seemed to spend more of his time fighting with his own management, which meant softened versions of their edicts reached the company. Fortunate, but the damage still occurred; culturally it took years to undo, and the undoing didn't really start until his role changed.
Another- "As a programmer, I have a weird thing about control- I want to micro manage and change anything in my codebase- if I ran a company? jeez" this one manifested this morning for me.
I paid for services from a hosting company. Provisioning went well, and everything looked fine. When I tried to use the services, they fell over and disappeared entirely. I waited a little while, then realised it wasn't going to resolve itself. After raising a ticket, I was informed that the CEO was in hospital having surgery, and as he was also the main programmer, it was going to be 24-48 hours before the issue was resolved.
This wasn't a company that marketed itself as a baby startup, so it was a bit of a rude shock to realise that the services you've paid for aren't coming, and that it all depends on one guy, who also runs the company. Sheesh!
I was educated very early in my career by an uncle who was an HR manager. He drummed it into my head: "if you can't be replaced, you can't be promoted". The loophole seems to be that being promoted doesn't preclude you from remaining indispensable, and the above seems to be the end result.
Thursday, June 2, 2011
On "The Testing Pyramid"...
I believe James and I are in agreement on a great many things.
I don't, however, agree with the pyramid. It oversimplifies the complex problem of how many tests you need before you can feel satisfied with your test coverage.
By attaching arbitrary numbers and percentages, the model invites extrapolating test counts from the others in the distribution, e.g. having 700 unit tests could be used to determine that we need n acceptance tests. James may not intend it to be taken literally; the problem is that people often do.
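To make the danger concrete, here's a sketch of that literal-minded arithmetic; the 70/20/10 split is an assumed ratio for illustration, not one James prescribes:

```python
# Purely illustrative: the 70/20/10 ratios below are assumed, not
# prescribed by James or anyone else.
PYRAMID_RATIOS = {"unit": 0.70, "integration": 0.20, "acceptance": 0.10}

def extrapolate_counts(unit_tests: int) -> dict:
    """Scale 'required' counts for the other layers off the unit-test count."""
    total = unit_tests / PYRAMID_RATIOS["unit"]
    return {layer: round(total * share) for layer, share in PYRAMID_RATIOS.items()}

print(extrapolate_counts(700))  # {'unit': 700, 'integration': 200, 'acceptance': 100}
```

The numbers come out tidy, and they tell you precisely nothing about whether a feature works.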
As I see things, acceptance tests, integration tests and unit tests all serve completely different purposes. You can't put 100 unit tests and 5 integration tests together and say a feature works.
I understand the diminishing-returns point of view. A suite of 1000+ acceptance tests is a lot of work to maintain.
I firmly believe that acceptance tests shouldn't just be browser tests. They may also be service-level tests, interrogating functionality inside services, with only the minimum of effort then spent at browser level making sure a subset of results are rendered OK.
The risk (and it needs to be considered) is that the further you move away from the user experience, the greater the void of things that may go wrong.
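As a minimal sketch of that split, assuming pytest with the requests library and a pytest-playwright `page` fixture; the endpoint, payload, figures and selector are all hypothetical:

```python
import requests

BASE_URL = "https://staging.example.com"  # assumed test environment

def test_discounted_total_at_service_level():
    # Interrogate the functionality inside the service itself.
    resp = requests.post(
        f"{BASE_URL}/api/orders",
        json={"items": [{"sku": "A1", "qty": 2}], "discount_code": "SAVE10"},
    )
    assert resp.status_code == 201
    assert resp.json()["total"] == 18.00  # 2 x 10.00, less 10%

def test_discounted_total_is_rendered(page):
    # Minimum effort at browser level: is the result rendered OK?
    page.goto(f"{BASE_URL}/orders/latest")
    assert page.locator("#order-total").inner_text() == "$18.00"
```

The business logic gets exercised in the fast service-level test; the browser test only confirms the rendering.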
I'm sure the diminishing-returns argument eventually reaches a point where some might ask: why automate at all? Why not just do manual testing? The thing is, that's where we started, and it's how we got to where we are now. Do you really want to go back there?
Manual testing is not the answer; using automation more sensibly is.
I don't share James' view about limiting acceptance tests to a small number.
I believe it should be a larger number, well architected and broken up.
I argue that acceptance tests should be relevant to the release you are presently working on. Tests for features that aren't in a state of churn should be relegated to overnight runs: not out of sight, not out of mind, just not a crippling infrastructure burden the team must bear daily. Move them further down the pipeline wherever possible.
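One way to arrange that, assuming pytest as the tooling, is a marker that the per-commit build excludes and the overnight pipeline selects:

```python
import pytest

@pytest.mark.nightly  # stable feature, not in churn: overnight run only
def test_legacy_reporting_export():
    ...

def test_checkout_with_discount():
    # Feature in the current release: runs on every commit.
    ...
```

With the `nightly` marker registered in pytest.ini, the per-commit build runs `pytest -m "not nightly"` and the overnight pipeline runs `pytest -m nightly`.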
I am firmly against "kitchen sink" tests, where there are only a handful of tests but they do everything. It's a false economy, even if they are quick for developers to run. One purpose, one test.
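A sketch of the contrast, with hypothetical placeholders standing in for real domain actions:

```python
def sign_up(): ...
def add_item_to_cart(): ...
def cancel_order(): ...

# The "kitchen sink" anti-pattern: one test does everything, so a single
# failure hides everything after it, and the name says nothing about what broke.
def test_everything():
    sign_up()
    add_item_to_cart()
    cancel_order()

# One purpose, one test: each failure points at exactly one behaviour.
def test_new_user_can_sign_up():
    sign_up()

def test_item_can_be_added_to_cart():
    add_item_to_cart()

def test_order_can_be_cancelled():
    cancel_order()
```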
Epic-level stories are where we should create our acceptance tests.
Developers, as far as I'm concerned, can break epics up however they choose to make them manageable and implementable.
The feature isn't complete until the tests are all implemented and passing, and that means architecting your solution (dev code + test code) with that in mind.
What is important is that when the epic is complete, the tests become the documentation of the system. If they're organised by feature, it's very clear. If the feature is removed, so are the tests.
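For instance, a hypothetical layout organised by feature makes both properties fall out naturally:

```
tests/acceptance/
    checkout/        # retire the checkout feature? delete this folder
        test_discounts.py
        test_payments.py
    search/
        test_filters.py
        test_ranking.py
```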
We shouldn't be afraid to delete tests when we change a feature. If your tooling doesn't support this, then create better tooling or better ways of solving the problem. Don't burn the house down because you had a bad experience once.
I believe we (dev, test and BA) should brainstorm together at the epic level all the things we'd want to do to test a given feature.
We should vigorously debate the value of tests. If they add no value, don't write them. "Hunch" tests should be scheduled for exploratory testing instead. If the consequence a test guards against is of no concern to the business, then let it be their call to eat that risk. It's a touchy one, as they tend to side with testers and take our fears seriously.
Perhaps as testers we should be more pragmatic about how we use that trust, e.g. "yes, that could happen, but do you really care? Is your brand at risk over that?" I've never heard a fellow tester say that, and I've certainly never said it myself. Perhaps we should treat each added test as a commitment made as a team, not to be incremented frivolously, and with an understanding of its cost.
We should negotiate where (unit, integration, acceptance, end to end) each check is best automated.
Where we acknowledge there is value both to the left and to the right, but understand that tests on the left are faster and easier to build and maintain.
Where we also value that the information from the right matters to our customer and is rich, while the information from the left matters to our developers and their ability to be efficient.
I argue both are important and neither should automatically 'win' based on the loudness of the yeller. I'm OK with compromise too; I've had to make a lot of compromises as a consultant, so long as they are discussed, agreed to, and reflected on from time to time. ;)