Someone stopped me at work about a month ago and said that he thought my comments about Cucumber (here) were a bit harsh.  ‘Why?’, I asked.  ‘I thought they were quite mild.’  In essence he thought that what is being produced with the automated tests is fine if you just view them as smoke tests, or maybe regression tests – basically as a way of proving that your system works – and that it’s a bit of a mistake to assume they are providing a repository of knowledge about the system.  I think he is right.

The idea of automated testing is fine if you accept it for what it is, and remind yourself of this regularly: all you are doing is simulating what a person would do. Nothing more.  As such it is important not to get carried away with what you are achieving by using it.  Back in January I spent much of the month writing Cucumber tests for a system I am involved in supporting, and then doing a lot of the Ruby implementation.  This system is a challenge on several fronts: the technology is quite old now, and complex, and release and configuration are a nightmare. But there is also something rather more significant: it isn’t particularly well understood.  Or at least, it wasn’t at the time.  The time I spent writing the tests and the Ruby didn’t improve my knowledge of the system at all; luckily I had had some time a couple of months earlier to learn about it and to put together some other analysis artefacts.  Not to the degree I would have liked, but it was a start.
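For anyone who hasn’t seen this kind of work, its shape is roughly as follows: a plain-English ‘feature’ file describing behaviour, plus Ruby ‘step definitions’ wiring each line to the system under test.  This is a minimal, hypothetical sketch – the feature, the step wording and the login_as/submit_claim helpers are invented for illustration, not taken from the real system:

    # features/submit_claim.feature – a hypothetical happy-path scenario
    Feature: Submitting a claim
      Scenario: A registered user submits a valid claim
        Given I am logged in as a registered user
        When I submit a claim for 100 pounds
        Then the claim status is "Pending"

    # features/step_definitions/claim_steps.rb – the Ruby 'glue' code
    Given('I am logged in as a registered user') do
      @user = login_as(:registered_user)    # assumed test helper
    end

    When('I submit a claim for {int} pounds') do |amount|
      @claim = submit_claim(@user, amount)  # assumed test helper
    end

    Then('the claim status is {string}') do |status|
      expect(@claim.status).to eq(status)   # RSpec-style assertion (rspec-expectations)
    end

Notice that every line presupposes knowledge: what a ‘valid claim’ is, what ‘Pending’ means, who counts as a ‘registered user’.  The test records none of that.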

So I guess what I am interested in these days is how you arrive at the tests in the first place.

If, like me, you are often involved with systems that are ‘legacy’ or maintenance challenges for various reasons, one of the biggest issues is discovering what the system is supposed to do in the first place.  This seems like a rather obvious statement, but in my view it is the root cause of much lost productivity both within and outside IT.  As I have stated previously in this blog, there can be an institutional issue with this: often the users might not know anything either, and expect IT to educate them.  The knowledge problem can exist equally between IT and the users (or ‘business’).


‘Developers Develop’


I used to hate that phrase.  Many years ago I had a boss who used it to describe the organisation of the department.  ‘It’s very simple, Mike,’ he said, ‘developers develop, analysts analyse, project managers project manage.’  I took issue with this.  It struck at the heart of why I went into IT in the first place: to see the fruits of the analysis and design implemented in a system, and to see how it changed the working lives of the people using it.  At the time my boss made this statement I was a developer, but I had come from an analytical background and the work I did tended to be a mix of development and analysis. What motivated me was the end product – a bit like when you plan some building work on your house, or decorate: you want to see the finished article, and development is what you go through to get there.

For a long time I assumed all developers thought like this.  Why wouldn’t they? Years later, of course, it became clear that many developers – probably the majority, if we’re honest – don’t think like this.  They are interested in writing clever code and are motivated by the technology and the art of what they are doing. Often they aren’t particularly interested in the end product or the users or, astonishingly, even the organisation itself.  My way of thinking represented a specific group: a small species within the overall genus of developers.

The point is that if people are struggling with a basic understanding of the system they are working on (and I don’t mean at the code level), no amount of automated testing (or any form of coding) is going to help you.  My January foray into Cucumber tests in Ruby didn’t improve my knowledge of the system, but it was interesting to do some coding, and it certainly brings home the importance of UI design and user experience design: get those right and your tests can be much simpler to write.
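A small illustration of that last point, assuming a web UI driven through Capybara (the steps and field locators are invented): if every screen labels its controls consistently, one generic step can serve dozens of scenarios; if not, every screen accretes its own brittle glue code.

    # With consistent labelling, a single generic step covers any form field:
    When('I fill in {string} with {string}') do |label, value|
      fill_in(label, with: value)   # Capybara finds the field by its label
    end

    # Without it, each screen needs one-off steps like this:
    When('I enter the claimant surname {string}') do |value|
      find(:xpath, "//table[2]//input[3]").set(value)  # brittle, screen-specific
    end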

But even if you can write tests, do you have the knowledge to do so? Tests don’t suddenly appear out of nowhere, so how do you know whether they are complete?  How do you know whether you have been through the right thought process?  This seems to me to be the issue with Test Driven Development.

My concern is that automated acceptance testing, and TDD in general, can actually result in a rather shallow, simplified view of the world, where people are writing tests before they have the knowledge to do so.  There has to be some form of ambient ‘layer over the top’ that provides the necessary background and context people need.  Otherwise the danger is of people just ‘going through the motions’ to tick boxes (and I’ve seen this).  In an earlier article I likened it to ‘writing exam questions for a subject you don’t know particularly well’.  Can you do it?  Yes, of course you can – but by this I don’t mean ‘are the questions factually correct?’; I mean: are you covering the right things? Are you getting the insight and understanding you need?  That’s the tricky bit.  In some ways, I’d rather forgo the automation and address some of these fundamentals first, such as:


  • Proper documentation of, at least, the happy paths through the system, recorded in a user-friendly way accessible to IT and business users in equal measure; Use Cases are fine for this if they are done well (but as we know, there is plenty of opportunity to not do them well).  I have an article here about the issues around this, and the sketch after this list shows how such a happy path can later feed a test.  And before you ask, no: this absolutely isn’t the same as a set of User Stories.
  • The ability to repurpose the documentation into user-facing materials (training courseware, help files and so on) that can assist in building a lasting partnership with the users;
  • A cultural acceptance that this material has to be maintained and improved continually, and that people should want to do so; it is not a ‘one-off’ project activity to be forgotten afterwards.
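To make the first point concrete, here is how a happy path documented in a use case might later be restated, almost line for line, as an executable Cucumber scenario.  The use case and system are invented for illustration:

    Use case: Renew a membership (main success scenario)
      1. Member requests a renewal
      2. System confirms the membership is eligible
      3. Member confirms payment details
      4. System extends the membership by one year

    # The same happy path, restated as a Cucumber scenario
    Scenario: An eligible member renews their membership
      Given an eligible member with valid payment details
      When the member requests a renewal and confirms payment
      Then the membership end date is extended by one year

The documentation comes first and carries the understanding; the scenario is then little more than a mechanical translation of it.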

If this is done wisely and with vision, it can address the ‘common language’ issue which blights many organisations; it also provides a foundation for writing our tests, and means we can hopefully start to talk confidently about what we are doing, even if it is new to us.  Remember that even if something is new to us it isn’t necessarily new to the organisation (the system I was writing tests for in January has been in use since 2000: heaven knows how many people have worked on it since, and how much time has been spent over those years trying to reacquire knowledge about it).  Personally I think the benefits that can flow from this vastly outweigh the benefits you will get from automated tests, and it quite possibly consumes less time.

Of course, no developers (or, as I alluded to earlier, few developers) will ever suggest something like this.  The Cucumber automation part probably appeals to most of them, though, because it’s, well, coding.  So in a way, perhaps my former boss was right after all.  Developers do ‘develop’, and it is up to others (people like me, for example) to ensure they have as much information as possible to be productive at it.

A final word about ‘automation’. It occurred to me the other day that the idea of automation is something buried deep in the human psyche – probably something that goes back to the age of the space race and so on.  If you have ever watched an episode of Thunderbirds, you’ll notice how everything is labelled as ‘automatic’ (and ‘atomic powered’). We’re brought up with the idea that anything ‘automatic’, ‘robotic’ or in some way unattended is always a good thing. Remember also the part in Jurassic Park when John Hammond proudly showcases InGen’s advances in genetic engineering and shows his guests the island’s vast array of automated systems. Later he blames the problems on an over-reliance on automation and a failure to respond to what humans were actually observing.

Interesting.

The ‘Mission to explain’


About two years ago I was on my feet doing a presentation when someone asked me what I thought the role of a systems analyst (or indeed any analyst, for that matter) was.  I was a bit taken aback by the question.  Thinking quickly, I came out with: ‘It is about ensuring that the developers have enough information to do their jobs without having to ask too many questions.’  Not a bad definition, actually.  In some senses it is also about being able to explain things to developers, managers and the users (or the horrendous ‘customers’, as we are now obliged to call them).

The key message of this article is that if, as a team, we can’t explain our systems to ourselves, let alone to our users, how are we supposed to write tests for them?  Obvious, I suppose, but I just don’t get the feeling that the ‘layer on top’, the ‘context’ and the ‘background’ you need before you even start is getting much attention. Many don’t even seem to acknowledge that there is a problem; strange, since I seem to observe it every week, and I have written about it here.  There are also massive opportunities and benefits available: with a bit of investment, and slightly less of a focus on automation as the be-all and end-all, we might actually achieve them.