Archive for January, 2009


Someone pointed me in the direction of this recent presentation, which gives some very interesting views and insights into how we deal with software defects. It also gives some surprising metrics on the effectiveness of different types of testing, and it’s quite amusing too.

http://www.infoq.com/presentations/francl-testing-overrated


I have recently been asked to do some work with the Cucumber framework. To cut a long story short, this allows you to write automated tests against a User Story (Feature) and its associated scenarios. To get it working, you need to set up Ruby, Watir and the Cucumber framework. I don’t (yet) want to get too bogged down with the set-up details – mostly because it detracts from the real issues I want to talk about. A description of the set-up is here: http://github.com/aslakhellesoy/cucumber/wikis and here: http://wtr.rubyforge.org/rdoc/ although there are many other resources starting to appear (I’ll add links in the future).
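For the record, and purely as a rough sketch – versions move quickly and the links above remain the proper reference – the basic installation boils down to little more than this, assuming Ruby and RubyGems are already in place:

  gem install cucumber    # the Cucumber framework and its command-line runner
  gem install watir       # Watir, which drives the browser from Ruby
  cucumber --help         # quick check that the command-line interface is available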

Once you have the thing set up and working, and have written your Features and Scenarios, the cucumber command-line interface allows you to execute the feature and will give you a listing of which Scenarios have yet to be implemented, which ones pass and which ones fail. The behind-the-scenes implementation is written in Ruby, and simply allows you to map your scenarios onto the actual implemented system, or whatever is under construction at that point in time. Those scenarios that are implemented will execute automatically in a browser, and will simulate and reproduce whatever actions you have defined. This starts with the obvious things – entering values into text boxes and clicking buttons – through to interacting with Javascript and so on. There are add-ins to work with Firefox, and even to allow you to interact with legacy/terminal applications that might have a telnet-esque client. All good stuff.
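Purely by way of illustration – the feature, page, field names and URL below are invented for this post rather than taken from a real project – a Feature and Scenario, and the Ruby step definitions that map it onto a running application via Watir, might look something like this:

  Feature: Customer search
    Scenario: Find a customer by surname
      Given I am on the customer search page
      When I search for the surname "Smith"
      Then I should see a list of matching customers

  # features/step_definitions/customer_search_steps.rb (hypothetical)
  require 'watir'

  Given /^I am on the customer search page$/ do
    @browser = Watir::IE.new                              # Watir drives Internet Explorer by default
    @browser.goto('http://intranet.example.com/customers/search')
  end

  When /^I search for the surname "(.*)"$/ do |surname|
    @browser.text_field(:name, 'surname').set(surname)    # fill in the search box
    @browser.button(:name, 'search').click                # and submit the form
  end

  Then /^I should see a list of matching customers$/ do
    raise 'No results shown' unless @browser.text.include?('Results')
  end

Running the cucumber command against the feature file then reports each step as passing, failing or pending, exactly as described above.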

At this point I could start presenting fuller examples and actual code, and I probably still will in the future. For now, though, I wanted to concentrate on some general thoughts I have about this approach. For one thing, my involvement is still in its early stages and I don’t have a huge amount to present yet. For another, once you get over the initial euphoria, some nagging concerns emerge about it. Some recent conversations with friends and colleagues would indicate I’m not alone in this.

I was recently chatting to a couple of people from other companies. One I used to work with years ago when we were both developers; he now works as a development manager. The other I have never worked with, but he works for a company that – believe it or not – only does testing. I thought that if this wasn’t a good group to get some views from, I didn’t know what was.

Let me be clear about one thing before I get started: I am a systems analyst and not a tester. This might be why I put a slightly different perspective on things. Also, as I said, it is quite early days – not just in my use of Cucumber but for Cucumber itself. So one ought to give things time, perhaps.

After having a slightly more drawn-out conversation on this than I had bargained for, we ended up with the following themes:

  • “How do you know your User Stories are correct?”
  • “How do you know your Scenarios are correct?”
  • “Do you have any notion of ‘completeness’? Are you certain there is no hidden functionality, or features that the users are unaware of and didn’t raise? Is there not a danger of new features coming to light later, simply because you didn’t unearth them?”
  • “Why not get an end-user to do it? They might identify new things?”
  • “Does the test-driven approach give you any additional insight into how your application works?”
  • “How long are people going to spend writing the Ruby implementation?  Is that going to result in a better understanding of the system?”
  • “Is there a danger of people getting immersed in the implementation of the tests themselves and not thinking about their actual meaning, or about how much knowledge they have of what they are testing?”

These points perhaps go a little beyond testing, but nonetheless I was interested that they came up, as they have been going through my mind for many months.

To me, there are two groups of issues here. The first is the correctness and completeness (or otherwise) of the User Stories and Scenarios themselves. The second is, to put it bluntly, whether people get so engrossed in the implementation and execution of the tests that they lose track of why they are doing them, what the application is about, and whether they even have sufficient knowledge to begin with.

The analogy I used in the chat – which I’ve used before, actually – is that it is rather like writing exam questions for a subject you don’t know very well. Can you do this, and produce exam questions that are ‘good exam questions’? The answer, worryingly, is yes you can (and I’m pretty sure I’ve read about people doing exactly that). They are ‘good questions’, but you haven’t covered everything, and not covering the whole subject runs the risk of creating an air of false success. As Alexander Pope put it: ‘A little learning is a dangerous thing’.

Another, more serious, danger is that you create a culture where people are worried about looking into things too closely, or uncovering new information, because they are concerned about undoing or breaking tests and implementation code that have already been written. It is a little like avoiding pulling the refrigerator out because you’re worried about what you’ll find and what work you will create for yourself.

I don’t (in this article at least) want to get too engrossed in a debate about User Stories versus Use Cases. But Use Cases do have one advantage if they are done well, in that they make an attempt at addressing the ‘completeness’ issue, and can tease out some of the less obvious functionality the user didn’t ask for. I sometimes come up against a slightly scary notion along the lines of ‘if the users didn’t ask for it, we shouldn’t do it’. This scares the hell out of me, to be honest: on most of the applications I’ve worked on, the users didn’t request some pretty obvious and fundamental things, so it’s not a principle I personally share.

Nonetheless it does crystallise a further potential problem. If User Stories are, as Mike Cohn states, simply ‘placeholders for a conversation’, then going from them into scenarios and then into executable tests seems to me, on many projects, to be a very simplistic view; in the case of some projects I’ve worked on it borders on fantasy. The problem with Use Cases is of course that there is plenty of opportunity to do them badly – not least because Use Case descriptions aren’t defined in the UML. The solution is to do them well! But the fact is that none of the artefacts we work with (and User Stories are no exception) are handed down by god: they can all be wrong or incomplete – we just don’t know it at that point.

If, like me, you are suspicious of taking a ‘user driven’ view (on the basis that very often the users don’t actually know as much as we’re led to believe: they are just trying to make the best of the systems they were given when they joined), it simply isn’t good enough to take User Stories and drive out tests from them without a layer of systems analysis on top. IT development isn’t a car dealership: the customer isn’t ‘always right’ – sorry and all that. As I said in a previous post here, if you think you can get all the detail you need from a set of User Stories, good luck. I personally think you need more: not to replace them, but to augment them and provide a clear statement (to those that need to know – and that might not be the developers [unless they are interested]) of how you got to where you are. No amount of automation can surpass that.

But it’s early days, of course…