Archive for August, 2009


I came across an article here about the new Olinda digital radio. This is currently at the prototype stage and has been developed by BBC Audio and Music in conjunction with a third-party company. In essence the basic unit is a DAB radio, but it includes innovative features like modularity and social networking in a physical device. On the side is a studded, magnetic connector for plugging in expansion modules. This is an open, standardised hardware API – with defined connections and defined protocols for the data. It’s a bit like the expansion port on an iPod. In the picture below, the middle unit has six lights which show when a close friend is listening to the radio, using wifi and Radio Pop, the BBC’s website for sharing ‘now playing’ information. Each light is a button: you can tune in to listen along with them, discovering new stations via your social network.

One additional module is a tear-off player module for kids. It records all their favourite children’s radio programming and then unplugs to become a standalone MP3 player.

Olinda Digital Radio - How cool is that?

Olinda Kids Tear-off

I wouldn’t mind one of these when it becomes available, I must say.

Although I’ve had this post drafted for quite a while, I have recently been involved in an actual situation which brings home quite graphically what I am about to talk about.

Recently I was asked to write regression tests for one of our applications.  The app itself is a fairly new in-house development and went ‘live’ about two and a half years ago, so it doesn’t fall into the ‘old-legacy-nightmare’ category.  I say ‘live’ because it has never been used quite as extensively as it should be. Given that it supports – or should do – quite an important part of the company, it seemed odd it wasn’t being used more. Up until I got involved, I’d heard various people talk about this and wondered why there were usage problems, considering a lot of time had been spent building it. I did hear some anecdotal evidence that it was rather buggy and problematic, but there wasn’t anything concrete to back this up.  As is the way with these things, the people involved with building it in the first place have also long since departed.  I’ll add that the application and much of the business logic around it isn’t intuitive either – my organisation tends to thrive on labyrinthine rules and terminology sometimes.

It didn’t take me long to discover that the application is rather buggy and problematic. There are a number of areas where the business rules are not fully understood (in some cases by either IT or the users), and links to other systems are difficult to untangle in terms of data.  There are areas of the application that have ‘never been used – I’m not sure why’ according to the users. Some parts won’t ever be used and should be dropped entirely, and in other areas the business itself has altered and some housekeeping is needed to catch up. There are some terminology and usability issues (one quite key page reports status information as yellow text on a grey background).  Generally there are quite a number of bugs and idiosyncrasies, including some weird ones that only seem to occur on UAT, with no reports of the same thing happening on Live. Despite it all, though, there are relatively few runtime errors and it seems fairly stable. It also has the basis of a very good application: it certainly isn’t all doom and gloom by any means.

Oh, I should add that part of it will be redesigned, but the redesign can’t go live until my regression tests are done. Nice.

I’d like to focus on a few important points here. The first is that the application isn’t fully understood by IT or the users, and to some extent the users are looking to IT to educate them.  The second is that it’s fairly new – so it ought to be a shining example of a modern, forward-looking application, but instead seems to have been poorly implemented. The third is that the situation illustrates why the definition of ‘done’ or ‘delivering value’ isn’t whether it is running on the live hardware: it is whether people are actually using it properly.  To put something live and then walk away is to do a massive disservice. Applications (and most importantly, the people using them) need a certain amount of nurturing and support, especially early on.  This pays dividends later, believe me.   Oh, and the fourth issue is: how the heck do you write regression tests for an application along the lines of what I have just described? …

In a future post I’ll explain a bit more about how I’ve gone about it, but in essence what I did initially was to create a Use Case in the Alistair Cockburn style for the entire application.  This was laid out in my own format with four vertical swimlanes (there’s a rough sketch of the idea just after the list):

  • the name of the page and how you navigate to it, including an extract from an Enterprise Architect model I created of the overall page flow;
  • an ID assigned to the page or key function on the page: I’m still dithering over this – I like the idea of it as it makes support more straightforward and the users can simply refer to ‘page 1.2.1’ as opposed to ‘that page you get to by clicking edit on the manage users page that has a red section at the top’;
  • the use case itself – which merely describes the ‘happy’ path, but written in a higher-level style than any subsequent tests would need to be. This should give just enough information to get you up and running using the application and, most importantly, in a language users feel happy with;
  • screen grabs of the page itself, plus notes about business rules, pre-existing bugs or any other issues, and (hopefully) statements of where value is created or prevented.
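
To make the idea a bit more concrete, here’s a rough sketch in Python of how each swimlane entry might be captured as structured data. The class, field names and the example entry are all my own invention for illustration – none of it comes from the actual catalogue:

```python
# A rough sketch only: the structure and example entry are invented for
# illustration, not taken from the real application's use-case catalogue.
from dataclasses import dataclass, field
from typing import List


@dataclass
class PageEntry:
    page_id: str           # hierarchical ID, e.g. "1.2.1"
    name: str              # page name and how you navigate to it
    happy_path: List[str]  # high-level 'happy path' steps, in user language
    notes: List[str] = field(default_factory=list)  # business rules, known bugs, value statements


catalogue = [
    PageEntry(
        page_id="1.2.1",
        name="Edit User (via Manage Users > Edit)",
        happy_path=[
            "Open Manage Users from the main menu",
            "Click Edit next to the user",
            "Amend the details and save",
        ],
        notes=["Pre-existing bug: status shown as yellow text on grey"],
    ),
]

# Support and users can now say 'page 1.2.1' rather than describing the route.
for entry in catalogue:
    print(entry.page_id, "-", entry.name)
```

The point isn’t the code, of course – a spreadsheet or wiki page does the same job. It’s that every page gets one ID that the use case, the screen grabs and any later regression tests all share.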

The point about doing this is twofold:

Firstly, there is now a common description of what the application is supposed to do, and I have explained to the users that this exists as a resource for them to refer to just as much as for IT, and that we should aim to refer to it and build on it continuously.

The second point is critical:  I don’t see how you can expect people to write tests if they haven’t been through this process. If you dive straight into the tests, you run the risk of immersing yourself in too much low-level detail and not being able to see the wood for the trees.

This brings me on to the main points about UTBTDD, which simply means “understanding the bloody thing – driven development”.  The principle is that you can’t expect people to be productive – either within or outside IT – unless there is some common understanding, and unless development projects are recognised as collaborations.  This understanding needs to be developed and improved on continuously, irrespective of what projects you happen to have on the go at the time:

#1: The end-user requires support from IT, not the other way around:

I have written previously in these pages about the way I get slightly concerned when the end-user is regarded as a font-of-all-knowledge superhuman with all the answers, whose role in life is to be at IT’s beck and call.  It’s important to recognise that (a) they may just be doing a job they have inherited and picked up, (b) they might not know anything anyway, and (c) they actually have a job to do themselves.  It is IT’s role to support them in their job, not to act as an answering service to make up for IT’s deficiencies. If a year ago we had enough knowledge to build a system and it got signed off and went live, why don’t we have enough today to answer questions on it? It is now in its most important phase: actually being used. That is where we need to focus.  In many of the situations I find myself in, I am perceptive enough to realise that what the users are often thinking (though not openly saying) is “you tell me!  you’re the IT department…”  or “you built the application – you should know – we’ve been through all this before…” More to the point, I agree with them.  We should be in a position where the ‘current’ or ‘as is’ state is understood by IT well enough for us to respond swiftly, and preferably to actively educate and support the users. There will always be exceptions of course, but our focus should be on the proposed improvements and new requirements, looking ahead to the future.

This doesn’t in any sense mean we should neglect the current situation – far from it.  Whilst we have to look ahead, the ‘here and now’ is obviously vital.

#2: Custodianship of applications is critical:

Don’t just build the thing and walk away.  You will need to support it, and the people using it deserve the same level of engagement from us as we expected from them while it was being built.

#3: Writing tests and understanding the system are totally different things:

If you asked an examiner to write geography examination questions, you wouldn’t expect him to go away and immediately start learning geography.  You would expect him to already know the subject and to focus on the job at hand: writing the questions. More importantly, you would expect him to write the right questions, and that is where the effort lies.  It is the same for us: we need to understand the fundamentals before we think about writing tests, so that we write the right things, not just anything.  How else do you know you are testing for the right things? How do you know you will be getting insight from them? Especially since what you are testing might have been in place for many years and isn’t obvious.

#4: Automation is not the most important thing:

The important thing isn’t automation; it is ensuring you have the knowledge to do what you need to do. Automation doesn’t improve your knowledge. It is also worth reminding ourselves that automation is just simulating what a human would do – and even then, only a certain type of human. Get a real human in (maybe even your parents) and they will often point out things your tests never thought of. Personally, I have always had something of a distrust of automation because I think it can make people complacent. I also think people run the risk of focussing on automation over the point behind what they are doing in the first place. I’ve certainly seen evidence of this.  On the other hand, we are all human beings, and we’re brought up to some extent with the idea that anything ‘automatic’, ‘robotic’ or otherwise not needing human intervention is always a good thing.  Fine, but just don’t overstate what you’re achieving by using it.

#5: Change is not the problem:

People get concerned and anxious about ‘change’ within an organisation, but often I don’t think that is the problem.  Understanding what you have today is the problem.  Once you have that sorted out, understood and explainable, the ‘change’ is actually the easy bit.

#6: Blessed are the poor (support team) for they shall inherit the application:

The agilists who believe in ‘travelling light’ usually go rather silent when I ask them what the support teams will be working from once the application is live.  Presumably those teams are expected to support the application based on knowledge received telepathically.  As I mentioned above, the key thing isn’t getting the application live and walking away, but making sure people are properly supported and responded to once it is live.