Archive for March, 2009


Because I am an analyst, and because, unlike many people in this industry, I actually was an end-user sitting on the other side of the ‘UAT divide’ at one point early in my career (forcing every IT professional to do a period in non-IT roles, where you actually have to use the systems being built, is something I would strongly recommend: you will certainly learn more from this than from many training courses), the thing that interests me more and more these days is how knowledge of the processes describing how the organisation operates is recorded and managed.  By that I don’t mean how you conduct the analysis itself (though that can be challenge enough): what interests me is almost more important: what you do with it afterwards. It always makes me smile slightly when a person in IT says they are going to visit people in ‘the business’ (a horrendous term – although ‘customer’ isn’t much better) to find out the detail of how something is operating: they talk about it as if they are on a pilgrimage to request enlightenment from the Dalai Lama.

The reality is, of course, that the users may not know anything.  Trust me, I was one of them. In IT, we’re conditioned to view the users as mystical deities with the definitive answers. Few people seem to realise that the users may simply be in a role by accident, making the best of the systems they have been given – horrendous as those might be.

There is an important point behind all of this.  Recently I met some users to go through possible future enhancements to their system.  Although the meeting was productive, at the end of it I asked them out of curiosity how they learnt to use the system in the first place.  The answer was that it was a combination of word of mouth, gleaning nuggets of information from other people in the office and ‘just picking it up’. Basically they are just muddling through – though it’s not their fault, of course.

Not very impressive, is it, given that we expect these people to be the founts of all knowledge; and if the system in question had been more complex than it was, and their enhancements more demanding, it would have raised serious alarm bells with me.

Personally, I’ve never been a great one for ‘just picking it up’.  Picking what up, exactly?  Bad habits and erroneous procedures that the organisation spends years later undoing?  The fact that many IT initiatives, and indeed entire projects, can be blighted by a lack of knowledge is in my view one of the most important issues the enterprise has to deal with.  In my job, I see developers struggling almost every day with issues not to do with code quality or deployment, but with a basic understanding of the fundamentals of the system they are supposed to be working on.  As I said previously here, one of the things that concerns me about automated testing is that it doesn’t contribute anything to one’s understanding and knowledge of the system. If the team are struggling with the basics, no amount of coding or automation is going to help you.  It can, on the contrary, give people a sense of ‘false success’, where they appear to be making greater progress than they are.  But how exactly were the stories and tests arrived at?  Are they right?  Are they complete?  (Oh, and I should add that it takes a long time to do.)

Go on a pilgrimage to ‘the business’ and you often find the same basic thing – people struggling with the understanding and trying to acquire ‘the knowledge’.

Another very interesting issue that sometimes comes to light when talking to business users is that they often expect the IT department to already know the answers to the questions they are asking: they are implying – albeit subtly – that they expect the IT department to already have the knowledge. Perhaps more importantly, they think this is the way it should be.

There are important issues for agile in this.  When people ask me what I think about agile, my view is always the same: I agree with most of it.  The percentage tends to hover around the 70-80 percent mark. The remaining 20 to 30 percent consists of miscellaneous concerns, chief of which is around the area of knowledge management: ensuring that people (in the business and in IT) have the fundamental knowledge and understanding to even contemplate starting on the piece of work in the first place (or even having an informed chat about it).  Also, will the knowledge that has been gained during the piece of work be propagated into the future, so that when a follow-up piece of work takes place in a year’s time, we are not continually asking the same questions?   This kind of thing doesn’t seem very important to many agile proponents.  To me, though, it is critical: there must be importance attached to managing knowledge of the ‘as is’ state of our systems and processes. This benefits all the projects and initiatives going on.    I hear and read little in agile about this, and worse still, sometimes such things even seem to be discouraged.  Crazy.

Going back to the user (or ‘customer’ as we are now forced to refer to them):  Many users are under the impression that this is already done. They are obviously surprised (or perhaps annoyed) that we are continually asking them questions about things we ‘already know’. After all, we built the system in the first place, didn’t we?  They view IT (rightly) as guardians of this kind of knowledge and as custodians of the systems themselves. And I agree: I think that is exactly how an IT operation should be, and this is why I have a bit of an issue with the ‘customer’ definition – I guess it’s fine if you know the specific context, but it never seems to convey the partnership or collaborative nature of development, and as such it sits a little uncomfortably with me.

So what are my conclusions?  Well, quite succinctly, I think ensuring ‘the knowledge’ is available within the team (and outside, in the wider IT and business community), and that this is maintained properly irrespective of what projects happen to be going on at that point, is a critical objective for the enterprise. It’s not just an IT issue, far from it, but for IT it is at least as important as acceptance test driven development, auto-deployment, and even issues around code quality and refactoring. In a lot of instances it is more important. How can it not be?    But it also requires the same level of time and investment. I am not sure it is getting the attention it deserves, though.  A shame, or perhaps an irony, since it is so obvious.

About 10 years ago I was working on a project that was something of a nightmare challenge and fitted many of the criteria of what I have later described as ‘the existing system syndrome’.  That is to say, it was a replacement, pretty much like-for-like actually, of an existing application. These projects are almost always, in my experience, the most difficult and problematic.  Many people are surprised by this statement. Replacing an existing system should be easy, surely?  You already have the knowledge?  You have the existing system as a basis?   The business area using it is probably mature and knowledgeable?

Well, a lot of this adds to the problem, bizarrely.  For one thing, the expectation on the part of the customer will be that it is ‘just an upgrade’ or a ‘conversion’, and therefore they won’t want to fund it properly.  For another, the simple fact that you are taking something that people already know (and sometimes love) and replacing it in its entirety means you have on your hands a potentially complex reverse engineering exercise (or at least we did), but it also taps into a lot of emotional baggage that end-users have built up.  ‘Oh, the old system was so much better,’ some histrionic soul emailed me (because a specific keyboard shortcut had changed, if I remember correctly). It’s like trying to recreate the recipe for someone’s favourite lasagne.  You’ll never get away with it lightly.

There were a bunch of issues we faced with this project.  It was poorly estimated, for one thing (not by me, I hasten to add, but probably as a result of the mindset I have just mentioned). Yes, we did have some of the source code for the system we were replacing, but this was incomplete in parts and in any event was presented to us as being somewhat ‘unreliable’ in terms of the version.  We had a compiled executable and installer (which may or may not have been built from that source code), and a set of documentation from about 6 years earlier describing the calculations the software had to perform.  It did, however, have to behave exactly as before.

This scenario throws up a whole load of discussions, but needless to say the project did run into problems – unsurprisingly – and I set about thinking of ways in which we could restructure the project to help deal with them.   It seemed obvious we would run into very serious issues if we didn’t, and there needed to be a way of ‘blowing the whistle’ if the right things didn’t start happening pretty quickly.

Over a combination of lunch and a glass of Leffe blanc after work with a colleague, the solution became clear. To cut a long story short, we would dissect all the existing tasks and requirements and structure them into logical groups based on how well they fitted together, their generic size and so on.  We would then agree priorities and allocate resources to them.  Crucially, the project would be structured into weekly development cycles, where we would do development basically from Tuesday to Thursday.  Friday would be set aside for testing, verification and deployment; Monday would be set aside for responding to customer feedback in the morning and planning in the afternoon, and then a view would be taken on how well we thought things were going overall.  If there were serious concerns about progress we would ‘blow the whistle’, escalate the concerns to more senior levels and work out a way forward at the earliest opportunity.  This might involve temporarily suspending the project.

We called this approach…  well, nothing, really.  It just seemed a more sensible and logical way of structuring and getting on with the work. 

At this point in the article I was going to run a ‘what happened next’ competition to see if you could guess what happened when I presented the idea, but I’ll save you the trouble.

It got turned down. I think the reason I was given was that ‘it would be seen as an admission of failure’ to go down this kind of route.  So we continued as before, and the system did eventually get delivered: it didn’t turn out to be the horrendous aberration it could have been – on the contrary, in fact.   Towards the end I even got paid for some of the extra overtime, and managed to get home at a reasonable hour.

I promise you I’m not complaining.  Projects like this are what they call ‘character building’ and give you an interesting insight into many things you won’t get from any formal training.  The fact that I came up with the new (to our company, anyway) approach to project structure in about 2 hours is also something I find quite interesting when I look back, and am rather proud of.   Nowadays, of course, ‘agile’ is an industry, with dozens of books, conferences, training courses, communities and so on.  It makes you think. To me, though, most of the ideas in agile (though not all of them, I can assure you) are just common sense, and there isn’t that much to disagree with.   So perhaps ‘doing the work’ is now the way forward.