Archive for January, 2012


I hear a lot of talk about ‘on time and on budget’ and it is often held up as the pinnacle of human achievement. And perhaps sometimes it is. But as with so much in life, there is often more to it.

But when people talk about the virtues of these two things, I get a bit niggled by it. After all, what about the work itself? And more importantly, the implications of the work itself? Is that less important? This may come as a shock to some, but there are cases where ‘on time and on budget’ may matter less than doing the right thing. The right thing is what people remember, and is why we are here as IT professionals, surely? This doesn’t of course mean money is no object and it is open season to do whatever we want. It’s simply that different projects and situations sometimes need different treatments.

Remember this?:

Project Management Triangle

As we all know, ‘time’ and ‘budget’ (or cost) are only two points of the project management triangle: ‘scope’ (or, to put it differently, ‘doing the right thing’) being the third. Furthermore, it is an equilateral triangle: ‘time’, ‘budget’ and ‘scope’ matter equally.

These three things are constantly competing with each other, and you can’t have all three at once. That is the dilemma. The outcome may well be different on each piece of work. And this isn’t an indication of institutional failure or acts of wrongdoing. It’s just life. There is no need to philosophise further.

Unfortunately, take away ‘time’ and ‘budget’ and what you are left with is probably the most difficult part to explain and understand. Is that why it conveniently gets left out?

I am sick to death of the stories circulating about London’s high-tech district being in the ‘East End’. It bloody well isn’t. Most of it seems to centre around Old Street and City Road. Well, when I worked round there, we never regarded it as ‘East’. If anything it is ‘North of the City’ or even ‘The City’ (depending on the message the sales people wanted to give out). City Road, where I was, is EC1. Not the East End. The Barbican – a few minutes away – can in no way be described as ‘the East End’. The Bank of England is in EC2 (geographically further east than Old Street), but no one would describe the Bank of England as being in ‘the East End’.

Oh, and Hoxton, a few seconds away once you cross over City Road into Hackney, is N1! N1 is not ‘the East End’ either.

I take it no mapping companies are based round there…

How often have you been involved in a set of functionality, sorry, ‘features’, or maybe even an entire project, that has been threatened with abandonment after being underway for some time? In my case, it doesn’t happen often, thankfully. If anything it’s very rare. But a while back I did get into this situation. I had analysed a solution, got it approved, estimated and funded, and it then got handed on to someone else who questioned the whole thing. And it did look like it might get stopped entirely.

You might perhaps suggest that the need, desire, requirements and idea didn’t have strong enough business value – hence why it was being questioned. And there might be something in that. But business value is rarely an empirical, scientific thing. Often it is in the eye of the beholder. Personally, the stakeholders and I were satisfied that it did have business value, but that aside, I’d like to dwell on one central point: what happens if it did get stopped?

Think about it.

The natural answer is that because IT haven’t built the features, it all just fades away.  IT said ‘no’ and that’s that. 

But the reality is different. The people requesting it will likely just find a different way of doing it.  They might knock something together themselves in Microsoft Access or Excel or whatever else they can lay their hands on. Maybe they can go outside to people that will build it for them, cutting you and your department out of the loop.  Perhaps they will invent some manual system or other means of doing what they want to do.

When IT says ‘no’, it doesn’t mean things don’t happen.  They still go ahead, just in a rubbish way.  

We could just say ‘so what’ and leave it at that. I’ve even encountered people who are fine with this. They view the ‘do nothing’ route with a weird sense of achievement, as if it is some kind of noble outcome. But this sits uncomfortably with me. If not doing something is a measurement of success, why not stop using computers and technology entirely while we’re about it? Let’s all go back to filing cabinets, ring binders and Rolodexes, shall we? Maybe that has ‘business value’? It would certainly save on electricity.

Of course, we have to be mindful of the fact that in IT we are often spending large quantities of other people’s money. I am not suggesting for one moment that we plough on with things that simply aren’t worth the investment. But equally, my experience is that people don’t generally approach the IT operation unless they have a valid reason.  That reason might be rough round the edges and perhaps badly articulated but there is generally a kernel of something valid behind it. 

IT saying ‘no’ and assuming that’s that seems to me to be a very arrogant and short-sighted way to behave. You might of course have explanations backing up your ‘no’ position, but it is important for IT people to face up to the fact that we live in a day and age where people may well just do it anyway.

* * *

After I wrote this, I came across some articles describing a philosophy which is gaining momentum at the moment around people using their own equipment in the workplace. In other words, instead of your company issuing you with a five-year-old laptop with a Celeron processor and Windows XP, you could choose to bring in your own machine – of your choice – and use it. There is much to debate here, most of which is off topic, but the similarity is that organisations are fighting a losing battle trying to prevent it: saying ‘no’ just means people will invent ever more obscure and shady ways of getting round the rules and doing it anyway.

No doesn’t mean it won’t happen. Food for thought certainly.

The principle of small regular releases is a religion amongst a fair chunk of the software industry. So the fact that I am questioning it probably means people will create voodoo dolls of me and that I’ll find a severed head on the doorstep in the morning.

Or then again maybe not. I have worked on projects that have gone like clockwork up until deployment time. At that stage things started to go very wrong for various technical reasons. So regular releases on technical grounds to demonstrate that we can do it and to resolve any problems early, makes perfect sense. I don’t think anyone would disagree with how beneficial this is.

The central reasons for doing iterative development in the first place – in terms of how features and stories are managed, analysed, developed and tested – also broadly* stack up, together with how the work is organised.

It is the idea that iterative development and frequent releases “delivers business value quickly” that I have a problem with. It’s great if it happens, but does it? I have to say that I rarely see it, and there is a big difference between delivering iteratively and people actually using what each iteration delivers.

What I generally observe is a culture where the users or customers think “I might as well just wait until the end when it is all done”.

So why is this?

Well, perhaps your project is such that there isn’t a sensible way in which individual iterations could be usable to people. Is an HR system’s Employee Create screen any use if the Employee Update screen isn’t coming until three iterations’ time? Maybe I’ll just wait until both are available? Maybe having people use the create feature before the update feature is available is even undesirable: it would complicate a later release and make the deployment of the update feature more risky. This doesn’t undermine the principle of iterative development – it is just a characteristic of the environment you happen to be in.

Another aspect goes back to our old friend estimation. If the estimation and budgeting process for the work has forced people into budgeting for the whole thing way in advance (a daft practice we’re stuck with, which inevitably leads to people estimating and making decisions before they sensibly can and should), then the customer will inevitably think “I’ve paid for the whole thing now, so I will come back at the end when it’s all done”. If you’ve ever wondered why customers are reluctant to attend stand-ups, or engage generally, there’s your explanation too.

It could be that the regular release principle isn’t a good fit for your sector. Every organisation, department and project is different, so perhaps the expectation of ‘regular releases you can use’ just isn’t appropriate. To deliver iteratively doesn’t mean people can or will use the results.

_____________
* I wouldn’t want to give the impression that everything in the world of iterative/agile is perfect, because it certainly isn’t and there are many pitfalls. This blog discusses many of them. It’s not a utopia.

Some interesting articles have popped up recently around ‘ICT’ (that’s ‘Information and Communication Technology’, by the way) and how the teaching of the subject in schools is ‘dull and harmful’.

One article that appeared the other day is this one on the BBC web site. Apparently ICT study in England’s schools will be scrapped from September.

I would quite like to know when ‘ICT’ came about in the first place. I don’t hold out much hope for anything that originates from the coalition government, but if it means improvements to how computing, programming and technology are taught it will be a good thing. Somehow I doubt it though. As for ‘ICT’, I’d be happy to see the back of the term entirely.

In my view, agile can only really operate if we have the right level of knowledge in the team. Further, I would argue that if you have a knowledgeable team, they will find a way of delivering anyway – irrespective of whether we use agile, waterfall, or nothing at all. I’ve seen it. It works.

1. What is good?

Because agile contains many ideas and theories which appear at first glance to be innocent enough and uncontroversial, they slip by without people properly considering the implications. One of these is the idea of agile ‘making a good team great’. Beck and others set a lot of store by this statement. And it seems reasonable enough. But what does it actually mean in practice? I would suggest that ‘good’ means ‘knowledgeable’. Knowledgeable in the sense of being up to speed and confident with the business domain, the implications of the requirements, and so on and so forth. In other words, the reason for doing the work in the first place.

2. It’s not about the technical side. Sorry, developers…

“Good”, in my mind, absolutely doesn’t mean ‘technically competent’. We can assume you haven’t enlisted the cleaner to do your development, so it is fair to assume that the people you have are technically competent. That is not in question. What I am talking about has nothing to do with how good people are at polymorphism. It is not about knowledge of dependency injection. Not about model-view-controller or anything else. Believe it or not, your business users and the people paying for your project most likely don’t care about any of this. COBOL 74 is just fine if it gets the job done.

3.  It’s about what you know…

What is important is whether we have a knowledgeable team. If – and only if – we have this, we are in a position to embrace the agile ideas properly and do the things that the phrases imply – such as ‘inspect and adapt’. We can start to be flexible and pragmatic. Knowledge of Kanban, test automation, automated deployment, and other ‘agile’ techniques doesn’t really help us if we don’t understand the fundamentals of the work itself. We can be successful without any of these, in fact. I am sorry if this is upsetting to some people, but it is the truth.

4. What if we have ‘knowledge deficit disorder’?

Having established the principle, the big question is: what happens if you don’t have a knowledgeable team? The agile gurus offer no guidance at all, presumably because they assume the situation never arises. Yet it does arise, and it is a very serious and real problem in many organisations. The problem of people not having the business knowledge required to do their work is a theme that has run through pretty much every job I have ever held. And I am knee-deep in it once again now. This might be coincidence, but somehow I don’t think it is. I actually think it is intrinsic to software development and one of the most serious issues facing IT departments; the only difference between organisations is whether they are prepared to admit it or not. Or, better still, deal with it.

5. Is this all just some kind of whinge?

You might view all of this as some kind of obscure whinge. But if it is a whinge, it’s not an obscure one. It is serious enough to cause projects to be abandoned in their entirety. At best, it slows down delivery and makes our profession look like a bunch of idiots. Sometimes you get all of this rolled into one, as I have witnessed on many occasions.

6. Conclusions…

So what are we to make of it all? Well, a couple of years ago, I instigated an initiative which I have written about on these pages previously. It amounted to a knowledge management strategy. These sorts of initiatives sound more dramatic than they possibly are. For me, it grew out of an aspect of business analysis I have observed repeatedly over the years and always found frustrating, if not concerning: how frequently pieces of work seem to entail re-constructing information and knowledge we should already have, often at much time and expense. When this starts to happen every time, often with the same cases being revisited over and over, you form the opinion that some obscene waste is taking place. Something is seriously wrong.

***
The slightly depressing conclusion to this is that I have met few of my peers who share this opinion. Spending weeks on end re-constructing the past, rather than focussing on the new requirements, seems fine to them. If I were a psychologist I might perhaps find this interesting, but I’m not, and frankly I just find it very annoying.