Category: Analysis


I was talking to some IT friends the other day about requirements gathering. This subject has long been a source of problems (sorry – challenges) for our industry, for a whole range of reasons we all know well – vagueness, change, estimation, conflict and so on. But the discussion moved on to high-level ‘strategic’ or ‘transformational’ projects where you might not have any requirements at all. You might get some high-level goal like “we want to build an online shop” or “we want a replacement system to support more suppliers”, but these aren’t really ‘requirements’ as we have known them. And the more I talk to people, the more there seems to be a sense that the concept of the traditional ‘requirement’ is diminishing, or often doesn’t exist at all. If there is a requirements list then it pretty much consists of:

#1: “Make it work”
#2:
#3:

So are we witnessing the death of the ‘requirement’?

You can contrast this with earlier years, where end users would often approach technology proactively with a rough idea. In my experience this would need shaping, but would generally be pretty well formed to begin with. Technology then went ahead and built or implemented it. I daresay at this point some people will throw their hands up in horror at such a simplistic view of the world, but trust me, it can work. The notion of end users suggesting a project and technology responding with “Yes. I see what you mean. Let’s do it” can actually result in success. It really can be that simple.

This, I suppose, suggests a ‘bottom-up’ culture where people lower down the food chain feel confident and empowered enough to make such a suggestion. Furthermore, management ensure they are trusted and supported to see it through. I’m not disputing it can have a downside, of course – individual parts of the organisation doing their own thing, and uncoordinated chaos as a result. “Local optimisations” can spring up. But not always. If people talk and share, it can work perfectly well.

The alternative is where people stop making suggestions, stop attempting to change or innovate and just accept what they have. What sort of a bizarre world is that?

So what of the requirement, then?

I don’t know the answer, frankly. What I do know is that some will say it is a non-issue, because the same people “further down the food chain” know the intimate details of their jobs and processes so well that they can easily bring about the necessary change themselves. This is dangerous nonsense. Technology people need to be in there assisting the users. It’s not unreasonable for users to expect technology people to help educate them and to explain the better or ‘right’ ways of doing their jobs. This, personally, is a major reason why I went into this industry in the first place.

The death of requirements – if that is what it is – is not a good thing. Perhaps it is indicative of technology’s commoditisation and marginalisation. If that is the case, it is wrong and ultimately catastrophically bad for everyone inside and outside of our industry.

The other day an interesting debate took place at work in our regular Business Analysis meeting regarding testing, Test Driven Development and where the involvement of the Business Analyst ends and the QA/Tester’s begins.

It’s a debate that has come up many times in different guises over the years, and I don’t really regard it as a problem that can be “solved” as such. Projects tend to differ from one another, and people’s levels of expertise and knowledge differ too. The discussion partly concerned who “owns” and completes the feature files needed for development to commence.

If you define a feature as a collection of user stories, with each user story giving rise to a series of scenarios, then one solution to the BA/tester divide is that the BA produces the features and user stories (and epics if necessary, which is a different topic) and the QA/Tester then augments these with the scenarios, described in the automatable GIVEN-WHEN-THEN syntax. Alternatively, the BA can list out the scenarios as best they are understood at that point and the two then work together to complete them.

The latter approach is the one I would favour, but in any event it is probably what has to happen, simply because – without wishing to sound too pompous about it – the QA/Tester only knows what you, as the BA, tell them. Unless you have a tester with bags of domain knowledge, a clear-cut hand-off from the BA to the QA/Tester simply isn’t possible. You can’t really expect them to go off and write meaningful tests independently.
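To make the split concrete, here is a small, entirely made-up example (the feature, story and scenario wording are invented purely for illustration). The feature and the user story at the top are BA territory; the scenarios underneath are completed in the GIVEN-WHEN-THEN syntax – by the QA/Tester, or, as I would prefer, by the two working together:

  Feature: Supplier registration
    As a purchasing manager
    I want to register new suppliers
    So that purchase orders can be raised against them

    Scenario: Reject a supplier with no VAT number
      Given a draft supplier record with no VAT number
      When the record is submitted for approval
      Then the submission is rejected
      And the reason given is "VAT number is required"

    Scenario: Accept a fully completed supplier record
      Given a draft supplier record with all mandatory fields completed
      When the record is submitted for approval
      Then the supplier appears in the approved supplier list

Nothing clever is going on there, but even that much makes it obvious how thin the scenarios would be if the tester had to invent them unaided.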

But there is a much more interesting debate to be had around what is needed, in addition to the stories and scenarios, to enable people to build the solution. I have encountered people over the years who think stories and scenarios are all you need. This is rubbish. User Stories are a great technique and TDD has improved things, but it’s important we don’t get carried away and drift into a fantasy world. Not every characteristic and behaviour of a system can be expressed as stories. It just isn’t possible. You need to supplement them with other techniques, or you risk either being swallowed up in a miasma of over-elaborate stories that will have your users and customers zoning out (one of the common problems that used to happen with Use Cases), or creating a shallow, oversimplified view of the world that just isn’t detailed enough.

Our old friend UML can help to supplement the stories. It is unfortunate that many people still insist on misinterpreting UML and believing it is a methodology, somehow at odds with the agile movement. It isn’t – it is simply a communication tool. Of course, if you feel you don’t need it and stories are all you require, then all I can say is – good luck…

A bit of research on the Internet and in some business analysis textbooks leads you to the conclusion that there are five common problems encountered with requirements gathering.

So far so good. Unfortunately there is no agreement on what those ‘five’ actually are.

I have come across these:

  • Customers don’t (really) know what they want
  • Requirements change during the course of the project
  • Customers have unreasonable timelines
  • Communication gaps exist between customers, engineers and project managers
  • The development team doesn’t understand the politics of the customer’s organisation

And these:

  • Teams speak a different language
  • Pushing for development to start
  • Delaying the documentation – “we will document along the way”
  • Keeping requirements feasible and relevant
  • Inadequate review, feedback and closure

And many more.

Everything above is correct of course – these are all perfectly valid. But I have written previously on these pages about a trend in recent years for the technology team to be approached without any detailed requirements at all. “Make it work” is the “requirement” – such as it is – and technology are expected not just to deliver a solution but to create the requirements themselves, on behalf of the customer. I personally don’t have an issue with technology operating like this, though I have encountered people over the years who would throw their hands up in absolute horror. Even more discomfort is in store for those who (wrongly) seem to view technology merely as a ‘supplier’ of solutions – much like the people supplying the coffee machines or photocopiers… Another topic for another time, perhaps.

I have spent a fair proportion of my professional life untangling applications and processes that nobody in the company properly understood. Quite why organisations let themselves get into this situation, when very often they built the applications in the first place (they didn’t appear out of thin air), is an interesting question for another occasion. For now though, let’s just acknowledge that we’re stuck with situations like this.

I’ve met many Business Analysts who aren’t concerned in the slightest about what I have just said. Many even purposely steer clear of looking too carefully at any pre-existing applications. The belief seems to be that looking at what already exists in some way contaminates their judgement.

To be fair, there is something in this. What already exists can be a constraint on people’s thinking. But equally it can give useful insight. We can learn from the past. The reason things are the way they are is often significant.

Technology isn’t just about building something new. Sometimes it is about getting more from what we have. Reverse engineering can help us stabilise and improve what we have, and improve our acceptance testing and our knowledge generally. I find it surprising that it is not used more, and that so many people (analysts, developers and others) try to steer clear of it.

If we can reverse engineer a UFO (as I saw on TV a while back) why not use the technique on software more regularly?

I have an appraisal objective this year to identify the skills and techniques I believe are needed in my role as a BA. The next step is to create a scheme for rating myself against them and then, finally, to perform the rating itself.

This is similar to the skills matrix that most teams I have worked on have had, to one extent or another. I don’t think many people would disagree that it’s a useful tool to have, but the problem is that people tend to devise them from scratch each time. This is time consuming, and inevitably a lot of effort goes into constructing the matrix itself rather than into the whole point of why it exists: to help us get better at what we do and to discuss the techniques themselves.

My approach has been to start by looking at what materials and resources already exist. We can be sure others have been through this process before and have created at least a basis to work from.

Further, there are professional bodies representing our industry. After all, if I wanted to identify the skills and techniques that, say, a surveyor uses, the obvious place to start would be the professional body for surveyors – the RICS.

I have been an IIBA (International Institute of Business Analysis) member for some time, so my first port of call was the IIBA’s BABOK – specifically the ‘Techniques’, ‘Underlying Competencies’ and ‘Other Sources of Business Analysis Information’ sections. This gave a good list, if a little generic in places.

I then moved on to look at what the IIBA’s UK equivalent body – the British Computer Society – has to offer. This happened to coincide with me undertaking the BCS Business Analysis training pathway (part of ISEB, as it used to be called).

The various BCS BA courses all have a detailed syllabus of topics. The BA pathway is made up of a set of mandatory and optional courses, so combining all of these must give complete (or very close to complete) coverage of the techniques, methods and tools required by a BA. Even better, BCS have a predefined scheme for evaluating an individual against the topics according to:

  • Levels of knowledge
  • Levels of skill and responsibility

I now felt I was really getting somewhere. But to make the information easier to interpret, I needed a diagram to map out the commonalities – an ‘at a glance’ view of all the topics across the board and how they link. Some topics appear to a lesser or greater extent in more than one course, but they can still be grouped under half-a-dozen or so major ‘themes’.

This was conducted as a post-it note exercise initially, with each post-it representing an individual syllabus topic. These were then grouped:


Post-it note exercise – Evolving the BA Skills and Techniques under ‘Themes’

And then transferred into a diagram within an Enterprise Architect Model, showing the major ‘themes’ and ‘linkages’:


Skills Matrix Model - Grouped Themes

The finished list, with marking scheme (marks yet to be added), is available here. The list was generated simply by running a report against the Enterprise Architect model.

From here I went on to rate myself against each heading, and thereby complete the exercise.

Final Note

What is possibly missing from the list above is a way of relating the entries on the list back to the high-level groupings on the diagram. Having said that, it is possibly easier and more productive to work from the diagram itself, annotating it while rating yourself and then transferring the results onto the list afterwards. It should also be fairly easy to plug this work into a training and development plan.

Most human endeavour ultimately comes down to a process of repetition and continuous refinement, based on some kind of feedback and improvement mechanism.

In previous and arguably simpler times, people would have just called this ‘practice’.

“Practice makes perfect”, “if at first you don’t succeed, try, try again” and so on is what we are taught at school. And it’s a pretty good mantra for drawing, playing tennis, cooking, driving a car, building houses, installing kitchens and so on. But I say “most” because it starts to go awry when we apply it to large-scale engineering projects. And arguably it often (or even always) goes awry for large-scale bespoke activities generally – regardless of what they are.

This is because once we get into bigger and more complex things, the notion of repetition, practice and continuous refinement cannot apply. We can’t repeat the exact same thing again to get better at it, and anyway the pace of change means that what we’ve done previously isn’t necessarily relevant today.

Not too long ago I overheard someone repeating a phrase I have heard countless times over the years: “An IT project should be like MOT’ing (servicing) a car”. Well, what I have just said explains why this isn’t, and can’t be, the case.

From this we arrive at the theory that maybe the traditional “project” approach doesn’t really work for technology. We need a new approach of some sort.

If we take The Mythical Man Month (first published in 1975, and the stimulus for much work since) as the first solid documentation that “projects” don’t really work for technology, then we have had nearly 40 years of arguing and philosophising about it. At the heart of it all is why IT and technology projects aren’t like servicing cars. It really is time to move on.

If you take away the notion of repetition, practice and continuous refinement, then the only other major technique we have available to us is knowledge management and education. Anyone who knows me, has worked with me or has read this blog will know that, in my view, this is the route to successful outcomes. Not trying to re-invent the traditional “project” over and over. Not applying more and more command and more and more control.

A project will be successful if you have a knowledgeable team. Simple as that.

This is what the ‘agile’ books call “a good team”. What the project concept can’t do is make the team “good” to begin with. Personally, I think agile is a better way of running a project, but agile can only work if the team have a high level of knowledge. If that is the case they will deliver anyway. The techniques we use to organise things – be it waterfall, agile or whatever else, can only optimise a situation that is broadly already working.

I have to say, though, that I am continually frustrated that the notion of the Business Analyst as champion of knowledge management and education isn’t something many BAs seem interested in. I’m not sure why this is. Admittedly it involves thinking somewhat differently, and perhaps the issue is that we are dealing with collective activities that sit somewhat outside the fray and are taken for granted. It involves volunteering information and sharing. Looking beyond the self and helping to ensure people are as knowledgeable and ‘up to speed’ as possible. I’ve spent a fair chunk of my career surrounded by people who aren’t, and who struggle as a result – through no fault of their own. I dread to think how much wasted time and money that equates to.

So, given the rewards that can flow from all this, I find it bizarre and more than a little annoying. But with the dawn of a new year come new possibilities. I hope.

Happy new year!

In my view, projects don’t go wrong because of ‘requirements’. Requirements are merely a convenient hook on which to hang any issues encountered on the project – whatever they may be.

Projects go wrong because of a lack of understanding of how the requirements actually fit into the global game. This in turn is simply down to a lack of knowledge. There is no need to philosophise further. If you have pre-existing systems and infrastructure – as almost all projects nowadays will have – then this problem will increase dramatically if that pre-existing landscape isn’t understood. And frequently it isn’t.

I once attended a post-implementation review for a project that had encountered overruns and a host of other issues. Much time was spent recounting tales of woe about the requirements. Yet it was pretty obvious to me that it wasn’t really the requirements at fault at all. The knowledge deficit (which was not the team’s fault, I would add) meant that any piece of work – big or small, simple or complex – would have caused them a problem. Even leaving requirements out of it entirely and just having a bug to fix would have been a major challenge. The only ‘safe’ way out would have been not doing anything at all.

By contrast, more recently I was involved in a complex set of changes to one of our key systems. The requirement from the business area was little more than ‘make it work’. Yet this was implemented relatively smoothly as it turned out, because – more by luck frankly – we had a knowledgeable team. We were able to shape what needed to be done based on our knowledge and come up with a set of actions that could be played back to the business for comment. In effect we were writing the requirements for them. They would then approve, reject or modify.

The whole situation could well have been very different and potentially disastrous with different people involved.

None of the above is actually to do with the ‘requirements’ per se, in my view. Whilst some would argue that we should not even have gone ahead without something more substantial to start with, you also have to recognise that not everyone is good at articulating requirements. They are not always able to get their objective across, even if buried within it is the kernel of a good idea (and in my experience there generally is one). We in IT have to be alive to this. We should be prepared to put ourselves in the customer’s position and, if necessary, help create the requirements for them. The ‘alternative’ (if that’s what it is) is to do nothing and wait. This is futile and does untold damage to the reputation of our industry. We should be in there proactively collaborating, assisting and doing whatever we can to move things forward swiftly. Not pushing people away and waiting.

Difficulty in creating requirements, and expecting IT to help create them, doesn’t in any way mean the customer is uninterested or lacking in knowledge and commitment. In the example above they were anything but. They simply needed IT to take a creative lead. Once that had been established, they were more confident and more involved.

Neither does it mean there is some sinister rule violation taking place in IT or a flaw in our processes. I personally have no problem with IT getting involved like this and it is central to what IT should be doing in my opinion.  But it is possible only if you have a knowledgeable team.

How to get there is the big question. Sadly this is a much larger challenge than it perhaps should be.

 

I’ve written previously on these pages about the prolonged ‘future of television’ debate which has been simmering away on the back burner of the technology industry for years. The debate centres around two main arguments:

Firstly, so the argument goes, the advent of on-demand Internet and cable-based services renders the traditional ‘channel’ and ‘schedule’ obsolete. Why passively sit in front of a TV, or listen to a radio dispensing a schedule, when you can seek out exactly what you want, when you want it? In other words, if on-demand services are available, why would you want anything else?

Secondly, young people (whatever that means – let’s take it to mean people of school age) are turning their backs on traditional media. The Internet must hold the answers, since this is where they spend their time.

Yet TV and radio audiences are still strong, and despite all the online and on-demand services people still seem to value them and give them high approval ratings. It wasn’t too long ago that I frequently read articles predicting how podcasts and on-demand music services would kill off radio. But radio listening seems to be going up. In television, it is certainly true that audiences aren’t what they were: the days of shows getting 20 million-odd viewers on a Saturday night are over. But the point is that those days were over long before the Internet – and certainly long before any on-demand services let you watch TV content.

As for young people drifting away from traditional media, well, the argument seems to be that because they are not consuming television and radio today, they never will. This is a pretty weird idea because people’s views, opinions and attitudes change over time. How do we know they won’t ever come back later in life? Some people seem to think that by observing young people today, they are in some way predicting the future: that those behaviours will continue forever.

So I don’t accept these two arguments.

The big question is, why does all this perpetuate?

Part of the reason comes from the fact there is a portion of the technology industry that can’t come to terms with the idea of more than one way of doing something: There is something in the DNA that is focused on technology always bringing about total migration, replacement, and a single solution.

And to be fair, it did. At one time. In the 1950s and 1960s, say, technology brought about the wholesale replacement of football-pitch-sized offices of people processing billing or payroll. These were clear migrational changes, and they were beyond dispute. But our modern world is different. The way people consume media and entertainment is fragmented, and as a result the technologies delivering it are fragmented too. People are quite happy having alternatives, even if those alternatives include the tried and tested and the ‘traditional’. Having a microwave in your kitchen does not mean a conventional oven is no longer needed. Blu-ray discs don’t mean the end of cinema. And ‘on demand’ internet services don’t mean the end of the schedule. Far from it.

Like most people, I’m happy with a bit of both please.

My organisation is going through radical change. The change is partly structural, partly a move to different ways of doing what we’ve always done. Both will, it is hoped, open up new opportunities for the future. The change is impacting pretty much all of our IT systems and services in one way or another. Either by wholesale replacement of systems (long overdue in many areas) or by altering interfaces, data structures and so on.

The interesting thing about all of this is what it does to requirements gathering. Not too long ago, the organisation would come to IT with a reasonably well-thought-out idea, and it was for IT to scrutinise this and turn it into something that worked. Once the business analysis and systems analysis had been performed, you might end up with something rather different, but not radically different, from what was proposed: people don’t generally come to IT unless they need to. Face facts, everyone.

Today, the new applications and services being implemented tend to operate at a very high level, and the “business” (or end users) merely contribute to requirements elaboration. They don’t submit formed “requirements” in the way they used to. Their expectation is that “it works”. That is their business requirement. That is not to say they don’t get involved – they often have detailed and passionate input into requirements elaboration and definition – but they don’t drive it. They expect IT to do that.

The conclusion, I suppose, is that we’re moving into an era where IT drive the “requirements” and don’t necessarily wait for the organisation or “the business” (a term I personally hate) to provide them.

I personally have no problem with this at all. It puts IT and Technology firmly in the driving seat of an organisation, where it belongs.

Jason Gorman writes an uncharacteristically strange article on his blog about SQL and relational databases. I take issue with this article on many, many fronts, and I think anyone who spends their time involved with business systems probably would. It’s just odd. There isn’t enough time to fully go into everything I have to say. Maybe a series of posts is needed.

At its heart is an issue that has been around for as long as I’ve been in the industry: “software development” and “information systems” are in no way the same thing. They require different methods, techniques and tools. They often require different underlying technology. I would argue they require a very different mindset and attitude on the part of the people involved. And, if I may say so, even a different outlook on life.

“Software development” is Photoshop. It’s iTunes. It’s Microsoft Office. It’s the latest game. It’s an app you downloaded onto your iPad. It’s also about scary things you rely on written by a section of the computer industry you never get to meet: Printer drivers. DLLs. Firmware in your Television.

That’s all “software”.

“Information Systems” on the other hand are quite different. They are about Business Systems. They are about managing business processes, running your organisation, and giving us competitive advantage and benefits of various kinds.

That’s not to say everything is different. But I don’t, and never have, regarded Information Systems development in the same light as Software development. They are not the same.

Having established that, it is certainly true that relational databases generally form a significant part of the underlying technology of business systems. Recording data while at the same time separating it from the applications that use and operate on it is a fundamental principle, and one that, frankly, has served us well over the years and continues to do so. I don’t agree with Jason’s comments that this only leads somewhere bad, or that it is a practice to be avoided. For business systems, it is a demonstrably good thing.

But separating your data from your application has another big advantage that often gets overlooked: the characteristics and behaviour of the data itself can help influence your design. In Jason’s “outside in” approach (supplemented, to be fair, by much of the current thinking based around “user-centred” design), people forget that data can take on a sort of life of its own. By doing proper data analysis, we can actually discover scenarios and behaviours we or the users would never have thought of. All of this comes BEFORE we design the external interface – or at least before we finalise it. We actually end up with a better, more rigorous design as a result of using a database.
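Here is a small, entirely invented example of what I mean (the tables and columns are made up purely for illustration – a sketch, not anybody’s real schema). Simply declaring the keys and constraints forces questions that no user story volunteered:

  -- Each constraint is a question the data forces you to answer
  -- before any screen is designed.
  CREATE TABLE supplier (
      supplier_id  INTEGER PRIMARY KEY,
      name         VARCHAR(100) NOT NULL,
      vat_number   VARCHAR(20) UNIQUE,      -- can two suppliers share a VAT number?
      status       VARCHAR(10) NOT NULL
                   CHECK (status IN ('ACTIVE', 'SUSPENDED', 'CLOSED'))  -- what states exist, and who moves a supplier between them?
  );

  CREATE TABLE purchase_order (
      order_id     INTEGER PRIMARY KEY,
      supplier_id  INTEGER NOT NULL REFERENCES supplier (supplier_id),  -- can an order exist without a supplier?
      raised_on    DATE NOT NULL,
      approved_on  DATE,                    -- nullable: an order can sit awaiting approval
      CHECK (approved_on IS NULL OR approved_on >= raised_on)           -- can approval pre-date the order being raised?
  );

None of those questions came from a story or a screen mock-up; they fell out of the data analysis. Whether the eventual answer is ‘yes’ or ‘no’ hardly matters – the point is that they got asked early.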

Back in the day, data-centred techniques were often criticised for being “slow” and “complicated” – people seemed to forget that the reason for the “slowness” was that the technique was helping you tease out some of the unglamorous obscurities of the system that you or your users might not have considered.

Therefore it’s time well spent.

Of course, you wouldn’t build a game, iTunes, or a printer driver that way, but that’s not the industry I or my fellow business systems people are in. It’s not “software”…