31 October 2010

The project failure & cloud connection

Around 2000, I worked on a project with a co-worker – he was maybe fifty and had worked in the IT industry, initially at IBM and then contracting and consulting, for nearly thirty years. At the time, he was heading up a web-development and creative team for a small consultancy firm, which he saw as a new opportunity. He had a good reputation, people liked him, and he had a vast amount of experience from being involved in literally hundreds of projects – developing, managing, business development and so on.

One day he slipped into conversation – as an aside, really – that he had never worked on a project that was delivered. Every project he had worked on had been cancelled, merged into another programme, failed to meet its financial goals (as with infrastructure consolidation), or his involvement had ceased before it ever went live (if any of them ever did – he didn’t know). He said it in a resigned manner, almost as a badge of honour: he had got through all of that with no job satisfaction, in terms of delivery payoff, whatsoever.

I have seen this scenario repeated many times in the decade since, although perhaps never quite so extreme. I have myself worked on hundreds of projects in many roles and can count the number of successful systems deliveries on one hand. People naturally want to associate themselves with successes rather than failures, so it is perhaps understandable that this is not a common topic among consultants.

A whitepaper released late last year attempts to put a figure on the worldwide cost of IT project failure. That figure turns out to be $6.2 trillion, and it doesn’t look like sensationalism. The US in particular is apparently losing almost as much money per year to IT failure as it did to the financial meltdown (with no end in sight). The paper makes an attempt to factor in what it calls indirect costs: essentially the lost opportunity cost of the time wasted on failed or abandoned projects. It does not, however, take into consideration the wider indirect costs of people training for careers that are not actually delivering, IT staff disillusionment (turnover), operational failures of delivered IT systems (one in five businesses loses £10,000/hour through systems downtime) and associated security failures.
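
To make the scale concrete, here is a back-of-envelope sketch in the spirit of that estimate. Only the 65% failure-rate assumption (discussed below) and the $6.2 trillion headline come from the paper; the worldwide spend figure and the indirect-cost multiplier are illustrative assumptions of mine, not the paper’s actual inputs.

```python
# Back-of-envelope sketch of a failure-cost estimate. The spend and
# multiplier figures are illustrative assumptions, not the paper's
# actual inputs; only the 65% failure-rate assumption and the $6.2T
# headline come from the text above.

ANNUAL_IT_SPEND_USD = 3.0e12   # assumed worldwide annual IT spend
FAILURE_RATE = 0.65            # the paper's base assumption
INDIRECT_MULTIPLIER = 2.0      # assumed lost-opportunity factor on direct waste

direct_waste = ANNUAL_IT_SPEND_USD * FAILURE_RATE
indirect_waste = direct_waste * INDIRECT_MULTIPLIER

print(f"Direct waste:   ${direct_waste / 1e12:.2f}T per year")
print(f"Indirect waste: ${indirect_waste / 1e12:.2f}T per year")
print(f"Total:          ${(direct_waste + indirect_waste) / 1e12:.2f}T per year")
```

With those assumed inputs the total lands at roughly $5.9 trillion per year – in the same ballpark as the headline figure, which is the point: even generous tweaking of the inputs leaves a staggering number.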

The paper has received some condemnation (due to its base assumption of a 65% IT project failure rate), but there is a dearth of analysis in this area and quoting this figure is about as good as it gets right now. Figures of 50–80% have been quoted in one form or another for decades. CIO thinks rates are actually rising due to the recession. People have the choice of either working with a figure (challenging its assumptions and statistics) or burying their heads in the sand.

We will never know the exact figure for IT project failure. Similarly, we will never know whether the efficiency and functional benefits of the systems that have worked have paid for the failures – i.e. what has IT really given us? We are simply reliant upon these systems going forward. So what can we do to reduce future project failure rates?

Although there are superficial similarities between the scientific/experimental community and the IT/project community, the "accept defeat" approach of the former is rooted in constant learning, whereas IT is surely about delivering benefit now. Learning is only the priority for the largest, most stable, lowest-turnover organisations – that is to say, next to none of them. The scientific regimen of independent assessment, however, is invaluable for IT projects. Tool-based PM consultants such as Bestoutcome are probably as good as the big management consultants for this purpose, though. Techniques from the engineering community, when introduced to IT, have not had a huge effect.

The paper ends with a call to arms to simplify – IT/business communications, project goals and so on.

I agree in principle, but this is an oversimplification when your realm of influence is only the organisation you are in. I have seen many projects that, although conceptually simple and with genuine IT/business agreement, start to fail the moment integration with other organisations – vendors, hosting providers, recruiters, sub-contractors, data sources – is required. Despite solutions being simple and manageable inside your organisation, just a few touch-points with others (basically anything worth doing) make them complex, and therefore unpredictable and at risk, no matter what your collective capabilities. There is even a case for blinkered simplification and procedures actually contributing to project failure: complexity at least brings an element of flexibility, allowing you to react if the project starts going bad.
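
As a toy illustration (mine, not the paper’s) of why "just a few" touch-points bite: if every organisation involved may need to coordinate with every other, the number of coordination channels grows quadratically, so a conceptually simple project with a handful of external parties is already a complex one.

```python
# Toy illustration: pairwise coordination channels between n
# organisations grow as n(n-1)/2 - quadratic, not linear.

def coordination_channels(parties: int) -> int:
    """Number of pairwise links between `parties` organisations."""
    return parties * (parties - 1) // 2

for n in [1, 2, 4, 6, 10]:
    print(f"{n:2d} organisations -> {coordination_channels(n):3d} channels")
```

Two organisations share one channel; six share fifteen. Each channel is a place where assumptions, contracts and schedules can silently diverge.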

Better project managers, SOA, ramped-up co-worker involvement, Facebook-like "hackathons", daily IT/business meetings, PRINCE certifications, extranets, more rigorous cost control and mathematical complexity models within your organisation will have minimal effect on the success of your multi-touch-point projects – for they are already in the realm of chaos. Even improved PM collaboration through tools such as Asana will have a minimal effect on success rates. The role of the “good project manager” is perhaps the most scrutinised, personality-driven, divisive and misunderstood of all IT positions. Radical open-enterprise models such as BetterMeans, which effectively remove project managers in favour of automation and decentralised decision-making, will similarly be ineffective. That said, I do agree that the open-enterprise model will ultimately prevail (crowd-sourcing, creativity enabling and, ultimately, efficiency/cost) – but not for decades, because the failure rate needs to be attacked directly first.

Although there are certainly sizeable increases in success to be had if you are experienced in the particular technology, the IT project failure rate will only consistently and materially fall once there are flexible cloud services from which organisations can get 80% of all their needs simply by subscribing.

Integration then ceases to be the bottleneck. The other 20% would be “secret sauce” value-add developed in-house: probably mainly algorithms that, by definition, do not require integration with other organisations or heavy project governance. Other components of the 20% would be device-specific exploitation code – essentially building the so-called App-Internet model (rather than full cloud). Organisations already recognise the economic justification for cloud computing, so it is perhaps inevitable that project failure rates will eventually fall by default.
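
As a hypothetical sketch of that 80/20 argument – the risk figures below are assumptions for illustration, not measured rates – blended delivery exposure falls sharply once most needs are subscribed to rather than built:

```python
# Hypothetical 80/20 sketch. The rates are assumptions: bespoke
# multi-touch-point delivery at the oft-quoted ~65% failure rate,
# subscribed cloud services assumed to carry far lower delivery risk.

BESPOKE_FAILURE_RATE = 0.65       # oft-quoted base assumption for traditional projects
SUBSCRIPTION_FAILURE_RATE = 0.10  # assumed: subscribing carries little delivery risk

cloud_share = 0.80    # needs met by subscribing to cloud services
bespoke_share = 0.20  # in-house "secret sauce" that remains a project

blended = (cloud_share * SUBSCRIPTION_FAILURE_RATE
           + bespoke_share * BESPOKE_FAILURE_RATE)
print(f"Blended failure exposure: {blended:.0%}")  # ~21%, versus 65%
```

The exact numbers matter less than the structure: once only the 20% bespoke slice carries full project risk, the overall exposure drops by default.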

Cloud transition will take organisations years yet, however. Both InfoWorld and Gartner have arrived at 2013 as roughly the point when the majority of organisations will run on the cloud. This may be optimistic. In the interim (three or more years of high project failure rates?), delivery is simply better served by being built upon cloud solutions now – building partnerships with cloud providers where organisations can, and leveraging the providers’ buying power where they cannot. Likewise, interim or limited-functionality cloud solutions should be considered in preference to bespoke development and on-premises deployment. In a real sense, the project (that they would otherwise have completed themselves) becomes a creativity-driven architectural investment and commercial partnership instead. Both Project Management and Enterprise Architect roles will need to shift accordingly.

It would be interesting to see any future IT project failure analysis split between organisations that have implemented virtualisation, those that have implemented private cloud and those that have implemented public cloud. The project failure and cloud connection is not well documented.
