Evidence suggests that mega infrastructure projects are usually money pits where funds are simply swallowed up without delivering sufficient returns.
Faced with the worst global recession since the Second World War, governments all over the world confront a stark choice: pump billions of dollars into public spending to help stimulate their economies, or risk rising unemployment and falling output in industries hit hardest by the recession, such as manufacturing and construction.
A number of major publicly funded initiatives have already been given the “green light” to help kick-start their national economies and throw a lifeline to “at risk” industry sectors. In November 2008 China announced an internal economic stimulus package of USD586bn for infrastructure projects, Italy announced an EUR80bn capital injection plan and Russia announced an economic stimulus package of USD20bn. India has also earmarked USD475bn, and in February 2009 the US announced a USD787bn stimulus package aimed at improving its own infrastructure, as well as boosting the national economy and keeping unemployment levels down. Most European countries have also announced funds for infrastructure projects. In February France earmarked EUR4bn for national infrastructure improvements, and Germany has approved its biggest economic stimulus plan in over 60 years, worth around EUR50bn.
In fact, even before these recent announcements, investment in massive infrastructure projects had never been higher. Between 2004 and 2008 China spent more on infrastructure in real terms than in the whole of the twentieth century, with the country building as many miles of high-speed rail as Europe has in two decades. Furthermore, of the estimated USD22trn to be spent on infrastructure improvements globally over the next decade, half is earmarked for projects in emerging economies. It is the biggest investment boom in history.
But governments may be throwing these vast sums down the drain if the harsh lessons of earlier mega infrastructure projects are not learnt. The Channel Tunnel cost double its original budget and only returned a profit 20 years after the project started. Denver’s international airport ended up costing triple what had originally been planned, and Sydney’s Opera House – amazing as it might look – still holds the world record for the worst project cost overrun, at 1,400 percent over budget. Its construction started in 1959, before either drawings or funds were fully available, and when it opened in 1973 – 10 years later than the original planned completion date and considerably scaled down – the building had cost AUD102m rather than the meagre AUD7m budgeted.
In fact, nine out of ten projects routinely overrun on costs – a shocking figure that has remained largely constant for 70 years. And the problem is global. Major projects examined from around the world have all suffered from the same drawbacks – poor project understanding, poor risk management and poor leadership, resulting in a major mismatch between expectation and reality. According to project management consultancy PIPC’s Global Project Management Survey, released in February 2008, 94 percent of respondents from the UK and Europe say project-based working is of critical importance to their business success, yet just 30 percent say they successfully deliver projects. US companies seem to fare marginally better: 55 percent of US companies state that project management is not regarded as critical to business success, yet 40 percent of the projects they undertake are successfully delivered. Meanwhile, 21 percent of businesses in the Asia Pacific region believe project management to be critical to their business success, but the projects they initiate are unlikely to deliver the desired results: only 18 percent are delivered on time and only 23 percent to budget.
So why do projects go so badly wrong? There are three main explanations. The first is that the data on which the assessment is based is inadequate – or just plain wrong. “Sometimes project teams pick up an idea and run with it before critically evaluating the desired outcomes and alternatives – they are just being too keen to get started,” says Mark Westcombe, a lecturer at Lancaster University Management School. “Even when they stop and think first, this often gets forgotten once a contract is handed over from senior management to a project manager,” he adds.
The second reason is what is called “optimism bias”, whereby people are overly optimistic about what can be achieved with the resources and deadlines available and talk up the benefits the project will deliver, rather than looking at what it will take to get the project to deliver them. “We naturally believe we can achieve more, in less time, than historical data demonstrate,” says Dr Cliff Mitchell, senior fellow and deputy director of the BP Managing Projects Programme at Manchester Business School, University of Manchester. “There is also a Western bias towards unrealistic macho management: we can get it done – we just need to drive harder.”
The third reason is “strategic misrepresentation” – in other words, deception. “There are perverse incentives and rewards for making the project look good on paper in order to win the contract, so companies deliberately provide clients with ambitious and unrealistic cost estimates and delivery timetables in order to win the work,” says Professor Bent Flyvbjerg, director of the BT Centre for Major Programme Management at the University of Oxford’s Saïd Business School.
Experts say that responsibility for poor project management and delivery rests squarely with management and its inability to plan sufficiently or to define what project “success” actually means. Dan Hooper, director of Piccadilly Group, a business and technology advisory group specialising in risk management and independent assurance, says that a major obstacle is that very few companies know how to define and agree on what the success criteria of a project should be, or how to measure that “success” against the benefits the project was sold on.
“Each stakeholder will have different requirements and perceptions of what a project should deliver,” says Hooper. “As a result, the majority of stakeholders will feel the project failed to deliver their expected benefits. This is due to not being able to prove the project was a success in tangible terms, such as producing cost savings, or in terms the stakeholders can understand.”
Hooper also says that organisations still tend to opt for the cheapest bid rather than for service quality, because they believe that the operations they want to contract out are simple to perform. He cites organisations outsourcing their call centres as an example. “A company outsources its call centre and makes an operational saving of 50 percent. But, because of the poor service, the company suffers high customer turnover, and the lost revenue means the true cost ends up greater than the initial saving. As a result, the project is then deemed a failure.”
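To make that arithmetic concrete, here is a minimal sketch in Python. The 50 percent saving comes from Hooper’s example; every other figure is invented purely to illustrate how a headline outsourcing saving can be more than wiped out once churn-driven lost revenue is counted.

# All figures are illustrative; only the 50 percent saving comes from Hooper's example.
in_house_cost_m = 10.0      # annual cost of running the call centre in-house (GBP m, invented)
outsourced_cost_m = 5.0     # outsourced cost, i.e. the 50 percent operational saving
lost_revenue_m = 7.0        # annual revenue lost through customers leaving over poor service (invented)

saving_m = in_house_cost_m - outsourced_cost_m
net_effect_m = saving_m - lost_revenue_m

print(f"Headline saving: GBP{saving_m:.1f}m")
print(f"Net effect after lost revenue: GBP{net_effect_m:.1f}m")  # negative means worse off overall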
Keith Braithwaite, head of technology at consulting firm Zuhlke Engineering’s Centre for Agile Practice, believes that the leading cause of project failure is poor or misunderstood requirements from the outset. In his experience of managing IT projects, he says that “it is not so much that programmers do a bad job of writing the code (although that does happen) but that they write the wrong thing.”
He adds: “One response to this syndrome is to try to ‘fix’ or ‘nail down’ requirements before designing a solution to address them. On very small projects this can almost be made to work, but on large projects the time spent doing requirements engineering has the unintended consequence of increasing the probability that the wrong thing will be built. All the while that requirements are being gathered and analysed the world is moving on. With very large projects this can result in a system being built to address (at best) the needs of an organisation from several years in the past.”
Such examples abound in long-term public sector IT and defence projects, due either to the original technology rapidly becoming outdated during the project’s lifecycle, or to the client wishing to update other systems and equipment with the same technologies as an extension of the project. In its report released on 12 March this year, the UK government’s spending watchdog, the National Audit Office (NAO), found that the National Offender Management Service’s (NOMS) plan to build a single offender management IT system for the prison and probation services had not delivered value for money in the five years it took to build. The NAO found the project had been hampered by poor management, leading to a three-year delay, a doubling in project costs and reductions in scope and benefits. In fact, the core aim of the original project – a single shared database of offenders – will not be met.
The project to provide an IT system to support a new way of working with offenders was to be introduced by January 2008, and had an approved lifetime cost of GBP234m to 2020. By July 2007, GBP155m had been spent on the project, it was two years behind schedule, and estimated lifetime project costs had risen to GBP690m – nearly triple the original estimate. The NAO found that the project “suffered from four of the eight common causes of project failure in full and three in part”.
Other European countries have also suffered a drain on the public purse through poor planning and over-optimism. In January 2003, Toll Collect – a consortium of DaimlerChrysler, Deutsche Telekom and Cofiroute of France – was scheduled to start tolling heavy trucks on German motorways for the Federal government. Within 12 months the project was falling apart. The developers had been too optimistic about the software that would run the system. Delays were costing the government EUR156m a month in lost toll revenues, estimated to total EUR6.5bn before the problems could be fixed. For lack of funds, all new road projects in Germany and related public works were put on hold, threatening 70,000 construction jobs. The German transport minister cancelled the contract with Toll Collect and gave the company two months to come up with a better plan, including how to fill the revenue shortfall.
Experts say that ultimate responsibility for project failure – and success – rests with management, especially its role in leading the planning and implementation stages. But what often happens – and is evident in high-profile failures – is that senior management tends to defer responsibility to a project manager once the deal has been signed off. Peter Andrew, principal consultant at management consultants EA Consulting Group, says that “once projects get the green light, senior management tend to take a back seat and leave it to a project management team that was not party to the negotiations and has no relationship with the contractor it must manage. There also tends to be a culture in which any problems that occur during the project’s lifecycle are the fault of the project manager, because senior management drew up the actual plans, which only need to be followed and implemented. It is little wonder, therefore, that problems are not flagged up, or are not flagged up in time.”
Alistair Maughan, partner at international law firm Morrison & Foerster, largely agrees. “If managers spent half the time actually looking at whether the project is necessary or whether the figures are right rather than the legal details of the contract, a lot of grief would be saved,” he says.
“There is a tendency for executives to spend more of their time hammering out the details of the contract than actually planning what the project is supposed to deliver, and why those deliverables are necessary,” says Maughan. “As a result, the plan can be fatally flawed from the outset, but the terms and conditions of what the contractor is supposed to be doing – even if they’re wrong – are just about written in stone. This means that senior management could be handing some poor project manager – who was not party to the negotiations – a total turkey that he is responsible for delivering and could easily take the blame for if (or when) it goes wrong. The governance of these projects, both at initiation and in delivery, is often poor and is a key area where they fail.”
Professor Flyvbjerg says that there are ways to improve the situation. He argues that institutions proposing and approving large infrastructure projects should share financial responsibility for covering cost overruns and benefit shortfalls resulting from misrepresentation and bias in forecasting, an arrangement that helps align incentives.
The UK has already wised up to this after criticism of central government departments’ inability to manage large-scale projects. The Department for Transport now requires all large infrastructure projects seeking its funds to have a minimum local contribution of 10 percent of the gross cost (25 percent for light rail) in order to gain programme entry, on the grounds that “if an authority has a financial stake in a scheme, this provides a clear incentive to ensure that the right structures and resources are in place to bring it to fruition to time and budget.” It has also started to halt planned projects whose budget forecasts are inaccurate.
On top of that, local authorities are liable to pay 50 percent of any increase in the cost of the scheme over the quantified cost estimate up to a designated approved scheme cost. So, for example, if a project is estimated to cost GBP100m but the Department for Transport has agreed that the project could go as high as GBP140m, then the projected GBP40m overspend would be shared equally between the government and the local authority. However, if the project ended up costing GBP180m, the Department for Transport’s share of the GBP80m overspend would still be capped at GBP20m (half of the original agreed overspend limit) – leaving the local authority to pick up the remainder.
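The rule can be expressed as a simple calculation. Below is a minimal sketch in Python of the cost-sharing arrangement as described above, reproducing the GBP140m and GBP180m scenarios; the function name and structure are illustrative, not an official Department for Transport formula.

# Illustrative sketch of the cost-sharing rule described in the text.
def overspend_split(estimate_m, approved_cap_m, actual_cost_m):
    """Overspend up to the approved scheme cost is shared 50/50 between the
    Department for Transport and the local authority; anything beyond the
    cap falls entirely on the local authority."""
    overspend = max(0, actual_cost_m - estimate_m)
    shared = min(overspend, approved_cap_m - estimate_m)  # portion shared equally
    dft_share = shared / 2                                # DfT's contribution is capped here
    local_share = overspend - dft_share
    return dft_share, local_share

# The worked example from the text: GBP100m estimate, GBP140m approved scheme cost.
for actual in (140, 180):
    dft, local = overspend_split(100, 140, actual)
    print(f"Outturn GBP{actual}m: DfT pays GBP{dft:.0f}m, local authority pays GBP{local:.0f}m")

For an outturn of GBP140m each side pays GBP20m of the overspend; at GBP180m the Department for Transport still pays GBP20m while the local authority picks up GBP60m, matching the example above.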
Besides making local decision-makers more accountable for managing projects, Professor Flyvbjerg says that there are now established methodologies that can dramatically improve project forecasting. For example, he says, organisations can make wider use of reference class forecasting (RCF) to improve due diligence at the start of a project and throughout its lifecycle. RCF is a benchmarking tool that compares the current project with completed projects of a similar class and scale. Once these projects are compared, the organisation gets a better view of the probable budget, timeframe and deliverable benefits.
RCF was first used – successfully – in the planning of the Edinburgh tram system in 2004, when cost estimates and delivery times were studied across 46 comparable rail projects. By examining their outcomes, planners got a clearer indication of the potential problems they could face and a more realistic idea of what the eventual budget would be. As a consequence, the original estimate was increased by over a third, and the project was realised within the revised budget. RCF is now mandatory for some UK government projects, such as Treasury projects costing over GBP40m and Department for Transport projects costing more than GBP5m. The method is also used in the Netherlands, Denmark, Switzerland and South Africa.
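In practice, an RCF uplift can be read off the distribution of cost overruns in the reference class: choose an acceptable risk of exceeding the budget, then take the overrun ratio at the corresponding percentile of past outcomes. The Python sketch below shows the general idea with invented overrun ratios and an invented base estimate; it is an illustration of the technique, not the model used for the Edinburgh trams.

# Sketch of a reference class forecasting (RCF) uplift. The overrun ratios
# (actual cost / estimated cost) and the base estimate are invented for illustration.
def rcf_uplift(overrun_ratios, acceptable_risk=0.2):
    """Return the uplift factor to apply to the base estimate so that the chance
    of exceeding the uplifted budget is roughly acceptable_risk, judged against
    the empirical distribution of past overruns."""
    ordered = sorted(overrun_ratios)
    index = min(len(ordered) - 1, int(len(ordered) * (1 - acceptable_risk)))
    return ordered[index]

reference_class = [1.05, 1.10, 1.18, 1.22, 1.30, 1.35, 1.44, 1.52, 1.70, 1.95]
base_estimate_m = 375  # base cost estimate in GBP millions (illustrative)

uplift = rcf_uplift(reference_class, acceptable_risk=0.2)
print(f"Uplift factor: {uplift:.2f}")
print(f"Risk-adjusted budget: GBP{base_estimate_m * uplift:.0f}m")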
However, such forecasting methods are still not widely used, and where they are, they are mainly restricted to relatively low-cost projects (although RCF is being used on the GBP16bn Crossrail project). One can therefore still expect a slew of mega projects to fail, and trillions of dollars to vanish.