The data is in, and it points to an uncomfortable truth: IT organizations are simply not documenting their failures well enough. When lessons learned become lessons hidden, what does that say about IT? In an article for IEEE Spectrum, Robert N. Charette examines some especially severe mistakes revealed by the past decade's data on IT development projects and operational failures.
In Search of Failure
Charette finds that one of the largest problems with the data IEEE collected was that not all of it was trustworthy. The number of projects in the planned report quickly dwindled to around 200 because so much of the failure data was unreliable. This scarcity of quantifiable information held across all sectors, in large part because corporations and governments are reluctant to advertise their major blunders. In the government's reports specifically, some of the reported cost and schedule figures had been altered by the agencies that wrote them, making the data unreliable to study.
One noteworthy case was the $1 billion U.S. Air Force Expeditionary Combat Support System (ECSS). This seven-year project was audited on multiple occasions, even by an impartial third party, yet the findings still came back inconclusive as to how much money was ultimately spent.
This lack of dependability cannot continue. In the future, when companies publish their data, it would be immensely helpful to include a simple chart or timeline showing, at a glance, the numerical data most useful for studies like this one. It is also important to state explicitly whether the project has been extended, re-scoped, or reset.
Don't forget to indicate how any such deviation affects the aforementioned statistics. Finally, if a project has been canceled, include the opportunity costs in the final cost accounting. The failure of ECSS, for example, is still costing the Air Force billions of dollars annually because legacy systems that should have been retired by now must continue to be maintained.
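To make the reporting advice above concrete, here is a minimal sketch of what a self-contained project-failure record might look like. Every field name and all of the numbers are hypothetical placeholders (they are not actual ECSS figures, which the audits never pinned down); the point is simply that overrun, slip, status, and ongoing legacy costs can be captured in one small, comparable record.

```python
from dataclasses import dataclass

@dataclass
class ProjectReport:
    # All field names and values below are hypothetical; no standard schema is implied.
    name: str
    planned_cost_musd: float           # planned cost, millions of USD
    actual_cost_musd: float            # actual or latest estimated cost
    planned_months: int                # planned schedule length
    actual_months: int                 # actual schedule length to date
    status: str                        # e.g. "active", "re-scoped", "canceled"
    annual_legacy_cost_musd: float = 0.0  # ongoing opportunity cost if canceled

    def cost_overrun_pct(self) -> float:
        """Percentage over (or under) the planned budget."""
        return 100.0 * (self.actual_cost_musd / self.planned_cost_musd - 1)

    def schedule_slip_pct(self) -> float:
        """Percentage over (or under) the planned schedule."""
        return 100.0 * (self.actual_months / self.planned_months - 1)

# Placeholder example, not real project data:
report = ProjectReport(
    name="Example Project",
    planned_cost_musd=1000, actual_cost_musd=1100,
    planned_months=60, actual_months=84,
    status="canceled", annual_legacy_cost_musd=500,
)
print(f"{report.name}: {report.cost_overrun_pct():.0f}% over budget, "
      f"{report.schedule_slip_pct():.0f}% behind schedule ({report.status})")
```

A record like this, published alongside the narrative, would give researchers the quick-glance numbers that the IEEE study struggled to find.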
The financial numbers are important, but so are the other statistics. If a project is failing, what are the consequences for the people both inside and outside the organization? This kind of insight sheds important light on whether everything is actually working. Ontario's C$242 million Social Assistance Management System (SAMS) is a prime example: despite the system not working correctly, the government still presents it in an optimistic light.
You can read the original article here: http://spectrum.ieee.org/riskfactor/computing/it/how-to-shine-a-light-on-it-project-failures