Risk management fails in IT, and Richard Stiennon (chief research analyst at IT-Harvest) has a few points to back up this belief. He argues that the risk management practices that work in business cannot be translated to IT, and that the trouble starts at the very first step of risk management: identifying assets. From his article:

"The concept behind risk management is that you assign a value to each asset. There are many algorithms for doing so. It usually involves a cross-functional team meeting and making at least high-level determinations. But it is obviously impossible to assign a dollar value to each IT asset. Is it the cost of replacing the asset? That might work for a lumberyard, but an email server might have a replacement value of $2,000 while the potential damage to a company from losing access to email for an extended period could be millions of dollars in lost productivity. What about the value of each email? How much is one email worth? Ten cents? Zero? What about the internal email between the CFO and the CEO on the last day of the fiscal year warning that they missed their targets? Its dollar value is zero, but the risk from that email getting into the wrong hands could be the loss of billions in market capitalization."

The list goes on: it is hard, if not impossible, for a group to assign value to IT assets. How do you put a value on email, and is an email from one person more important than one from another? And what happens when you identify an IT asset that turns out to have little to no value: wouldn't that asset naturally be eliminated? Stiennon's other point is that risk management eventually becomes a practice of protecting everything, which generally collapses into a single score, and a single score isn't very effective at all.

Give the article a look and let us know what you think: is Stiennon just sour on the idea of risk management, or are there some valid points in his arguments?
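To make the valuation problem concrete, here is a minimal sketch of the textbook annualized loss expectancy (ALE) calculation (SLE × ARO) that classic risk management prescribes. This is our illustration, not Stiennon's code; all asset names and dollar figures are hypothetical, loosely drawn from the examples above:

```python
# Naive textbook risk model: ALE = single-loss expectancy (SLE) x
# annual rate of occurrence (ARO). The dispute is over which SLE to use:
# replacement cost or actual exposure. All figures are hypothetical.

assets = {
    # name: (replacement_cost, exposure_if_lost, annual_rate_of_occurrence)
    "email server": (2_000, 5_000_000, 0.1),       # cheap box, huge outage cost
    "CFO/CEO email": (0, 2_000_000_000, 0.001),    # zero "value", massive risk
    "lumber inventory": (100_000, 100_000, 0.05),  # replacement cost works here
}

for name, (replace, exposure, aro) in assets.items():
    # The two SLE choices give wildly different answers for IT assets.
    print(f"{name}: ALE by replacement = ${replace * aro:,.0f}, "
          f"ALE by exposure = ${exposure * aro:,.0f}")

# Rolling everything up into one number, the practice Stiennon criticizes,
# hides which asset actually carries the risk.
single_score = sum(exp * aro for _, exp, aro in assets.values())
print(f"single aggregate risk score: ${single_score:,.0f}")
```

The gap between the two columns is his point: for a lumberyard the columns agree, while for the email server they differ by three orders of magnitude, and the aggregate score erases the distinction entirely.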