
Interview with Tim Lister – Risk Requirements and Identification

Could you tell us a little about yourself and your background, how you got where you are and what you are working on today?

TIM LISTER: I started in this business as an Assembly language programmer in 1972, right after graduating from Brown University. Fred Wang, the son of the founder of Wang Laboratories, was a good friend of mine at Brown. He actually had a computer installed in his dorm room, which was rare in those days. Fred introduced me to FORTRAN, and together we began to write little programs to crunch numbers for labs. I became pretty good at it. As a senior, I was thinking about going to graduate school for a PhD in English, but at twenty-one I really needed a break from school. Fortunately, another Brown friend, who had graduated the year before and was working in computers down on Wall Street, urged me to come down there for an interview. I got the job and began working with an exciting young group. In 1972, it seemed everyone in the field was young. My boss's boss was only twenty-nine. We had much more freedom than novice programmers have today. We were writing messy systems in Assembly language and moving code into production on our own signatures. It's hilarious to think what we could have done if we had been malicious. The head of our group was a real experimenter, more of a systems intellectual than a strict manager, and he was particularly interested in software development. Consequently, he hired a young Englishman out of Hopkins Ltd. to join us in 1973. This young Englishman had exciting new ideas about software and eventually turned out to be Michael Jackson of software design fame. We took classes with him and he stayed and consulted. He helped us build control-level macros and write rigorously structured code in Assembly language. Soon all our control structures were at a higher level than just branching. This got me intrigued by the structure of programs and systems. I began jumping around within the company and even became a technical team lead in 1975.
I wanted something close to the cutting edge, however, and signed up for a course on structured design which Ed Yourdon was giving. I was a real wise guy in class, asking a lot of questions, but I must have impressed Ed because he offered me a job with his company. I joined Yourdon Inc. as its eighth employee. It was a very exciting place to work and, as it grew, all sorts of wonderful people, like Tom DeMarco, Jerry Weinberg, and Tom Klum, came aboard. Eventually I was put in charge of the training and consulting group. By 1983, however, a lot of us were no longer happy with the direction in which the company was headed, so Tom DeMarco and several others, including me, decided to form our own company, the Atlantic Systems Guild. We are still together, four of us in the States and three in Europe. We still have a grand old time consulting and running seminars. We still enjoy pestering each other with our ideas about building systems. Tom DeMarco and I are particularly close, having worked together for twenty-five years on a great variety of projects. We co-wrote the first edition of Peopleware in the early nineties and then the second edition in 1999. We had become intrigued by the idea that all interesting software projects were full of risk and uncertainty. Back in the early nineties, risk management was an avant-garde field. As consultants, we started to talk to our clients about risk, and we wound up writing Waltzing with Bears about the evolution and future of risk management for software-intensive projects. Our central idea is that risk management is as fundamental a component of projects as scheduling and tracking. Right now, I have a lot of irons in the fire. I continue to consult on large-scale projects which pose a combination of technical and organizational challenges.
I'm also interested in very early front-end processes: the whole requirements issue, the whole notion of unearthing risk requirements, and — even before that — the whole question of how projects are born. Most organizations say they have a backlog of projects. But how do they decide which ones to run and which to keep on backlog? And how do they shape the notion of the project? Who makes the call? In the early nineties, almost everyone was worried about software process improvement and project efficiency. I had a horrible feeling back then that many companies were really efficiently doing the wrong projects. I am interested in unearthing the decision making that gets a project chosen for development.

How would you actually define risk management and its key components?

TIM LISTER: Risk management is the process of unearthing both uncertainty and risk in projects. It asks what the unwanted possible consequences of an event or a decision are. The essence of risk management in software is to help us decide whether to deal with problems before they appear or to wait till they clearly emerge and then deal with them as problem management. I've been in software all my life, so I may be biased, but I think software particularly lends itself to risk management because more often than not you have to fight early problems rather than late ones. A classic early problem occurs when a looming deadline looks tight and you may need to hire more people to make it. Getting people early and integrating them into the project can help you enormously, whereas waiting too long to hire and train additional people is often wasteful and useless. Risk management is about understanding when to make decisions. It involves a conversation among all of the stakeholders – the technical people, the sponsors, the users, the managers – about the best time to make decisions. At risk time, we ask whether to spend some money to lower the probability of a problem, to lower its cost should it occur, or some combination thereof. Risk management assumes that careful study of a plan will result in someone spotting a potential problem. Risk management says, “Let's identify and understand the risks early, determine their root causes, and decide whether it makes good business sense to spend money before or when the problem hits.” There is no way to identify all the risks at the start of a large project and manage them down. A great risk manager is one who not only watches the risk at hand but also searches, through consultation with others, for new risks likely to appear as the project's environment changes. The major components of risk management are identification, evaluation, prioritization, and strategizing.
Just because somebody identifies or “nominates” (a term I prefer) a risk early doesn't mean we are going to do anything about it. We might accept it and pay for it later if we judge there is no advantage in tackling it now. Another aspect of evaluation is judging the probability of a risk becoming a problem. Are we looking at a one-in-a-thousand risk or a 50/50 risk? And then there's cost evaluation. If the risk hits, what's it going to cost us in terms of manpower, schedule delay, and money? In the prioritization component, we ask which risks are the most important in terms of probability and cost. Finally, there is the strategizing component, where we ask such questions as the following. When do we make the call? What are our options here? How long can we delay before we take action, or should we act immediately and mitigate the risk up front?
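The evaluation and prioritization components Lister describes – estimate a probability, estimate a cost, then rank – amount to computing an expected cost ("risk exposure") for each nominated risk. A minimal sketch, with entirely hypothetical risk names and figures (none of these come from the interview):

```python
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float  # chance the risk becomes a problem (0..1)
    cost: float         # impact if it fires, in person-weeks

    @property
    def exposure(self) -> float:
        # Expected cost = probability x impact; a common way to put
        # very different risks on one comparable scale for ranking.
        return self.probability * self.cost

# Hypothetical nominated risks from an identification session.
nominated = [
    Risk("new vendor toolchain unproven", 0.5, 12.0),
    Risk("key architect may leave", 0.1, 30.0),
    Risk("interface spec still unstable", 0.7, 8.0),
]

# Prioritization: highest expected cost first.
ranked = sorted(nominated, key=lambda r: r.exposure, reverse=True)
for r in ranked:
    print(f"{r.name}: exposure {r.exposure:.1f} person-weeks")
```

Note that a low-probability, high-cost risk (the architect leaving) can rank below a likelier, cheaper one; whether to act on it anyway is exactly the strategizing call Lister describes.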

Do you have any data on the effectiveness of risk management?

TIM LISTER: People always ask this. Getting solid data is just about impossible for two reasons. First, you never get to run the project more than once, so you can't really know whether you were effective or lucky. Some risks you identify become huge problems, but sometimes they just don't. Do you want to say, “Oh, my risk management prevented that,” or do you want to say, “Oh, it was just the roll of the dice that time around”? The other obstacle to amassing a solid body of data is that most companies don't want to share information on problems that hit their projects. I've tried to talk to people I thought did great jobs, and they don't even want to write a conference paper with me about their risk problems and the management of their successful projects. Their legal department jumps in and says, “We aren't going to talk about our dirty laundry.” There are, of course, exceptions. For example, there is Rockwell Collins out in Iowa, which has one of the best software risk management groups in the United States. In some of his papers, Art Gemmer, who has worked at Rockwell for years, discusses the numbers and cost savings resulting from their risk management. Rockwell Collins itself has been very open about what it has done and is doing through risk management. But these are exceptions. The metrics are few and far between.

How then would you characterize the current state of risk management practices throughout corporate IT organizations?

TIM LISTER: Sadly, I would say the vast majority of organizations are not practicing real risk management. A small minority do it very well. There are also some who say they do it, but what they do is identify risk and go back to business as usual. They may have a little step early in their process that says, “Identify risk, evaluate risk,” but there is no evidence they do anything with that information. In genuine risk management, you change something on a big project based on rigorous risk assessment. You change your development strategy, you change the sequence, the definition of the project, the schedule, the staffing, and you keep a detailed record and rationale of the decision making which led to such changes. This kind of risk management is rather rare.

Could you give us a list of the most common project risk factors?

TIM LISTER: The first one is technical risk. Most commonly this involves using a new product or tool for the first time. You know you're not an expert at it, and you're trying to figure out whether the tool does what it says it does or whether you'll have to find something else because you don't know how to use it. We also have to estimate the cost of the learning curve. This issue of technical novelty is very common in our business. I like to joke with my clients about the fact that once you get really good at something, you never use it again. We finally master a tool and boom – the world changes and there are new toolsets, new ideas and new messages that confront us. The second big risk category is a set of organizational risks. Schedule risk is one of the most common and serious of these. With almost all projects a deadline is set too early. From a risk point of view, you need to keep a sense of humor about this. Typically, an organization decides they have to release a product by, say, the second quarter of 2007 even though they have very little understanding of the process. This creates a whole family of risks around expectations. Then the parameters are set; for example, we think ten people should be able to do this in the next ten months. From a risk management point of view, the schedule and budget may be based on mere wishful thinking. A good risk manager discovers this as the definition of the project's work is revealed. He then calls for real estimates rather than guesswork. I think we get horrible problems in our business from childishly set deadlines with no credibility. We need to do reasonable estimations and then either shape the project from there or cancel it. Finally, there is the risk factor of communication. I see failures of communication all the time, particularly with the growth of multi-site projects over the last ten years – not only multi-site but multi-nation. I've got a project right now that's in Massachusetts, Ireland, and India, all at once!
You want to have one architecture, one design, one product, and you discover that there's a huge risk that these teams won't stay locked together. You have a lot of extra work to keep them coordinated. The problem is always in the interfaces. Each team thinks it is doing fine, but it may be out of whack with the others. Consequently, you have to do a lot of work to prevent large-scale problems at the end, when each team thinks it has its piece done and we discover that the pieces just do not hook together. I guess I'm getting old, but I feel it's nice occasionally to find a project where everyone's in the same building. Then you can talk to each other every day and brainstorm when problems arise. That this is so rare now adds enormous complexity and increased chances for communication risk.

Once potential project risk factors are identified on a general level, what comes next? What is the next step in the risk management process?

TIM LISTER: Coming out of identification you have a list of nominated risks. They are not managed yet, because we are not guaranteeing that we'll do something about them. This is just a list of potential problems. What we do now is sort them according to the individuals in the organization who have knowledge about these particular risks. For example, I might take the technical risks and sit down with the QA leader and some testers to get to a root cause analysis of them. Many people are great at giving you symptoms, but few are good at giving you root causes. For example, if you ask a typical development team, “Do you have any risks at this stage?” they'll probably say, “Sure, we could be late.” Being late, however, is not a risk. It is the outcome of one or more risks firing off and causing a schedule delay. There can be a number of reasons on a given project causing delays. Calling lateness a risk misses the whole point. If, however, you work backwards and, for each risk, ask how and why it could cause a delay, you are doing real analysis to get at the core risks. So, I classify the risks carefully and bring them to small groups of people who have experience with each particular category of risk. By getting at the core risks this way, I can get some handle on probability and impact. After assessing probability and impact, the next step is brainstorming about how we can make the risks disappear. If we had infinite money, what would we do to turn probability to zero? Or, if we don't, how can we soften the blow? After we come up with possible responses, it typically becomes the project manager's call. It's all about making decisions: what is our containment strategy when a problem appears? It's all about eliminating, or at least radically reducing, the likelihood of surprise. That's basically the flow of our work.

Must organizations have a metrics program solidly in place before they can expect to be successful with their risk management efforts?

TIM LISTER: Regarding quantification and metrics in risk management, it all depends on the history in individual companies. Too often they keep no history of past risk-to-problem transitions, so they can't go back and say, “We've seen this risk twenty times on projects and five times it caused a problem.” Then I can say, “Okay, it's a 25% probability.” I'm rarely this lucky, however, so we wind up using a “low, medium, high” scale to characterize risks and their probable costs. I think, however, that people exaggerate the need for quantitative data. Risk management is really a conversation about decisions. It would be great to have nice precise numbers, but the technological environment is always changing. The very fact that we really define and discuss the risks is ninety percent of the game. Problems on projects hardly ever result from esoteric risks or bad luck. They are usually foreseeable and preventable and, if they do occur, at least we can soften the blow. So getting the risk list down, discussed, and scrubbed is critical and valuable.
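Lister's two estimation modes – an empirical rate when history exists, a coarse low/medium/high scale when it doesn't – can be sketched in a few lines. The band cutoffs below are hypothetical, not from the interview:

```python
def probability_from_history(times_seen: int, times_fired: int) -> float:
    """Empirical risk-to-problem transition rate, as in Lister's
    example: seen 20 times, caused a problem 5 times -> 0.25."""
    if times_seen == 0:
        raise ValueError("no history: fall back to a low/medium/high judgment")
    return times_fired / times_seen

# Hypothetical cutoffs for the coarse scale used when no history exists.
BANDS = [(0.1, "low"), (0.4, "medium"), (1.0, "high")]

def band(p: float) -> str:
    # Map a numeric probability onto the coarse qualitative scale.
    for cutoff, label in BANDS:
        if p <= cutoff:
            return label
    return "high"

print(probability_from_history(20, 5))  # 0.25
print(band(0.25))                       # medium
```

The point of the coarse scale is exactly Lister's: when the data to justify a precise number doesn't exist, a shared qualitative label still supports the conversation about decisions.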

Must organizations have standard processes and tracking in place to be successful with risk management?

TIM LISTER: I think I'm going to surprise you. Practical tracking and planning is, of course, valuable. But the big issue is really cultural rather than quantitative or metric. There's not just Brownian motion going on out there. The fundamental issue is how well the people in your organization deal with straight talk. Too many organizations just have a hard time learning about what might go wrong before it does go wrong. In too many companies, it's dangerous to be frank and say, for example, “I think we are very unlikely to finish this new project in ten months.” People will say, “You haven't even tried, come on, give it a whirl; you're just trying to get out of work; you're whining.” There's a very strong “rah, rah, we can do it” attitude, especially in top management. If you can't say what you believe without recrimination, you have a big problem. I remember talking to one organization about risks and having them tell me, “You know, if you bring up a problem here, you own it.” This happens on major performance issues where the boss typically says, “You're absolutely right. I want you to handle that.” When that happens, no one is going to open his mouth. Instead he will think, “I'll act shocked and surprised when the problems hit, because I'm not going to be the person responsible for a performance miracle with an underpowered system.” So I think it's largely cultural. I am not a sociologist, but I think Americans have the hardest problem in the software industry with risk management. I think it is used more often and more effectively in Europe. Years ago I was in Finland and they were so good that I had nothing to say to them. They are much more frank about problems. It's the same in The Netherlands at Philips: the way they talk about their problems and make their decisions is very straightforward, dispassionate, realistic. I wish we could bring that to the early stage of our projects as well.
On the other hand, what makes America great is the way we do things because we don't know we can't do them.

If you feel that cultural issues outweigh process and metrics issues in this area, how would you characterize the role of tools? Do you believe it's possible to master risk management without a significant investment in tools?

TIM LISTER: You can get a huge benefit without much investment in tools if you are doing risk management as part and parcel of project management. As Tom DeMarco said in one of the books we co-wrote, “That other project's problem is your risk.” After all, they are right down the hall from you and they're doing the same kind of systems. So the problems they run into are your problems too, and these should be on your risk list. You can start to use tools really effectively only when you have created the habit in an organization of searching out the risks and pressures it continually faces. Most of the tools are not trackers, they're Monte Carlo simulators. These allow you to run thousands of trials and to see how things go. These simulators help you answer a lot of questions. For example, what counterintuitive effects appear when you put a bunch of risks together? Or, what is the most likely outcome? Or, what is the best you could do if you get really lucky, and what's the worst if everything went wrong? There are useful tools around, but you don't need to get into the cockpit of that plane until you have done some basic work first. I do, however, think that the process always needs tools. But you become dangerous if you don't know exactly how you want to use them. For instance, automated tools can make a big difference in testing, but you have to understand the overarching process to select the right tools. That's especially true in risk management. At the start of a project you don't need fancy tools. As you progress, though, and get more sophisticated, a simulator that can do a lot of “what ifs” makes a lot of sense. But don't open up your wallet on day one, that's for sure.
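The kind of Monte Carlo "what if" Lister describes can be sketched in a few dozen lines: sample each task's duration, let each risk fire with its estimated probability, and look at the distribution of total schedule across many trials. All the task durations and risk parameters below are made up for illustration:

```python
import random

random.seed(1)  # reproducible trials for this sketch

# Hypothetical plan: three tasks with (optimistic, likely, pessimistic)
# durations in weeks, modeled with a triangular distribution.
tasks = [(4, 6, 10), (3, 5, 12), (2, 4, 8)]

# Hypothetical risks: (probability of firing, delay in weeks if it fires).
risks = [(0.25, 6), (0.10, 10)]

def one_trial() -> float:
    # Sample every task duration, then let each risk fire or not.
    total = sum(random.triangular(lo, hi, mode) for lo, mode, hi in tasks)
    for p, delay in risks:
        if random.random() < p:
            total += delay
    return total

trials = sorted(one_trial() for _ in range(10_000))

# The questions a simulator helps answer: best case, most likely range,
# worst case, and the date you can commit to with, say, 80% confidence.
print(f"best:  {trials[0]:.1f} weeks")
print(f"p50:   {trials[len(trials) // 2]:.1f} weeks")
print(f"p80:   {trials[int(len(trials) * 0.8)]:.1f} weeks")
print(f"worst: {trials[-1]:.1f} weeks")
```

Even a toy run like this shows the counterintuitive effect Lister mentions: a schedule built from "likely" durations alone sits well below the 50th percentile once even low-probability risks are folded in.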

For people who understand and have experience with the practice of risk management, at what point does it become necessary to have such a tool, one that runs hundreds of thousands of simulations? For what kinds of projects would that be a critical necessity?

TIM LISTER: For the really big multi-year and multi-site projects, especially those long enough to span a technology cycle. The kind of tool you describe is especially useful, for example, if you have a project in 2006 which you hope to bring to market in 2011. People leading the project are making decisions based on 2006 technology and wondering what the technology is going to look like in five years. A question the leaders have to ask is whether it is acceptable for the new project to be mid-stream or trailing-stream rather than cutting edge when it goes to market. With a long project there are enormous risks, because the business and technology worlds are going to be changing under your feet and you are going to have to make some hard calls. That's when you really need the kind of tool you described. With a lengthy project, uncertainty obviously increases. There are two major kinds of uncertainty that you see in software. The first is the uncertainty of destination: you don't know where you are going and what you are really building. The second is the uncertainty of journey: you know what you want to build, but you're not really sure how you want to get there. If you are facing both kinds of uncertainty on a project, even a small one, you will really need the simulator. One of the things I see more and more clearly as a risk manager is that so much development work is really a response to very frequent risks in software development. Small iterations usually result from uncertainty of destination. Software designers tell each other, “Don't try to wrap this all up at once, because they are going to change their minds anyway.” On the other hand, if you are certain about destination, the usual methods may not be the most appropriate. When people expect to find a single “right way,” I just roll my eyes. As if life were so simple! All of the properties common to the methods actually in use are responses to standard risks and problems in software.

You have written a wonderful book on risk management with Tom DeMarco: Waltzing with Bears. For anybody who would like to learn about software risk management, is there any additional reading material you might be able to recommend?

TIM LISTER: He's a hard read, but I am a great fan of Robert Charette. His big book, Software Engineering Risk Analysis and Management, came out in the late eighties, but Charette is still around. Anything he has written is worth reading. He currently heads up the risk management group at Cutter Consortium in the Boston area. Even if you are not a subscriber, you can go online to their site and pull down articles by him under the “risk management group” tab. Another book I really like is Charles Perrow's Normal Accidents. He presents a critical view of high-risk technologies. He discusses all kinds of systems with built-in warnings and safeguards and argues that, looked at statistically, these really don't lower the risk of such complex systems. The statistical probability, according to Perrow, is that there will be failures anyway which the warnings can't prevent. He presents all kinds of examples which I find very useful in mapping projects. Steve McConnell has written some good stuff on software development and rapid development in one of his books. I can't remember the title, but a big section of it is very useful. Another valuable resource is the SEI. They had a risk management group that lost its federal funding – I guess the government doesn't believe it has risky projects. They were an excellent group which held international software conferences each year and published really valuable papers. They did a risk taxonomy which is very useful. It is basically a risk checklist with about two hundred questions to ask about your project at its start. It offers a way to jump-start the whole nomination process. And all of this is free.

Biography of Tim Lister

Tim Lister is a principal of the Atlantic Systems Guild, Inc. He is presently involved in assisting organizations with IT risk management and in tailoring methodologies and selecting tools for software development groups to increase project productivity and product reliability. He is also pursuing work on metrics for making the efforts of software projects more predictable. Mr. Lister is also a Fellow of the Cutter Business Technology Council, a member of the Leadership Group of Cutter Consortium's Enterprise Risk Management & Governance practice, and a Senior Consultant with Cutter's Business-IT Strategies, Agile Software Development & Project Management, and Sourcing and Vendor Relationships Practices. Mr. Lister and Tom DeMarco are coauthors of Peopleware: Productive Projects and Teams and the risk management book Waltzing with Bears: Managing Risk on Software Projects. They are also authors of the popular Achieving Best of Class seminar as well as the course and video sequence Controlling Software Projects: Management, Measurement, and Estimation. Mr. Lister also serves as a panelist for the American Arbitration Association, specializing in disputes involving software and software services. This interview between Tim Lister and Michael Milutis, Executive Director of the IT Metrics and Productivity Institute, took place in September 2006.
