
Using a Data-Driven System for Effective Capacity Planning

As Ryan Haveson writes for TechNet Magazine, there are “no medals handed out” for working extra hard once a deadline has already passed. But estimating work is difficult, and sometimes you simply do not have enough people to get the job done right and on time. Haveson offers a solution for better capacity planning.

Do, or Do Not

He first addresses what to do when he gives someone a mission-critical task and the person replies, “I’ll try.” To Haveson, “I’ll try” basically means “I’ll do my best, but if I screw it up, I warned you,” and he would rather people just tell him, “No.” He can plan around “no,” whether that means adding more people to the project or changing the task’s scope. “I’ll try” is too ambiguous to plan around, so Haveson recommends creating a culture where it is okay to say “no.” Under the right circumstances, saying “no” does not need to be a sign of weakness, especially because saying “yes” and failing will look much worse.

However, upper management is not always impressed by such a strategy, in which case Haveson says to use a data-driven method to confirm how much work you can actually do with current resources. He identifies three factors to consider:

  1. Track estimates versus actuals.
  2. Track incoming rate, fix rate, and backlog.
  3. Model the points of scale.

For that first point, Haveson elaborates:

If your team works in a model where you estimate work before starting (either in a waterfall or Agile model), then start tracking the cost estimates versus the cost actuals on a per-person or per-team basis. From there, you can create and publish tables of which teams estimate accurately and which do not. Work with your leads or team members who are still learning how to estimate to help them improve. In the meantime, at least you’ll know your error bands. That way, when you set a schedule, you’ll have some data with which to model how much buffer time to include.
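
To make that concrete, here is a minimal sketch in Python of what tracking estimates versus actuals might look like. The team names, task durations, and the buffer heuristic (mean error plus one standard deviation) are illustrative assumptions, not anything prescribed in Haveson’s article:

from statistics import mean, stdev

# (team, estimated_days, actual_days) for completed tasks -- made-up sample data
completed_tasks = [
    ("Platform", 5, 7),
    ("Platform", 3, 4),
    ("Platform", 8, 9),
    ("UI", 2, 2),
    ("UI", 5, 4),
    ("UI", 6, 7),
]

def error_band(tasks, team):
    """Mean and spread of actual/estimate ratios for one team's completed tasks."""
    ratios = [actual / estimate for t, estimate, actual in tasks if t == team]
    return mean(ratios), stdev(ratios)

for team in ("Platform", "UI"):
    avg, spread = error_band(completed_tasks, team)
    # Size the schedule buffer from the observed error band, e.g. mean + one sigma.
    buffer_factor = max(avg + spread - 1.0, 0.0)
    print(f"{team}: actuals run {avg:.2f}x estimates (spread ±{spread:.2f}); "
          f"suggested buffer ≈ {buffer_factor:.0%} of the estimate")

Publishing a table like this per team is what reveals who estimates accurately and whose error bands call for the most buffer.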

The second point is more or less self-explanatory: if new work arrives faster than the team can close it, the backlog grows. The final point concerns how team workload scales with the number of users; if you can find correlations in that scaling, you may be able to predict future workload and the resources required. A brief sketch of both ideas follows.
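
Here is a small sketch, again with made-up numbers, showing both ideas: projecting the backlog from incoming versus fix rates, and fitting a simple least-squares line of ticket volume against user count to estimate future workload. None of the figures come from the article:

# Point 2: the backlog grows whenever the incoming rate exceeds the fix rate.
incoming_per_week = 40      # new bugs/tickets arriving each week (assumed)
fixed_per_week = 34         # bugs/tickets the team closes each week (assumed)
backlog = 120               # current open items (assumed)

weeks = 8
projected = backlog + weeks * (incoming_per_week - fixed_per_week)
print(f"Projected backlog in {weeks} weeks: {projected} items")

# Point 3: model the points of scale. If ticket volume correlates with the
# number of users, a least-squares line lets you predict workload at a
# future user count.
users =   [10_000, 20_000, 40_000, 80_000]   # historical user counts (assumed)
tickets = [   150,    290,    610,  1_180]   # weekly tickets at those counts (assumed)

n = len(users)
mean_x = sum(users) / n
mean_y = sum(tickets) / n
slope = sum((x - mean_x) * (y - mean_y) for x, y in zip(users, tickets)) / \
        sum((x - mean_x) ** 2 for x in users)
intercept = mean_y - slope * mean_x

future_users = 120_000
print(f"Expected weekly tickets at {future_users:,} users: "
      f"{slope * future_users + intercept:.0f}")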

To learn more about all of the points highlighted here, you can read Haveson’s full article: http://technet.microsoft.com/en-us/magazine/dn198617.aspx

About John Friscia

John Friscia is the Editor of Computer Aid's Accelerating IT Success. He began working for Computer Aid, Inc. in 2013 and continues to provide graphic design support for AITS. He graduated summa cum laude from Shippensburg University with a B.A. in English.

