Managing project uncertainty usually comes down to a choice between two methods. The first relies on empirical evidence: hard statistics from past projects. The second draws on theory to arrive at predictions. At his blog Herding Cats, Glen Alleman explores the possibilities of both approaches.
Paradigm #1: Empirical Bootstrapping
With empirical data, it is possible to re-sample past assessments to create a future projection of the project outcome. This “bootstrapping” of the data resembles the theoretical approach called Monte Carlo simulation, but with some important differences.
With bootstrapping, you get the data you get. There is no control over the probability distribution function (PDF), nor is there a model of needed performance, just some information about the past. Moreover, the actual future project will almost certainly not match the projection, which is only a mirror of past performance. It may fall within past parameters, but in unpredictable ways.
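To make the idea concrete, here is a minimal sketch of bootstrapping in Python. The historical task durations are hypothetical, invented purely for illustration; the point is that the projection can only reshuffle the past data it is given.

```python
import random
import statistics

# Hypothetical task durations (in days) from a past project.
# With bootstrapping, "you get the data you get" -- there is no
# control over the underlying PDF, only this sample of the past.
past_durations = [3, 5, 2, 8, 4, 6, 3, 7]

def bootstrap_projection(history, n_tasks, n_resamples=10_000, seed=42):
    """Resample past durations with replacement to project the total
    duration of a future project consisting of n_tasks tasks."""
    rng = random.Random(seed)
    return [
        sum(rng.choice(history) for _ in range(n_tasks))
        for _ in range(n_resamples)
    ]

totals = bootstrap_projection(past_durations, n_tasks=10)
print("mean projected duration:", round(statistics.mean(totals), 1))
print("80th percentile:", sorted(totals)[int(0.8 * len(totals))])
```

Every projected total is a sum of values drawn from the historical sample, so the projection can never contain a duration the past did not already exhibit.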
Paradigm #2: Monte Carlo Simulation
By comparison, a Monte Carlo simulation (MCS) dispenses with the empirical harness: an algorithm draws random samples from chosen probability distributions, runs them through a model, and collects the results. This is a purely theoretical investigation, without any empirical content. MCS can function without past data, and this is its key advantage over bootstrapping. Alleman gets the final word:
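For contrast, here is a minimal Monte Carlo sketch in the same style. The task parameters are again hypothetical; what matters is that the PDFs are chosen and tunable rather than dictated by past data, which is the difference Alleman emphasizes.

```python
import random
import statistics

def monte_carlo_projection(tasks, n_trials=10_000, seed=42):
    """Sample each task's duration from a chosen triangular PDF
    (optimistic, most likely, pessimistic) and sum per trial.
    The PDFs can be tuned to model the performance needed."""
    rng = random.Random(seed)
    return [
        sum(rng.triangular(low, high, mode) for (low, mode, high) in tasks)
        for _ in range(n_trials)
    ]

# Hypothetical tasks as (optimistic, most likely, pessimistic) days.
# Unlike bootstrapping, nothing here depends on past observations.
tasks = [(2, 4, 9), (1, 3, 6), (5, 7, 14), (3, 5, 10)]
totals = monte_carlo_projection(tasks)
print("mean projected duration:", round(statistics.mean(totals), 1))
```

Because each distribution is an explicit modeling choice, the simulation can answer "what must performance be to meet the plan" rather than only "what did performance look like before."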
Bootstrapping can only show what the future will be like if it like the past, not what it must be like. In Bootstrapping this future MUST be like the past. In MCS we can tune the PDFs to show what performance has to be to manage to that plan. Bootstrapping is reporting yesterday’s weather as tomorrow’s weather – just like Steve Martin in LA Story. If tomorrow’s weather turns out not to be like yesterday’s weather, you gonna get wet.
Read the original post at: http://herdingcats.typepad.com/my_weblog/2015/05/monte-carlo-simulation-of-project-performance.html