Monte Carlo analysis has been around since the 1940s when it was used in the complex calculations required to build the atom bomb. Now it’s performed to gauge risk on construction projects, but if not used properly it can be a misleading tool. Faithful+Gould’s Mike Gladwin explains how to get it right
Construction is risky. We deal with prototypes in unfamiliar locations and conditions, where working relationships are new, as are the contracts. Levels of design can vary from ‘fag packet’ to ‘belt and braces’. Client knowledge also varies. So is it any wonder that construction projects often run over time and budget?
In 2005-2006, 55% of projects were completed late, 54% exceeded budget and 23% had defects that impacted on the client, the DTI found. Further, the client was not satisfied with the service they received from the construction industry on 21% of projects.
Construction, therefore, is a fertile ground for tools that can help predict and, ultimately, control project problems. Some of these tools are better than others and some, if incorrectly used, can be a hindrance rather than help. One such tool, Monte Carlo Analysis, is definitely powerful, but it can also be misleading and therefore dangerous if used incorrectly.
Be sensitive
Monte Carlo is best understood as a detailed sensitivity analysis. Imagine a scenario where everything on your project goes well: tenders come in low, the design is right first time, you don’t hit any problems and England win the World Cup. Unlikely, but it might happen. Then look at the flipside: everything goes wrong, tenders come in high and you encounter all sorts of problems on site.
Now, think of every conceivable scenario in between and consider how likely each is. This is the basic output needed from a risk analysis. If there are many more bad scenarios than good, then the project will probably go wrong if preventative steps are not taken.
To get the number of scenarios needed to make an analysis statistically robust would require a fantastic imagination and a lot of time. So, we turn to the computer. We cannot imagine every combination of factors for every possible scenario but we can identify the important factors in determining the possible scenarios. It is these factors that are used in risk modelling.
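To make that concrete, here is a minimal sketch of what the computer does. All of the cost figures and component names below are hypothetical, and real models use dedicated risk software rather than a hand-rolled script, but the mechanics are the same: each iteration draws a value for every factor from a three-point (low / most likely / high) range and sums them into one complete scenario.

```python
import random

random.seed(1)  # fixed seed so the illustration is repeatable

# Hypothetical cost components as (low, most likely, high) in £k
components = {
    "groundworks": (800, 1000, 1500),
    "structure":   (2000, 2400, 3200),
    "services":    (900, 1100, 1600),
}

N = 10_000
scenarios = []
for _ in range(N):
    # One scenario = one independent random draw for every component
    total = sum(random.triangular(lo, hi, mode)
                for lo, mode, hi in components.values())
    scenarios.append(total)

print(f"mean scenario cost: £{sum(scenarios) / N:,.0f}k")
```

Ten thousand iterations take a fraction of a second, which is how the computer supplies the imagination and time that a human analyst cannot.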
A change of scene
This brings us to the first important point. If you get the model wrong, you’ll get the scenarios wrong. Get the scenarios wrong and your results are worthless.
How do you know if you’ve got the model wrong? That’s where the golden rule comes in.
Every one of the thousands of scenarios that the model produces must be possible in real life.
For example, if your model could produce a scenario whereby the project misses the same ‘construction window’ twice due to two risks, when it would only conceivably happen once due to any combination of risks, then your model is flawed. An experienced modeller will spot this and design the model accordingly.
There are other factors to consider when modelling, such as the effect that the level of breakdown of an estimate – how much it is aggregated or disaggregated – has on the results. There is also the level of interrelation between the components of the model. This is called correlation and can have a huge impact on the results.
Risk analyses are based on establishing the likely outcome given a number of variables. You cannot use Monte Carlo to model just one or two risks – compare the chances of accurately predicting the outcome of flipping four coins with that of flipping 200. For the same reason, you can’t break your model into small components and still get reliable results.
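The coin-flip point can be checked in a few lines. The spread of the proportion of heads shrinks sharply as the number of coins grows, which is why a model with many risks gives a far more predictable total than one with only a few. The trial count below is arbitrary.

```python
import random
import statistics

random.seed(2)

def relative_spread(n_coins, trials=5000):
    # Proportion of heads per trial; the spread shrinks as n_coins grows
    props = [sum(random.random() < 0.5 for _ in range(n_coins)) / n_coins
             for _ in range(trials)]
    return statistics.stdev(props)

print(relative_spread(4))    # wide spread: four coins are hard to call
print(relative_spread(200))  # narrow spread: 200 coins cluster near 50%
```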
The results
The results are typically shown as a ‘cumulative probability curve’, usually an ‘S’ shape. It shows all of the values for each scenario and plots them against their percentile. This is called an output distribution.
On the example graph, we can draw a line vertically from the cost axis at the point of the base cost, excluding contingency. It crosses our ‘S’ curve at the 20% point. This means that in the analysis, 20% of the possible scenarios came to the base cost or less. In other words, 80% of the possible scenarios exceeded the base cost. Or, put another way, there is an 80% chance that the base cost will be exceeded. Now that’s useful information, so what are you going to do about it?
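Reading a point off the S-curve is simply a matter of counting scenarios. The sketch below uses an invented base cost and an invented triangular cost range, not the figures from the example graph, but the percentile arithmetic is exactly what the curve shows.

```python
import random

random.seed(3)

# Hypothetical simulated scenario costs (£k), standing in for model output
scenarios = sorted(random.triangular(4000, 6500, 4600) for _ in range(10_000))

base_cost = 4500  # illustrative base estimate, excluding contingency

# Fraction of scenarios at or below the base cost = its point on the S-curve
pct = sum(c <= base_cost for c in scenarios) / len(scenarios)
print(f"chance base cost is exceeded: {1 - pct:.0%}")
```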
The benefits
Targeted risk management
You should be able to prove or disprove the model by examination (if you’ve built it well). You’ll also be able to tell where the major risks and uncertainties are. You can then direct your efforts to reducing the impact of the most important risks.
Contingency definition
You might also want to add contingency, but how much?
Let’s go back to the graph. Draw a line from the 50% point on the vertical axis to the curve then down to the horizontal axis. The difference between this point and the base cost is the amount of contingency that must be added to improve our confidence from 20 to 50%.
At this point there is only a 50% chance of overspending. This is a good target for a project team, or contractual parties on a target price. However, it might not be enough for funding approval or business cases. After all, there’s still a 50% chance of overspending. You might want to draw another line at the 80% point, or at 90%. Now you’ve got a much lower chance of overspending, but a much larger contingency. Remember, the amount of contingency is dictated by the shape of the curve, the shape of the curve is dictated by the model, and the model is dictated by your risks.
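Deriving those contingency figures from the model output is a percentile lookup. The scenario distribution and base cost below are again illustrative assumptions, not real project figures.

```python
import random
import statistics

random.seed(4)

# Hypothetical model output (£k); in practice this comes from your risk model
scenarios = [random.triangular(4000, 6500, 4600) for _ in range(10_000)]
base_cost = 4500

# quantiles(n=100) returns the 1%..99% cut points; index 49 is P50, 79 is P80
cuts = statistics.quantiles(scenarios, n=100)
p50, p80 = cuts[49], cuts[79]

print(f"contingency for 50% confidence: £{p50 - base_cost:,.0f}k")
print(f"contingency for 80% confidence: £{p80 - base_cost:,.0f}k")
```

The jump from the P50 to the P80 contingency is what the steep upper half of the S-curve costs you: each extra point of confidence gets progressively more expensive.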
Contingency drawdown
You’re halfway through a project and have spent three quarters of your contingency. Have you got enough left? Should you be looking for more already or giving some back to the centre? Update and re-run the model to find out. It shouldn’t be too difficult because you are keeping your risk register up to date, aren’t you?
Option Comparison
Say you’ve got two options. Both appear to cost the same but are different concepts under different conditions. Is one more risky than the other? Run both models and compare the results.
Definition of a target price
A target price is often required under procurement options that have profit/loss sharing arrangements. The definition of the target price and the guaranteed maximum price is therefore crucially important.
By building a model, all parties to the agreement can see which risks are included or excluded and the valuation of those risks. The results of the model will therefore be transparent and agreement of a target price much simpler.
What next?
Now you’ve built the model, you can use it to optimise your project. What if another party, insurer or contractor, for example, takes on a risk? Naturally they’ll charge you for it. But will that charge be more or less than the ‘cost’ of keeping it? Run the model with and without the risk and see the effect.
If you spent money on mitigation measures, will it pay off? Again, run the model unmitigated and mitigated. Compare the change in output to the cost of the mitigation.
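A mitigated-versus-unmitigated comparison can be sketched the same way. The risks, probabilities, impacts and mitigation cost below are all invented for illustration; the point is only the mechanics of running the model twice and weighing the shift in expected cost against the money spent.

```python
import random
import statistics

random.seed(5)

def run_model(risks, trials=10_000):
    # Each risk is (probability of occurring, low impact, high impact) in £k
    totals = []
    for _ in range(trials):
        cost = sum(random.uniform(lo, hi)
                   for p, lo, hi in risks
                   if random.random() < p)
        totals.append(cost)
    return statistics.mean(totals)

# Hypothetical risks: ground conditions and a late design freeze
unmitigated = [(0.6, 100, 400), (0.4, 200, 600)]
mitigated   = [(0.3, 100, 400), (0.4, 100, 300)]  # after a site survey, say

saving = run_model(unmitigated) - run_model(mitigated)
mitigation_cost = 50  # assumed spend on the survey, £k

print(f"expected saving £{saving:.0f}k vs mitigation cost £{mitigation_cost}k")
```

If the expected saving comfortably exceeds the mitigation cost, the measure pays for itself; if not, the money may be better spent elsewhere or held as contingency.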
How do you influence stakeholders to recognise that some proactive action on their part will reduce the risk to the project and themselves? Run the model assuming timely decisions, approvals and so on, and compare this with the standard model. They’ll then see the impact of their influence in cash.
Lastly, you must be careful to ensure you have taken account of residual and secondary risks when running new scenarios. Residual risks are those that are left after some mitigation is undertaken and secondary risks are new risks that stem from the action you have taken.
Source
QS News
Postscript
Mike Gladwin is the UK head of risk and value management at Faithful+Gould. To read his views on the software used in Monte Carlo analysis, visit our website – qsnews.co.uk