I was sitting in my graduate-level statistics class when it hit me: Expected Value calculations could be used to solve my project contingency budget problem!
Earlier that same day, I had been sitting in a project review with senior management. The project team had identified a number of risks, and I had included a contingency budget task in the schedule based on that qualitative risk assessment. Management challenged the inclusion, saying the team was artificially inflating the estimates.
The project itself involved a lot of moving pieces: external vendors, timed equipment deliveries, and geographically dispersed personnel. The project ran nine months, ending in October. We had factored in things like vendor delays, employee sick time, and so on. I thought the risk analysis was a reasonable precaution. Management, however, said "Prove it!"
I Need Days, Not Rankings
Every risk assessment article I had seen at the time relied on a qualitative risk ranking (High, Medium, or Low). That kind of qualitative assessment couldn't prove that my amount of contingency was correct.
What I needed was a way to quantify the risk in days, so that I could create a task of X days to track the contingency. It was also the first time this organization had seen this type of risk analysis so the analysis needed to be effective but not overly complex.
Let’s Get Statistical
Back in my 3½ hour statistics class, we also reviewed the Pareto principle, which holds that 80% of your outcome impacts are likely the result of 20% of your events. We also discussed Expected Value calculations and the Normal Distribution, and how all of these techniques could be used together.
The Normal Distribution, which you may know as the "Bell Curve", occurs in many settings and charts the probability distribution for a given scenario. Every point on the curve has an associated Z-Score, which can be converted into a probability and used to produce an Expected Value result for a specific occurrence.
My epiphany was that by identifying a small number of project risks and calculating the Expected Value of each risk in days, I could use the sum of those Expected Values as the duration of my contingency task. A list of that size should be a large enough sample to cover most of the potential project variance.
Risks are a way of trying to quantify variance in your schedule. Each task's finish date can be thought of as a little bell curve, and the sum of those individual finish-date variances determines where your final project finish date lands.
The idea seemed to have merit. I did some reading to validate it and found that disaster recovery planners do a similar calculation when assessing risk. They also add an expiration date for a given event and a number of potential occurrences within a given time frame. Expiration dates are needed, for example, when you have a risk that a vendor will not deliver some equipment on time. Once the equipment is delivered, the risk expires and no longer applies.
Imagine my dilemma. How the heck am I going to explain this concept to my team without it sounding like a lot of work?
Another consideration is that most people don't think of probability in terms of a number. They use language like:
- Very Unlikely
- Unlikely
- Possible
- Likely
- Very Likely
You may be familiar with the word Sigma, as in Six Sigma. Sigma is a measure of variance around a mean: roughly 95% of all outcomes occur between -2 Sigma and +2 Sigma, and anything beyond +/- 2 Sigma is exceptionally unlikely. Each Sigma point has a corresponding Z-Score, from which you can derive the probability of an event occurring at that point on the curve.
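If you want to check the sigma coverage figures yourself, a minimal sketch using the standard normal cumulative distribution function (built here from Python's `math.erf`) looks like this:

```python
import math

def norm_cdf(z):
    # Cumulative probability of the standard normal distribution at Z-Score z.
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))

# Fraction of outcomes falling between -2 Sigma and +2 Sigma.
coverage = norm_cdf(2) - norm_cdf(-2)
print(f"{coverage:.1%}")  # about 95.4%
```

The same function gives the coverage for any sigma band, which is how the "exceptionally unlikely" tail beyond +/- 2 Sigma can be quantified.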
To make this user-friendly, I mapped the language terms above to the probabilities derived from the Z-Scores at the corresponding -2, -1, 0, 1, and 2 Sigma points.
My core calculation is [Expected impact in days if the risk occurs] * [Probability it will occur, derived from the Z-Score] * [Number of possible occurrences in the time period], assuming the expiration date has not passed. At the time I needed to capture this in a spreadsheet, and it had to be easily understandable to the user.
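The core calculation can be sketched in a few lines of Python. Note that the probability assigned to each likelihood label below is my illustrative assumption, not a figure from the original spreadsheet, and the function name is hypothetical:

```python
from datetime import date

# Hypothetical mapping of likelihood labels to probabilities; in practice
# these would come from whatever Z-Score mapping you choose.
LIKELIHOOD = {
    "Very Unlikely": 0.05,
    "Unlikely": 0.20,
    "Possible": 0.50,
    "Likely": 0.80,
    "Very Likely": 0.95,
}

def expected_days(impact_days, likelihood, occurrences=1,
                  expiration=None, today=None):
    """Expected schedule impact in days for a single risk."""
    today = today or date.today()
    if expiration is not None and today > expiration:
        return 0.0  # risk has expired and no longer contributes
    return impact_days * LIKELIHOOD[likelihood] * occurrences
```

For example, a five-day impact rated Possible contributes `expected_days(5, "Possible")`, which is 2.5 days of exposure.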
The resulting spreadsheet captured each risk, its impact measured in days, its expiration date, and a dropdown for the likelihood that the risk would occur. We formulated a risk list of ten items and found that our calculations added two days of exposure to our original contingency estimate. Ten items seemed like a sufficient sample without creating a lot of additional work.
For example, one of our vendors was in Miami and had a key deliverable in late August. I grew up on the Gulf Coast and knew this was peak hurricane season. If a hurricane hit the area, they would have a week of downtime.
Originally, we thought this was an unlikely event. One of the team members pointed out that the National Weather Service was predicting a higher-than-normal number of hurricanes for the season, so the team upgraded the risk rating to Possible. The risk was then documented as shown in the table below. We did this for each of the risks, and the sum of the values became the duration of the contingency task.
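Summing the risk register into a contingency duration can be sketched as follows. The register entries and probabilities here are illustrative assumptions (only the hurricane risk's five-day impact and Possible rating come from the story above):

```python
# Hypothetical risk register: (description, impact in days, likelihood).
risks = [
    ("Hurricane hits Miami vendor",  5, "Possible"),
    ("Key personnel out sick",       3, "Likely"),
    ("Equipment shipment delayed",   4, "Unlikely"),
]

# Illustrative probabilities for each likelihood label.
probability = {"Very Unlikely": 0.05, "Unlikely": 0.20,
               "Possible": 0.50, "Likely": 0.80, "Very Likely": 0.95}

# Contingency-task duration is the sum of each risk's expected impact.
contingency = sum(impact * probability[label]
                  for _, impact, label in risks)
print(round(contingency, 1))  # 5.7 days for this sample register
```

The hurricane risk alone contributes 5 * 0.5 = 2.5 days; each remaining line adds its own expected impact in the same way.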
The new analysis was introduced at the next management meeting. They were dubious, but they allowed us to use it on our project. As risks occurred, we tracked them and drew days from the contingency budget. We encountered a number of issues along the way, some anticipated and others not.
We ended the project only one day over our contingency budget date. Given that we had 28 days of contingency, management's reaction to a one-day slip was much more muted than it would have been to a 29-day slip. We also knew exactly why we had consumed those 28 days of contingency, which gave management confidence that the situation was being actively managed.
I've used this basic technique successfully on other projects, where we increased on-time delivery rates from 35% to 70%. It also gets your team into the right mindset: the analysis is reviewed at every status meeting, which keeps them thinking about how to address risks proactively.