
Schedule Risk Analysis: What Is It and Why Do It?

21 replies [Last post]
Emily Foster
User offline. Last seen 2 years 25 weeks ago. Offline
Joined: 19 Aug 2011
Posts: 625
Groups: None

Hi Guys,

Here's a good description of what Schedule Risk Analysis is and why you should do it http://bit.ly/wu4Jf2

We are going to follow up with a tool review next week.

I hope you find it useful.

Emily

Replies

Mike Testro
User offline. Last seen 36 weeks 11 hours ago. Offline
Joined: 14 Dec 2005
Posts: 4418

Hi Tony

And it will be the one that no one even thought was coming.

Best regards

Mike Testro

Tony Welsh
User offline. Last seen 8 years 17 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Yes, Dennis. I do not like the term much myself either. If you like, it is a less pessimistic version of Murphy's law: if lots of things can go wrong, at least one of them probably will!

Dennis Hanks
User offline. Last seen 8 years 32 weeks ago. Offline
Joined: 17 Apr 2007
Posts: 310

Tony;

If your point was that 'merge bias' exists, then I concede your point.  I wish we had a better term than 'merge bias' to convey to clients that complexity (multiple paths) is a major reason the deterministic date has such a low probability of being achieved, and not necessarily the 'riskiness' of the project (if that distinction/separation can be made).  'More things can go wrong, so they probably will go wrong.'

Tony Welsh
User offline. Last seen 8 years 17 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Dennis, for some reason I have only just become aware of your post of 12/2/2!  You say you fail to see my point, but in fact you are making the same point.  It is nothing new, just the fact of merge bias.  In your example, you say there is a 31% chance of finishing in 35 days.  Without taking account of merge bias -- for example, if one were to use the PERT method -- one would come up with a 50% chance of finishing in 35 days.  And if one used the expected durations in a deterministic CPM one would also come up with 35 days.  The triangular distribution makes the bias less pronounced than the uniform distribution, but it is still substantial.  And in a real network there may be multiple merge points and the bias builds up.
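To see how the bias builds up, here is an illustrative sketch (hand-rolled Python, not output from PRA or any other tool) reusing the 10/35/60-day triangular figures from this exchange. With fully independent, continuous sampling the two-path case comes out near 25% rather than the 31% reported above, presumably because of differences in sampling and whole-day rounding; the point is how quickly the probability shrinks as more paths merge:

```python
import random

def p_finish_by(n_paths, target=35.0, trials=50_000):
    """P(project finishes by `target` days) when n_paths independent parallel
    paths, each triangular(min=10, mode=35, max=60), merge at the finish."""
    hits = sum(
        max(random.triangular(10, 60, 35) for _ in range(n_paths)) <= target
        for _ in range(trials)
    )
    return hits / trials

for n in (1, 2, 4, 8):
    print(f"{n} merging path(s): P(finish within 35 days) ~ {p_finish_by(n):.0%}")
```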

Rafael Davila
User offline. Last seen 1 week 6 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241

In the following reference you can find an example of merge bias that shows how it is not merely a matter of adding averages, as in a cost estimate.

Of course, some merge points have several paths converging, and the opportunity for delay at such points is thus magnified. Clearly, path merge points can never be good news for a risk analysis. The increase in project risk at merge points, called the “merge bias,” is the subject of Case 2.

http://www.coepm.net/wp-content/uploads/Whitepapers/wp_scheduleriskanlysisA4_web.pdf

I am busy with an assignment but hope the example can be of help.

Good luck,

Rafael

Dennis Hanks
User offline. Last seen 8 years 32 weeks ago. Offline
Joined: 17 Apr 2007
Posts: 310

Tony;

If I understand what you are saying, we have a 6x6 matrix with each cell having an equal probability of occurrence.  That means that out of 36 possible outcomes, only once will we complete the project in 1 day, 3 times for 2 days, 5 for 3, 7 for 4, 9 for 5, and 11 times it will take 6 days; or a 2.8%, 8.3%, 13.9%, 19.4%, 25%, or 30.6% chance, respectively, of finishing the project in that number of days.
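(A quick enumeration confirms those percentages; the sketch below is illustrative Python, not output from any scheduling tool.)

```python
from collections import Counter

# Enumerate all 36 equally likely (die1, die2) outcomes; the project finishes
# when the longer of the two parallel one-die paths finishes, i.e. max(die1, die2).
counts = Counter(max(d1, d2) for d1 in range(1, 7) for d2 in range(1, 7))

for days in range(1, 7):
    print(f"{days} day(s): {counts[days]:2d} of 36 outcomes ({counts[days] / 36:.1%})")
```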

If I do this in PRA, I have two activities with an expected duration of 35 days (3.5 days, but set up for whole days, so I multiplied by 10), with a max of 60 days and a min of 10 days.  Assuming a triangular distribution (no longer an equal probability of occurrence), you get a 31% chance of completing the project in 35 days, a 90% chance in 52 days, a 50% chance in 40 days, but only a 10% chance of completing the project in 28 days or less [divide by 10 to compare with the matrix results].  SRA will give a 50% chance of finishing on or before 4 days vs. 44.4% for the matrix; this is due to the PDF used (triangular). [4 or 40 was the only whole number represented in both models.]  I fail to see your point.

I assert that we can arrive at a reasonable expected value by averaging a suitably large population.  How would you arrive at the expected value?

FYI: A nice example of merge bias.

Rafael Davila
User offline. Last seen 1 week 6 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241

Emily,

From a software reference - Because each solution in a risk analysis must at least be feasible, it should not violate any resource limitations that exist. Each iteration must be resource-leveled if some resource(s) is (are) limited. The risk analysis software package should be able to level resources as it is iterating.

Does the add-on include resource leveling? I have my doubts, because MS Project's resource leveling algorithm works by delaying activities using sticky constraints that are difficult to erase for every iteration, especially in older versions. Because of this peculiarity, Microsoft recommends not using its own resource leveling functionality but archaic manual leveling, which is done either by using soft logic or date constraints, and is even worse if you combine both.
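As a minimal sketch of why leveling inside every iteration matters (a toy example of my own, not tied to MS Project or to any add-on): two logically parallel tasks that share a single crew produce a very different finish distribution once the resource limit is respected in each iteration.

```python
import random

def p50_finish(levelled, trials=20_000):
    """Median project finish for two tasks, each triangular(8, 10, 20) days,
    that are logically parallel but share one crew."""
    finishes = []
    for _ in range(trials):
        a = random.triangular(8, 20, 10)
        b = random.triangular(8, 20, 10)
        if levelled:
            finishes.append(a + b)       # one crew: B must wait for A to release it
        else:
            finishes.append(max(a, b))   # ignores the resource limit entirely
    finishes.sort()
    return finishes[len(finishes) // 2]

print(f"P50 finish ignoring the resource limit:  {p50_finish(False):.1f} days")
print(f"P50 finish with per-iteration leveling: {p50_finish(True):.1f} days")
```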

Best Regards,

Rafael

Emily Foster
User offline. Last seen 2 years 25 weeks ago. Offline
Joined: 19 Aug 2011
Posts: 625
Groups: None

Hi Guys,

Good discussion here. As promised when I first posted, we've just completed a review of Full Monte which is a cost and schedule risk analysis add-on to Microsoft Project. You can read it here http://bit.ly/wnMkPo

I hope this is useful,

Emily

Tony Welsh
User offline. Last seen 8 years 17 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Dennis

 

First of all, sorry to have mixed you up with Ralph.  (Sorry to Ralph too!)

I of course agree that SRA is a major step forward, but I think you are still missing the point.  It is necessary to do MC simulation for SRA because the task durations in a project network are not typically just added together.  Some are in series, some in parallel, and the result is not susceptible to analytical solution.  And this is true even if we have very precise knowledge of the average duration.

For example, take two parallel paths of the same length, and assume that either can take any integer number of days from 1 to 6 with equal probability.  (A daft assumption, which allows me to simulate with two dice.)  The average duration of either path is 3.5 days (known theoretically and exactly), but the average time it takes to finish both tasks is about 4.5 days.  And the distribution is a triangle instead of being uniform.
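A rough simulation of that two-dice setup (a hedged sketch of my own, not output from any scheduling tool) reproduces the roughly 4.5-day average and shows how the combined finish is skewed toward the longer durations:

```python
import random

# Two parallel paths, each taking 1-6 whole days with equal probability;
# the project finishes when the slower path does.
trials = 100_000
finishes = [max(random.randint(1, 6), random.randint(1, 6)) for _ in range(trials)]

print("Average duration of a single path: 3.5 days (exact)")
print(f"Average time to finish both paths: {sum(finishes) / trials:.2f} days")
for d in range(1, 7):
    print(f"  P(project takes {d} day(s)) ~ {finishes.count(d) / trials:.1%}")
```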

 

Rafael Davila
User offline. Last seen 1 week 6 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241

Dennis,

I like the river example very much; it is perfect.

It reminds me of an issue on which I provide support to my wife in her work with transportation cycles of containers. For some orders the containers have the same specifications and the average matters, because one container equals another: one container that is late will be covered by one that is early. But with special orders each container is unique, and the average means only about a 50% probability of delivering on time, as the sample statistical analysis confirms.

This has been an issue impossible to teach to the sales force, which insists on promising not the average, which is wrong (just a 50% probability of meeting the target delivery if the distribution is uniform), but the earliest, which is worse.

The observed times last year ended up with a 60% cumulative probability at the average, a bit better than with a symmetrical distribution but still short of a safe delivery promise.


..... somewhere in the Tropics

The same goes for any job: the average duration yields too low a probability of success. You need the statistical distribution to make better decisions, and even an approximate distribution is better than just using the average. But do not forget that the distribution curve for the total job duration on a CPM is not the sum of the distribution curves of the activities on the deterministic critical path. That was a wrong assumption of the original PERT: it missed the effect of the changing critical path as some activities that are non-critical in the deterministic schedule become critical, shifting the total duration distribution curve to the right. This is the reason we need Monte Carlo instead of a simple equation that considers a single static critical path.
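A minimal sketch of that last point, with made-up numbers (not taken from any reference): path A is critical in the deterministic schedule, but the nominally non-critical path B overtakes it in many iterations, pushing the whole duration distribution to the right.

```python
import random

# Hypothetical two-path network. Most-likely durations: path A = 20 days
# (deterministically critical), path B = 18 days but with a longer pessimistic tail.
A = (18, 20, 24)   # (min, most likely, max)
B = (10, 18, 40)

trials = 50_000
total = 0.0
b_governs = 0
for _ in range(trials):
    a = random.triangular(A[0], A[2], A[1])
    b = random.triangular(B[0], B[2], B[1])
    total += max(a, b)
    b_governs += b > a

print("Deterministic duration (most-likely values): 20 days, critical path = A")
print(f"Mean simulated project duration: {total / trials:.1f} days")
print(f"Iterations where the 'non-critical' path B governs: {b_governs / trials:.0%}")
```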

Best regards,

Rafael

Dennis Hanks
User offline. Last seen 8 years 32 weeks ago. Offline
Joined: 17 Apr 2007
Posts: 310

Tony;

You addressed this to Ralph, but it was my comment.  

I think we have a different appreciation for averages.  If I want to walk across the river, then the maximum depth is more important than the average.  If I intend to swim/row across, then the average is irrelevant.  If I want to calculate the flow at a specific point, then the average depth is very important: Q=VA (where A is the average depth times the width of the river at the point in question).

In O&G, we have years of experience with rates of production - unit rates.  If I have 100,000 data points that say, on average, I can expect to spend X number of hours performing a specific task, then I can safely estimate that it will take me 100X to perform 100 of those specific tasks.  While it is true that a few may take 2X and many others take .9X, at the end of the sequence, I can expect to have spent 100X hours.  

If it takes me more or less, then I need to review my original unit rate for any adjustments/assumptions that I might have used - location, time of year, quality of craftsman, and make the appropriate modification.  This data then becomes part of the database (averages) used in subsequent estimates.  

I think this is sound methodology.  SRA tries to account for the 'quality' of the estimate assumptions/adjustments before the event.  While this may inject some 'bias', I think this is still sound methodology and a 'major' step forward.

Rafael Davila
User offline. Last seen 1 week 6 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241

I agree with Emily with regard to the importance of using Monte Carlo or other valid statistical methods.

I agree with Devaux with regard to DRAG; it has its place alongside statistical methods. In summary, the theory is that in order to increase the probability of success you must target an early date in the hope of finishing on time. But targeting an earlier date comes at a cost, and here DRAG is of value in combination with other tools. Usually, after using methods such as adding shifts, increasing crews, or changing logic, it is still not enough, and DRAG helps.

Definitely there is an element of uncertainty in lag values, and Devaux is correct that it would provide better modeling if Monte Carlo simulations included this uncertainty in lags. We have to live with the limits of our models; even with the issues regarding which probability distributions we should use, I would dare to say a probabilistic forecast is better than a deterministic one.

Best regards,

Rafael

Stephen Devaux
User offline. Last seen 2 weeks 10 hours ago. Offline
Joined: 23 Mar 2005
Posts: 668

Hi, Tony.

"Regarding DRAG, I had to Google it.  Seems it is your own invention."

Nope.  I didn't invent it.  Critical Path Drag was there (along with all types of float) on every project, both planned CP Drag and As Built CP Drag, back when the pharaohs were building the pyramids. Whether the pharaohs and others ever did any critical path analysis computations, I don't know -- but it was all there.

I named Drag (which I don't use as an acronym anymore), I figured out its huge importance to schedule optimization and recovery, its implications for cost, resourcing levels, and ROI, and ways to compute it -- but I certainly didn't invent it, any more than Vladimir Liberzon did when he programmed the computations into Spider Project. It's always been there, plain as the nose on one's face, and it's as amazing to me as to anyone else that no one had pointed it out (and no s/w computed it) before I came along.  (But then, I'm always amazed that no one had married my wife before I met her, too!)

There are discussions of CP Drag elsewhere in PP.  But here are four articles that might help, if you're interested:

The most recent is in the current issue of Defense AT&L Magazine:

"The Drag Efficient: The Missing Quantification on the Critical Path".

http://www.dau.mil/pubscats/ATL%20Docs/Jan_Feb_2012/Devaux.pdf

The other three are at ProjectsAtWork (one by William Duncan, author of the 1st edition of the PMBOK Guide):

"Scheduling Is A Drag":

http://www.projectsatwork.com/content/articles/246653.cfm

And two more by me:

"DRAG Racing on the Critical Path":

http://www.projectsatwork.com/content/articles/234282.cfm

"Paving the Critical Path":

http://www.projectsatwork.com/content/articles/234378.cfm

The last one is about how to use Drag computation to manage critical path resourcing and staffing, an area where a Monte Carlo System might add significant value.

The three ProjectsAtWork articles require registering on the site, but it's free.

"Regarding references on lognormal, there is one link and two references at http://barbecana.com/lognormal-distribution/

Great, I absolutely will read it, first chance I get.  I'm getting to the point in my university classes where I'll be teaching PERT and MC systems, so this will help inform what I say to my students.  Thanks, Tony.

Fraternally in project management,

Steve the Bajan

Tony Welsh
User offline. Last seen 8 years 17 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Not sure where to start in answering recent  comments.

Steve:

The issue of distributions on lags may be a valid reason for not using any existing MC system, but it would be easily fixed and is not a theoretical objection to MC.  (Some systems do allow % lags, which I think would address the issue at least wrt your example.)

Regarding references on lognormal, there is one link and two references at http://barbecana.com/lognormal-distribution/

Regarding DRAG, I had to Google it.  Seems it is your own invention.  Sounds like something which might be worth doing, though it does not seem to fit in particularly with MC or with risk analysis in general.

You mention risk registers and such as being useful, and they are.  I think this is as much as anything a terminology issue.  Quantitative risk analysis, which I believe can  only be done by MC simulation, is answering a different question than qualitative.  Quant is concerned really with general uncertainty whereas Qual is concerned with the “black swans.”

Ralph:

“My world (Oil & Gas) does benefit from an extensive history, so reliance upon unit rates (averages) is valid.  Other industries, I cannot say. “

This misses the point.  The reason that the average does not adequately represent reality is nothing to do with how well we can estimate the average.  It could be absolutely exact but it would still not tell us everything we need to know.  (An example from the Flaw of Averages is the statistician who drowned in a river whose average depth was only 3 feet.)  And sometimes, as in project networks, using averages introduces major bias. 

In general, this is because if F is a non-linear function of x, then there can be a big difference between F(average(x)) and average (F(x)).  Another example from flaw of averages is of a drunk walking down the center of the road, or trying to, and staying there on average.  So, F(average(x)) is “alive” whereas average(F(x)) is “dead”!
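Here is a minimal, hypothetical illustration of that F(average(x)) versus average(F(x)) gap in scheduling terms (my own numbers, not an example from the book): x is a task duration and F is a liquidated-damages clause that only bites once the task runs past a deadline.

```python
import random

# x = task duration, triangular(8, 10, 20) working days.
# F(x) = liquidated damages: $0 within a 14-day deadline, then $5,000 per day late.
def damages(duration, deadline=14, rate=5_000):
    return max(0.0, duration - deadline) * rate

durations = [random.triangular(8, 20, 10) for _ in range(50_000)]
avg_duration = sum(durations) / len(durations)

print(f"F(average(x)): damages at the average duration  = ${damages(avg_duration):,.0f}")
print(f"average(F(x)): expected damages over the spread = "
      f"${sum(map(damages, durations)) / len(durations):,.0f}")
```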

Stephen Devaux
User offline. Last seen 2 weeks 10 hours ago. Offline
Joined: 23 Mar 2005
Posts: 668

"Yes, idiots are there, but then so are poorly constructed plans/schedules which are far more problematic, at least for me."

Dennis, as I mentioned in the response to Tony, we are in violent agreement on this! And there is a high correlation between idiots and bad schedules.

There is certainly more than one way to flay a feline...

Fraternally in project management,

Steve the Bajan

Stephen Devaux
User offline. Last seen 2 weeks 10 hours ago. Offline
Joined: 23 Mar 2005
Posts: 668

Hi, Tony. 

Thanks for your response. Let me try to be a bit clearer -- I certainly don't resent anyone investing more time and effort into scheduling.  The state of the discipline is so pathetic that anything that improves it is to be lauded. I just don't think that Monte Carlo adds enough to justify a great deal of additional effort and cost.

Let me try to address your points:

"Steve, you point out that PERT utilizes a 3-point estimate, but the issue is what does it _do_ with these three estimates.  It uses the well-known formula to produce a single number which is meant to represent the expected value (though it is only an approximation), and then uses this value as if it were a deterministic one.  (snip) IMHO that makes a nonsense of PERT, which is why it was discredited ver shortly after it was first used in the 50’s."

Well, we're pretty much in agreement on traditional PERT.  I think it adds a tiny bit in cases where someone has only the sketchiest estimate for duration.  Since the mean of the three estimates will usually be greater than the mode, the (probabilistic) formula will produce a deterministic estimate that at least takes into account the sometimes-long "tail" of the pessimistic.  But no big deal -- back-of-the-napkin sanity checks.

"For the dangers of using averages as if they were all you need to know, I recommend Sam Savage’s book, “The Flaw of Averages,” though this is not especially concerned with project management."

I will order it.  I'll also recommend Nassim Nicholas Taleb's book The Black Swan.  Also not concerned with project management, but it has great relevance for the Trigen distribution.

"The problem of determining the correct distribution to use is a real one, but given that we have to make decisions in an uncertain world it seems to me that we have no choice but to try."

Indeed, we have to realize that we live in an uncertain world (and, admittedly, some fools don't realize it!), but as Taleb points out, it's not necessarily a Gaussian world -- and those tails of uncertainty can inflict painful wounds!  Yes, we need to manage uncertainty through contingencies and schedule reserves -- but I would much rather see effort being put into schedule compression and optimization (resulting in more profitable schedules and larger reserves) than believing that running a Monte Carlo system gives an answer that is any better than the quality of the data that were input.

"You are also correct to point out that there is a big difference between a triangular and a PERTbeta distribution with the same absolute end points, but there is very little difference between a triangular and a PERTbeta with the same standard deviation.  (The so-called Trigen distribution helps here, though it is really just a different way to specify a triangular distribution.)  There is quite a bit of empirical evidence in favor of the lognormal distribution."

I'd honestly like to read that evidence, if you can point me.  The traditional estimates suffer from variable definition: when one person says "pessimistic", they mean a number they might miss 20% of the time; when another person says "pessimistic", they mean a number they will probably still achieve if a nuclear war, a comet, and a gamma ray burst all arrive at the same time! The Trigen distribution helps define an estimating point, but at the cost of:

  1. In effect, two more estimates needed for each activity (the % chance of being below that number);
  2. Factoring out even the "little" black swans, i.e., the medium-sized but impactful tails on the Gaussian distribution.

The numbers I looked at seem to change the delta from the PERTbeta by about 6%, rather than the 10-15% of the triangular.  But since I (and most of the time the user) don't know what the distribution should be anyway, I don't know if that's good or bad.  Maybe it SHOULD be the triangular -- I'm just saying there's nothing precise about it, and a lump reserve estimate based on the overall riskiness of the project is a lot simpler and likely to be just as accurate. But I'm willing to be persuaded.

"I have not before come across the requirement to put distributions on lags, but there is no reason why it should not be done.  I will consider it for our next release.  (Yes, I have an axe to grind, being the developer of Full Monte for MSP, see www.Barbecana.com)   It certainly is not a theoretical reason to object to MC."

I have to disagree with that last sentence.  If the MC does not vary lags, it's a huge problem. Again, if X has estimates of 5, 12 & 25, and is an SS+8 predecessor of Y, Y's start date is the beginning of Day 9, whether X finishes on Day 5 or Day 25. That is a problem!  If X takes 5 days, Y should probably start about Day 3, and if X takes 25 days, Y probably shouldn't start until about the beginning of Day 17.  That's a huge difference.
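A minimal sketch of that difference (my own toy numbers, built on the example above and assuming X's planned duration is 12 days, so the SS+8 lag represents roughly two-thirds of X's scope): a fixed time lag ignores X's sampled duration, while a proportional "volume" lag tracks it.

```python
PLANNED_X_DURATION = 12   # assumed planned duration of X, in days
PLANNED_LAG = 8           # the SS+8 lag in the deterministic schedule

def y_start_fixed_time_lag(x_duration):
    # What a typical MC add-on assumes: the lag never moves.
    return PLANNED_LAG

def y_start_volume_lag(x_duration):
    # Lag treated as a share of X's scope, so it stretches or shrinks with X.
    return PLANNED_LAG * x_duration / PLANNED_X_DURATION

for x in (5, 12, 25):
    print(f"X takes {x:2d} days: fixed lag starts Y on day {y_start_fixed_time_lag(x):.0f}, "
          f"volume lag starts Y on day {y_start_volume_lag(x):.1f}")
```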

However, if you want to REALLY add something new and valuable to Full Monte, have it compute the probabilistic Critical Path Drag and Drag Cost.  Now THAT might really have value, showing the PM where to staff up. And I don't know of any MC that does that calculation.

"Finally, you say there is an attempt to make Schedule Risk Analysis synonymous with Monte Carlo simulation.  Since there is no other way to do schedule risk analysis, I would say that this is legitimate."

I don't agree that there is no other way to do schedule risk analysis.  A risk log with estimates of schedule impacts and reserves (both before merge points and at the end of the project) can have significant value, whether or not an MC system is used.

Fraternally in project management,

Steve the Bajan

Dennis Hanks
User offline. Last seen 8 years 32 weeks ago. Offline
Joined: 17 Apr 2007
Posts: 310

Steve;

As to the big step vs. small step, I still hold that we are far more capable of estimating eventual outcomes than we were just a few years ago.  Since nothing in our world is precise, from estimates to earned value, I am comfortable with the level of accuracy provided by Probabilistic Risk Assessment (PRA 8.7.3, to be exact).

As to ranging lags, not a big issue in my world since I/we try to avoid them.  I guess if I was really concerned, I would convert them to a 'dummy' activity and range that.

You are right, random is not strictly correct since the sample population is not randomly distributed in the cited distributions.

Yes, idiots are there, but then so are poorly constructed plans/schedules which are far more problematic, at least for me.

 

Tony;

My world (Oil & Gas) does benefit from an extensive history, so reliance upon unit rates (averages) is valid.  Other industries, I cannot say. 

As to BetaPERT vs. LogNormal, I relied upon the Pertmaster (PRA 8.7.3) summaries:

 

"The BetaPert distribution uses the same parameters as the Triangular (minimum, maximum and most likely duration) and is similar to the Triangular in shape but its extremes tail off more quickly than the triangular. Using the BetaPert distribution suggests a greater confidence in the most likely duration. It can be used with a wide range between the minimum and maximum durations as the probabilities of hitting the extremes is a lot less than if using the Triangular distribution. It has been found that the BetaPert distribution models many task durations very well. [emphasis added] If in doubt use Triangular as this is more pessimistic. If you are using BetaPert distributions then it may be worth considering a BetaPert Modified distribution. This distribution gives you more control over the shape of the distribution and allows you to be more or less optimistic about the most likely duration." "This (LogNormal)distribution is useful for modeling naturally occurring events that are the product of several other events. For example incubation periods of infectious diseases are often log normally distributed. The Log Normal distribution may be used where the distribution is defined by its mean but where the shape is essentially skewed. Values are truncated at zero." Would welcome further discussion on distribution shapes, especially as relates to modified BetaPERT. 
Tony Welsh
User offline. Last seen 8 years 17 weeks ago. Offline
Joined: 10 Oct 2011
Posts: 19
Groups: None

Steve, you point out that PERT utilizes a 3-point estimate, but the issue is what does it _do_ with these three estimates.  It uses the well-known formula to produce a single number which is meant to represent the expected value (though it is only an approximation), and then uses this value as if it were a deterministic one.  In the symmetrical case, it is actually no different from a deterministic CPM using the middle value.  The main problem, however, is that it still considers only one path, and so does not take account of merge bias.  IMHO that makes a nonsense of PERT, which is why it was discredited very shortly after it was first used in the 50’s.

For the dangers of using averages as if they were all you need to know, I recommend Sam Savage’s book, “The Flaw of Averages,” though this is not especially concerned with project management.

The problem of determining the correct distribution to use is a real one, but given that we have to make decisions in an uncertain world it seems to me that we have no choice but to try.  You are also correct to point out that there is a big difference between a triangular and a PERTbeta distribution with the same absolute end points, but there is very little difference between a triangular and a PERTbeta with the same standard deviation.  (The so-called Trigen distribution helps here, though it is really just a different way to specify a triangular distribution.)  There is quite a bit of empirical evidence in favor of the lognormal distribution.

I have not before come across the requirement to put distributions on lags, but there is no reason why it should not be done.  I will consider it for our next  release.  (Yes, I have an axe to grind, being the developer of Full Monte for MSP, see www.Barbecana.com)   It certainly is not a theoretical reason to object to MC. 

Finally, you say there is an attempt to make Schedule Risk Analysis synonymous with Monte Carlo simulation.  Since there is no other way to do schedule risk analysis, I would say that this is legitimate.

Stephen Devaux
User offline. Last seen 2 weeks 10 hours ago. Offline
Joined: 23 Mar 2005
Posts: 668

Hi, Dennis.  Some points:

"Steve;

PERT provides a deterministic value [(O+4R+P)/6]=D." 

Right, Dennis -- a value based on probabilistic inputs, one that weights the most likely (mode) estimate twice as heavily as the optimistic and pessimistic combined. That is probabilistic, producing a quasi-beta-shaped distribution.  Traditionally, the mean of those inputs was usually used as the estimate (resulting in an estimate that is usually slightly greater than the ML estimate).  But sometimes schedulers would decide that, for specific reasons, an estimate greater or less than the mean was more appropriate. Whatever the case, the resulting estimate was still the result of "probabilistic" inputs.

"Probalistic risk assessment (Monte Carlo) will randomly take a value from this range - O to P (PDF) each iteration, then group the results."

Not randomly.  It will select the values based on the user-selected distribution shape. And the vast majority of users, with little reason to use one shape over another, will select a default, either the Triangular distribution [mean (O+ML+P)/3] or the Beta distribution [mean (O+4ML+P)/6].

"Assigning a PDF can be relatively easy, if the underlying assumption is that most variability is in the unit rate of the individual crafts/disciplines (resource).  If this is true, then only one PDF needs to be determined for each resource and applied accordingly - this implies 100% correlation within the craft/discipline and single-resource activities. Selecting curve shape is a bit more subjective.  For me, there are three relevant shapes - betaPERT, triangle, and trigen representing pessimitic, realistic, and optimistic biases on the part of the PDF 'determiner'.  I run all three and present the results with my recommendation."

Which default is selected usually makes a difference of 10% to 15% in the 50% confidence duration.  That's a huge difference.  In a one-year project designed to save one life per week, the Drag Cost, in addition to money, is the 5-7 people who died in the extra time.  And other than a "gut feeling", there is little reason to choose one over another. And we're back to saying that the triangular seems more "right" than the beta for this project because the work is riskier.
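To put a rough number on that gap, here is a hedged sketch for a single, deliberately skewed three-point estimate (5/10/25 days, borrowed from the lag example elsewhere in this thread). The PERT-beta parameterization below (alpha = 1 + 4(ML-O)/(P-O), beta = 1 + 4(P-ML)/(P-O)) is one common convention, assumed here rather than taken from any particular tool, and the size of the gap depends on the skew of the estimate:

```python
import random

O, ML, P = 5, 10, 25      # one illustrative skewed three-point estimate, in days
N = 100_000

def p50(samples):
    s = sorted(samples)
    return s[len(s) // 2]

# Default 1: triangular distribution over (O, ML, P).
tri = [random.triangular(O, P, ML) for _ in range(N)]

# Default 2: PERT-beta over the same three points (lambda = 4 convention).
a = 1 + 4 * (ML - O) / (P - O)
b = 1 + 4 * (P - ML) / (P - O)
beta = [O + (P - O) * random.betavariate(a, b) for _ in range(N)]

t50, b50 = p50(tri), p50(beta)
print(f"P50 with the triangular default: {t50:.1f} days")
print(f"P50 with the PERT-beta default:  {b50:.1f} days")
print(f"Triangular runs ~{t50 / b50 - 1:.0%} longer at the 50% confidence level")
```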

I agree that estimating variability is easier in applications (construction, shutdowns, etc.) where there is reliable historical data and benchmarking, than in others (pharma development, defense systems, and new product development in general).  OTOH, despite persuasive efforts by companies marketing high-priced Monte Carlo products, I have never seen evidence that doing estimating at the activity level results in a more "accurate" schedule reserve estimate than by applying a risk factor (10% low risk, 20% medium risk, 30% high risk) for the entire project's schedule, plus further reserve based on (hugely important!) quantified risk/impact identification.

Also, none of this addresses the Monte Carlo treatment of lag.  As I posted before: "I have yet to see a system that works with MS Project that varies lags.  So an activity with duration estimates of 5, 10 and 25 that is an SS+8 predecessor of another activity will be assumed by the Monte Carlo system to be an SS+8 predecessor whether the activity is assumed to be 5, 10 or 25!  The lag of 8 is assumed to be a time lag and therefore fixed at 8, when most lags are volume lags (a distinction that MS Project doesn't recognize)." I'm not certain, but I believe that is true for Pertmaster, also.

"Probalistic schedule risk assessment is not exact/perfect, but it is a giant step in the right direction."

Dennis, I think we may have to agree to disagree.  I'd agree that it's a small step in the right direction -- for dealing with idiots (i.e., customers) who don't recognize that there can be considerable variation in schedule and thus a need for reserve.  People who are not idiots know this, but for the others (who may be a majority!), it helps to be able to say: "I ran this through an expensive risk analysis computer package and it said that, to have an 80% chance of finishing on time, we need an extra six weeks.  I mean, it came out of a computah, so it MUST be accurate!"


Fraternally in project management,

Steve the Bajan

Dennis Hanks
User offline. Last seen 8 years 32 weeks ago. Offline
Joined: 17 Apr 2007
Posts: 310

Emily;

Thanks.

Steve;

PERT provides a deterministic value [(O+4R+P)/6]=D.  Probabilistic risk assessment (Monte Carlo) will randomly take a value from this range, O to P (per the PDF), each iteration, then group the results.  Using PERT will only give one outcome, though there is greater confidence in the choice of the activity duration.
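As a hedged, worked illustration (my own example numbers, with a triangular PDF chosen purely for illustration): the single PERT value can be read back against the grouped Monte Carlo results to see what confidence level it actually represents.

```python
import random

O, R, P = 5, 10, 25               # illustrative optimistic / realistic / pessimistic, in days
D = (O + 4 * R + P) / 6           # the single deterministic PERT value
print(f"PERT deterministic value D = {D:.1f} days")

# Monte Carlo instead samples the O-to-P range every iteration and groups the results.
samples = sorted(random.triangular(O, P, R) for _ in range(50_000))
confidence_at_D = sum(s <= D for s in samples) / len(samples)

print(f"Under triangular sampling, D sits at roughly the {confidence_at_D:.0%} confidence level")
print(f"P50 of the grouped results: {samples[len(samples) // 2]:.1f} days")
```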

Assigning a PDF can be relatively easy, if the underlying assumption is that most variability is in the unit rate of the individual crafts/disciplines (resource).  If this is true, then only one PDF needs to be determined for each resource and applied accordingly - this implies 100% correlation within the craft/discipline and single-resource activities.

Selecting curve shape is a bit more subjective.  For me, there are three relevant shapes - betaPERT, triangle, and trigen - representing pessimistic, realistic, and optimistic biases on the part of the PDF 'determiner'.  I run all three and present the results with my recommendation.

Probabilistic schedule risk assessment is not exact/perfect, but it is a giant step in the right direction.

 

FYI my assumptions are based on mid-stream and downstream oil and gas projects.

Stephen Devaux
User offline. Last seen 2 weeks 10 hours ago. Offline
Joined: 23 Mar 2005
Posts: 668

Emily, I read the article.  Thanks.  But I should point out that the statement: "the single-point estimate of project completion made by deterministic... PERT" is not quite accurate.  Program Evaluation and Review Technique specifically codified the three-point estimating technique (optimistic, pessimistic and most likely, the basis of Monte Carlo scheduling systems) on the U.S. Navy's Polaris Missile Program back in 1958. 

Schedule Risk Analysis is undoubtedly very important.  Unfortunately, there seems to be an effort to make Schedule Risk Analysis synonymous with Monte Carlo Systems.  Whether such systems really add much at all (other than a "Garbage in, Gospel out!" cocksureness) is unclear. And whether what they add is worth the extra effort and expense is, in my opinion, highly dubious. Yes, schedule reserve, before project completion and sometimes before merge points, is essential. But whether Monte Carlo systems provide a more accurate estimate of how much reserve is needed is far from certain.

There have been several discussions of Monte Carlo Systems here on Planning Planet over the years.  This one, late last year, had some very knowledgeable input from Rafael, Vladimir, and others.

http://www.planningplanet.com/forums/schedule-risk-and-schedule-risk-ana...

Here is one of my contributions to that thread:

******************************

"The biggest problems with Monte Carlo systems are:

  1. I have yet to see a system that works with MS Project that varies lags.  So an activity with duration estimates of 5, 10 and 25 that is an SS+8 predecessor of another activity will be assumed by the Monte Carlo system to be an SS+8 predecessor whether the activity is assumed to be 5, 10 or 25!  The lag of 8 is assumed to be a time lag and therefore fixed at 8, when most lags are volume lags (a distinction that MS Project doesn't recognize). (I don't know if there are some software packages that have the ability to probabilistically vary lags -- I've never seen one.)
  2. It is VERY difficult to determine the distribution shape for each activity and input each one independently.  As a result, EVERY scheduler I've ever met (even though the functionality is there to select from a large menu of distributions, that's a LOT of work!) uses one of the "default" distributions (usually triangular, occasionally Beta).  A triangular distribution, with no other variance, will give a schedule that's 12-15% longer than a Beta at the 50% confidence level.  That's a HUGE difference.  Which answer is right, triangular or Beta?  I don't know, and nobody else does either!
  3. And notice, the above problems exist even if the three estimates are based in solid historical data for the exact context (climate, time of year, weather, dependability of suppliers and subcontractors, etc.) of THIS activity!  (And how often do we have that?)

In the West Indies, we have obeah men who will tell you how long your project will take. They're less expensive than Monte Carlo systems, and they'll also throw in for free a potion that'll make your next door neighbour fall in love with you."

*******************

 

Fraternally in project management,

Steve the Bajan