Project risk analysis software

duncan burns
User offline. Last seen 13 years 11 weeks ago. Offline
Joined: 29 Aug 2009
Posts: 16
Groups: None
Having looked into the areas of risk analysis, the construction industry and expertise, this site keeps cropping up as an authoritative leading voice.

I am currently studying for an MSc in Construction Project Management and am researching a survey of project risk analysis software, with a view to presenting their functionalities in tabular form.

I would hope to critically analyse the needs for project risk data and information within the construction industry and comment on the extent to which the software surveyed has addressed those needs - a relatively straightforward task in itself.

I would very much value any comments you may have on this subject, and in particular your specific views.

Also, which software products are considered the leaders with regard to risk analysis?

Many thanks in advance

Replies

Bhavinbhai Lakhani
User offline. Last seen 3 weeks 2 days ago. Offline
Joined: 24 May 2024
Posts: 31
Groups: None

Hello Duncan,

Studying project risk analysis software within the construction industry is indeed a pertinent area, especially for your MSc in Construction Project Management. Here are some insights and recommendations based on your queries:

Understanding Project Risk Analysis Software

Project risk analysis software plays a crucial role in the construction industry by helping project managers and stakeholders identify, assess, and mitigate risks effectively. These software solutions typically offer functionalities such as:

  1. Risk Identification: Tools to identify potential risks based on historical data, project specifics, and external factors.

  2. Risk Assessment: Methods for assessing the likelihood and impact of identified risks on project objectives.

  3. Risk Mitigation: Strategies and simulations to mitigate risks through contingency planning, resource allocation, and scheduling adjustments.

  4. Reporting and Visualization: Dashboards and reports that present risk data in a clear and actionable manner, aiding decision-making.
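
To make the probability-and-impact side of this concrete, here is a minimal Python sketch of how a simple risk register could be scored and ranked. It is an illustration only; the risk names, figures and field names are my own assumptions, not the data model of any of the products mentioned below.

# Minimal risk register with expected-value (probability x impact) scoring.
# All names, probabilities and impacts are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Risk:
    name: str
    probability: float   # likelihood of occurrence, 0.0 to 1.0
    impact_days: float   # estimated schedule impact if the risk occurs

    @property
    def exposure(self) -> float:
        return self.probability * self.impact_days   # expected schedule exposure in days

register = [
    Risk("Unforeseen ground conditions", 0.30, 20),
    Risk("Late design information", 0.50, 10),
    Risk("Crane breakdown", 0.10, 5),
]

# Rank risks so mitigation effort is directed at the largest exposures first.
for r in sorted(register, key=lambda r: r.exposure, reverse=True):
    print(f"{r.name}: exposure = {r.exposure:.1f} days")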

Leading Software Products

Several software products are recognized leaders in project risk analysis within the construction industry:

  1. Primavera Risk Analysis (formerly Pertmaster): Known for its comprehensive risk analysis capabilities integrated with project scheduling tools.

  2. @Risk: A Monte Carlo simulation software widely used for quantitative risk analysis, helping to assess uncertainties in project outcomes.

  3. Palisade DecisionTools Suite: Offers various tools, including @Risk for quantitative risk analysis and PrecisionTree for decision analysis.

  4. RiskAMP: Another Monte Carlo simulation tool that integrates with Microsoft Excel, allowing for detailed risk modeling.

 

Thanks,

Bhavin

Kritika Pandey
User offline. Last seen 6 years 25 weeks ago. Offline
Joined: 9 May 2018
Posts: 8
Groups: None

Project risk analysis is a process of defining and analyzing threats and opportunities affecting project schedules. Project risk analysis helps to determine how uncertainties in project tasks and resources affect project scope, deliverables, cost, duration, and other parameters. Project risk analysis also helps to rank project tasks and resources based on their risk exposure, calculate overall project risk exposure, and determine the efficiency of risk mitigation and response efforts. Project risk analysis can be quantitative or qualitative. Quantitative project risk analysis is a schedule and cost risk analysis performed using Monte Carlo simulations.
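
To illustrate the quantitative (Monte Carlo) side in the simplest possible terms, here is a rough Python sketch for three activities in series. The three-point durations, the triangular distribution and the iteration count are assumptions chosen for illustration, not output from any named tool.

# Rough Monte Carlo schedule simulation: three activities in series, each with a
# triangular (optimistic, most likely, pessimistic) duration. Figures are assumed.
import random

activities = [
    ("Excavation",     (8, 10, 15)),
    ("Foundations",    (18, 20, 30)),
    ("Superstructure", (35, 40, 60)),
]

def simulate_once() -> float:
    # Sample one duration per activity and add them up (purely serial logic).
    return sum(random.triangular(low, high, mode) for _, (low, mode, high) in activities)

results = sorted(simulate_once() for _ in range(5000))
p50 = results[len(results) // 2]
p80 = results[int(len(results) * 0.8)]
print(f"P50 duration: {p50:.1f} days, P80 duration: {p80:.1f} days")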

Project risk management is a process of identifying, analyzing, managing, and controlling risks affecting a project or portfolio of projects. Identified risks are stored in a risk register, which is a repository of project risks and their properties. Project risk management helps to determine what happens to risks during the course of a project, define risk mitigation and response plans, and track their execution. Project risk analysis is a part of the project risk management process. A formalized, integrated project risk management and risk analysis process helps to improve overall project management in an organization.

Evaristus Ujam
User offline. Last seen 12 years 16 weeks ago. Offline

Hi Duncan,

 

Risk analysis is not all about software. It is all about understanding what you are doing. Risk per se is defined as the probability that an anticipated outcome will not come true. In statistical circles it is the standard deviation of an array of expectations. In project management circles, our expected time of project completion has to be arrived at taking into consideration the views of the optimist, the pessimist and the middle-of-the-road man. I suggest you model a project and have experts on PP help you test the efficacy of such a model in tracking how far each element of your array of assumptions departs from the expected. This in itself will be software that can be compared with any of the existing products. Yours may even be the better software, given that the domain experts available to you on PP might just be the best you can ever lay hands on.

Wishing you best of Luck.

 

Ujam

Mai Tawfeq
User offline. Last seen 9 years 40 weeks ago. Offline
Joined: 4 Mar 2010
Posts: 96

Hi Dears:

 

For all members interested in such risk analysis tools, Oracle provides an amazing tool to re-plan and achieve project goals, whether in cost or time (quality being covered in the project specifications). It is called Oracle Primavera Risk Analysis; this tool can be used in conjunction with MSP, P3 and P6, and can easily be downloaded from the official Oracle website.

 

Mai  

Barry Fullarton
User offline. Last seen 29 weeks 1 day ago. Offline
Joined: 6 Jan 2009
Posts: 44
Groups: None

Who is able to assist with a training programme for this so that I can learn the basics? Does anyone have training manuals for it? Best regards

Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Barry Fullarton
User offline. Last seen 29 weeks 1 day ago. Offline
Joined: 6 Jan 2009
Posts: 44
Groups: None
Guys, this is all very interesting stuff, I must admit.

However, I hope someone can assist me with this.
I have the old P3 Monte Carlo, but I work in P6 and don't have Risk Analyser. I would like to produce P70 or risk schedules at the higher levels.

We do mainly higher-level schedules rather than the detailed contractors' schedules. We do, however, bring them into the project schedules as we progress.

Now, two things:
1. I want to run a risk analysis on durations and cost on the project schedule.
2. And then run a risk schedule on the assessment of the contractor's schedule, prior to award and when incorporating it into our schedules.

I need to know how to operate the Monte Carlo software. Who has a training module for it so that I can learn the basics and work from there?

I understand that one identifies the activities that carry the higher risk, and that these should be categorised and then analysed as risks against a risk register?
Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Daniel,

In the absence of Spider SPDM I can go with Gary's approach. Maybe neither is perfect, because the source data is not perfect, as we have seen, but both can provide meaningful statistical approximations and a methodology to determine an appropriate level of buffer, better than Critical Chain.

The first time I read about Critical Chain, the author mentioned 50% of the project duration as the recommended buffer amount. My reaction was that the author was "nuts", and as of today it is still unchanged. Critical Chain does not provide you with a methodology to determine the appropriate level of buffer prior to project start, nor as changed conditions occur while the project moves through time.

The other drawback of the procedure is that intermediate buffers artificially delay activities once some float is consumed and the buffers come into play. This delay is no good for managing your projects; you need to see your schedule without the assumption of these delays, because when they do not occur, activities will be able to start earlier.

http://www.pmforum.org/library/papers/2008/PDFs/Success_Prob_SDPM_Brazil...

“Critical Chain is the only single sequence that shall not change during project execution. So other paths that enter CC shall include “feeding buffers” that defend CC from the potential change. These feeding buffers may postpone planned dates of CC activities creating the time holes in CC .”

Sorry, but I do not buy the Critical Chain theory except for a single final buffer, which, even when not as good as Spider Project SPDM, is better than nothing.

I was hoping to be able to use Spider Project, which allows you to keep three project versions: an optimistic, a most probable and a pessimistic one. The three versions are synchronized, and it is recommended to use the optimistic version for management and the most probable for the contract or the Owner. Because all versions are synchronized, by updating one version the remaining two are automatically updated; it is up to you which version you use for updating, depending on which side of the bed you woke up on.

All versions have the same activities with minor variations, just different values for activity and lag durations, the optimistic using lower durations and calendars with no rain days. All would be transparent, with some probabilistic measures and trend values provided. And once again, yes, we all know these are approximate values, but not even the mean is deterministic; an approximation is good, very good.

However, from what I have seen on this forum, the mere mention of statistical analysis, even when simplified and of practical application, scares many; it is too much for their minds.

Best Regards,
Rafael
Daniel Limson
User offline. Last seen 5 years 18 weeks ago. Offline
Joined: 13 Oct 2001
Posts: 318
Groups: None
Rafael,

I can understand your frustration, especially if you are on the contractor's side, having to manage and update two versions of the programme for the same project, one internal and one for the client. I also understand the need for it and why they do it.

The NEC3 type of contract (which originated in the UK), probably the one you are referring to, encourages collaboration and transparency between the Owner, the Project Manager and the Contractor. Programme-wise, you are required to evaluate each activity and add an allowance for unforeseen events, much like a buffer, and they also give incentives for early completion. Normally this type of contract is cost plus a fee. Like any other contract, it probably has its strengths and weaknesses. I do not know much about it, as it is still at an infancy level here; I need to dig further. It is currently being trialled here on one Government contract.

Best regards,

Daniel
Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Daniel,

I am not surprised you do not understand why a buffer means a reduction in contract time; neither do I, when providing some buffer should arguably even be mandatory. But here it is a common contractual requirement for the CPM schedule to use all available time, and if an early finish is predicted, it is interpreted as a reduction in contract time should you insist on your schedule projecting an early finish date. Maybe this has to do with our legal interpretation of what a contractual baseline means.

I never had the opportunity to use a buffer and name it "contractor's buffer", but I believe it would be interpreted as a way to circumvent the contractual requirement for the schedule to use all the contractual time. It would also be interpreted as what they call a "float suppression" technique.

The following is a direct quote from a federal job specification.

“Float is the amount of time between the early start date and the late start date, or between the early finish date and the late finish date, of any activity in the project schedule. Total float is defined as the amount of time any given activity or path of activities may be delayed before it will affect the project completion time. Float is not for the exclusive use or benefit of either the Government or the Contractor, but must be used in the best interest of completing the project on time. Extensions of time for performance pertaining to equitable time adjustment will be granted only to the extent that the equitable time adjustment exceeds total float in the activity or path of activities affected by the change.”

A buffer activity is a float reserve for the exclusive use of the Contractor, which is in contradiction with the above statement. Onerous and one-sided: a take-it-or-leave-it attitude.

From the following link:

http://www.cohenseglias.com/government-contracts.php?action=view&id=128

“Should a contractor submit a schedule which shows that he will complete in a time frame less than that specified in the contract documents, it is not uncommon for objections to be raised by the Government representative. Unfortunately, all too often, the Government representative will tell the Contractor that he must submit a schedule showing the full contract time.”

Up to now, our experience has been that the Government representative always required us to submit a schedule showing the full contract time, with not a single exception on record.

Kind of stupid, but this is how it works here. We have no option other than to lie, submit our reserve hidden in some way, and use a separate schedule to manage the job.

For your knowledge and information, Federal Government contracts are issued in all 50 states and the territories. We are a US territory, and I have worked on jobs for the Federal Government with such clauses, among them a hangar repair job at Roosevelt Roads Naval Base (since closed); there we had such a clause, and the CPM software was specified by first name, "Primavera", and also by last name, "SureTrak", so this is not hearsay but actual experience. Recently we quoted a remodeling job at the federal courthouse; the specifications called for Primavera software, again, as usual, against the federal procurement regulations - ironically on a job for the custodians of the law, a Federal Court.

Best Regards,
Rafael
Daniel Limson
User offline. Last seen 5 years 18 weeks ago. Offline
Joined: 13 Oct 2001
Posts: 318
Groups: None
Rafael,

From what I understand (please correct me if I am wrong), when a contractor submits a tender for a certain project with specified key dates and/or completion dates with LDs attached, a good contractor, under normal circumstances (unless it is a different type of contract), would analyse and identify all potential contractor's risks (and opportunities), price them, and include a contingency cost in his total tender amount to cover such eventualities.

If you agree with the above, then the contractor has already covered his ass for such eventualities (excuse the language).

I do not understand exactly what kind of contract you have over there in your state where a buffer means a reduction of contract time. Please clarify. If I were the Owner's Project Manager or Engineer, I would give you all the buffer you want as long as you finish the project on the contractually specified key dates.
Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Stephen,

How do you manage risk under State and Federal Government Contracts?
..... Are they allowing for Contractor’s Buffer?
..... Do they allow defining project duration based on a fixed success probability amount? Say 85%.
..... Is the Federal Government still insisting on brand-name specification of your CPM software, contrary to the FAR regulations? What happened to the Standard Data Exchange Format specification? Why force contractors to use P6? If they want to use P6 they can use the standard available for P6 and allow the Contractor to use their own software, as long as the schedule is submitted in SDEF format.

http://140.194.76.129/publications/eng-regs/er1-1-11/entire.pdf

http://demo.evanstech.com/Docs/P6_v6.1/English/Technical%20Documentation...

It is well known that project duration is not a deterministic chain of events; not even lag is deterministic, as most forensic claims analysts pretend. But our experience has been that our State and Federal Governments conveniently deny this: they insist on not allowing us to identify an early finish as a buffer, instead taking it to mean a reduction in contract time. I perceive the PMI as sold to the big commercial and government interests, not daring to take the controversial and honest side. A simplified deterministic approach would be a good alternative, but we are denied this option. Our experience has been contrary to the English experience, where protocols and standardized contract forms allow and encourage the use of buffers; perhaps we should learn a little from their experience. The Russians seem perhaps ahead in the use of statistical methods, as in their major CPM software (Spider Project) these functionalities are within the same application, using simplified methods, and they highlight that it is an approximation - no need to hide it.

For the moment we have not been able to display buffers, and have no option other than to present a schedule different from our true plans, contrary to the stupid requirement for it to be a true representation of our plans. Yes, it is a stupid requirement, because we are prevented by other clauses from complying with it - a well-known secret to everyone in the industry; unfortunately nobody has ever dared to "hold the bull by the horns".

Do not take me wrong: ours is a democratic nation where we can throw some dirty water at our institutions to keep democracy alive; in order to keep it that way we must be vigilant and proactive.

Best Regards,
Rafael
Stephen Devaux
User offline. Last seen 2 weeks 55 min ago. Offline
Joined: 23 Mar 2005
Posts: 668
Wow, I guess I stirred up a bit of an ant’s nest! (But that’s good for a discussion forum, right?)

Actually, there have been a lot of people asking good questions and giving good answers. Gary, I’ll go through your list of points and try to answer: In some cases, I may just quote others, who have done a better job of answering the question than I could.

One quick clarification: I may have given the impression that I don't believe in risk factor identification, analysis, management and tracking. I absolutely do - the management reserve numbers I suggested would include that data, as well as incorporate additional reserve for the "unknown unknowns" (the number and implications of which are project- and management-style-specific). My reservations are limited to the value of risk simulation (MC) s/w and three-point estimating.

I’d like to add that MC s/w has experienced huge market expansion in the past two decades, mainly because there are companies making a lot of money from advertising and selling it as some sort of panacea. It’s not. No one is making any money by pointing out the shortcomings of the approach/software, so no one has any vested interest in pointing out that the king has no clothes.

***
I’d written: "There is no evidence that any three-point estimating method should give you better risk control than you’d get from (a) a thorough risk identification and analysis process with deterministic estimates and project-level schedule reserve of 10% for low-risk projects, 20% for medium risk, 30% for high risk, and 40-50% for very high risks. (And sure, 50% won’t always be enough – but you’re not going to get a higher estimate using a 2 SD reserve above the mean of the three-point estimates, either!)"

Gary wrote: "1. Really? No reason to suppose a quantitative approach where you define the time risk associated with each activity will yield more accurate results than a quantitative approach where you decide at the project level it’s high risk so 30% schedule reserve is appropriate? If a detailed bottom-up approach to estimating time and cost yields better results than a bottom-down, why would the same not be true of risk?"

Gary, no, I don’t think it leads to any better results, considering what goes into the approach:

1. Getting three-point estimates means either generating them from the activity managers, or referring to historical data or a commercial database. All this requires costly work.

2. All these sources imply great variation in the determination of what constitutes similar work, cherry-picking of data, and/or estimator personality. (Some estimators, when asked to give a pessimistic estimate, will assume some of the workers might be a little less productive than the norm; others will assume that a dinosaur may attack the worksite on the same day that a comet full of Martian microbes in Government-auditor uniforms hits it! I honestly know of no way to reliably build in "offsets" for these "personality" variations.)

3. Perhaps worst of all, whatever estimates are generated then often take on a life of their own. If you ask an activity manager for optimistic, pessimistic and most likely estimates, and he says the 10th if everything goes perfectly, the 15th the way things normally go around here, and the 30th if things get really bad, what does he regard as his "drop dead" date? Parkinson's Law and Goldratt's Student Syndrome take over: "Work expands to fill the time available," and the student leaves the assignment till the last minute.
I believe in teaching "estimators" to develop one estimate (which I'll call a 50-50 estimate, even though I'm aware that such a thing may not exist in nature!). If there is to be a second estimate, it would be a resource-elasticity, doubled-resource estimated duration (or DRED) estimate: how long if you had double the resources? (Reasonable answers might be about 50%, 60%, 80%, 95%, 100% [not at all resource elastic] or 200% [workers will get in each other's way].) Then I would use DRAG, DRAG cost and DRED estimates to work with the activity managers and develop a schedule that they will try to achieve because they understand the implications of slipping.

Gary wrote: "I’m just starting to use this, but my intention is to group the project by similar (from a risk point of view) activities, and assign different shapes for each. Probably triangular for most types, but obviously the 3 point spreads will differ."

Gary, if you feel that all this is worthwhile, and that you have a good sense for what the real distribution shape will be, go to it. But please, do not underestimate how important the distribution shape is to what the algorithm generates. Again, run a project on the triangle default and then run it again on the Beta. You’ll see a large difference! Then try it with all activities on the two most extreme optional distributions! Triangular may justify for you a large reserve, but remember, there is a cost to that reserve. (And I’m not sure that many triangles exist in nature, either!)
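
For anyone who wants to see how much the default shape matters, a quick sketch along these lines can be run in a few minutes. The three-point figures below are invented, and the beta-PERT parameterisation used is one common convention, not the default of any particular package.

# Compare the ~84th percentile (roughly 1 SD above the mean) of a triangular
# distribution against a beta-PERT distribution for the same three-point estimate.
# The figures are invented and the PERT shape (lambda = 4) is a common convention.
import random

O, M, P = 10, 12, 25   # optimistic, most likely, pessimistic (assumed)

def pert_sample() -> float:
    a = 1 + 4 * (M - O) / (P - O)
    b = 1 + 4 * (P - M) / (P - O)
    return O + (P - O) * random.betavariate(a, b)

n = 20000
tri = sorted(random.triangular(O, P, M) for _ in range(n))
pert = sorted(pert_sample() for _ in range(n))
idx = int(n * 0.84)
print(f"P84 triangular: {tri[idx]:.1f} days   P84 beta-PERT: {pert[idx]:.1f} days")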

Gary wrote: "2. Good point about lags. Luckily I try and avoid using them, so shouldn’t be too much of an issue for me."

The distributors/salespeople for the software don't tell you about the failure to adjust lags, do they? I certainly am a "cautious" believer in the "Testro all-FS-with-no-lags approach". I say cautious because, while the complex dependencies and lags throw in unfortunate complexities (like using them with MC s/w!), I know that many people, if told not to use SS's etc., will simply do everything serially WITHOUT doing the necessary decomposition that MUST accompany an optimized all-FS network. My experience is, if I tell people not to use SS and lags, I wind up with a longer schedule! And that's very bad! (Remember, the last thing most people do is decompose!)

Gary wrote: "3. If you like the fragnets but don’t like the 3 point estimates, why not use the fragnets and not the 3 point estimates? What extra work do you have to do to get the benefit of fragnets?"

You’re right – fragnets have to be developed anyway: for rework, emergent work, etc. But I do think that running the MC with a probabilistic fragnet (20%), so that the s/w includes it in 1,000 of every 5,000 simulations, adds something that I couldn’t possibly do in my head. A little something, perhaps, but something nevertheless.

Gary wrote: "Final. Why is it not scientific? Or as scientific as scheduling ever is? I must confess I don’t understand the algorithms used at anything other than a basic level, but what’s to understand: It’s a pretty basic iterative process, no?"

I don't think I could put it any better than Vladimir did when he wrote: "I just want to warn that the results in any case are not accurate. You will be able to estimate the necessary contingency reserves, but keep managing risks during the project life cycle. Initial estimates will change and you shall be ready for this. Simulation accuracy is a myth."

But unfortunately (or perhaps fortunately, depending on your POV), too many people take it as gospel! It’s a great way of convincing a customer that you need more reserve, or you can’t really pull in a date, because “the risk software says so!” All your other risk analysis, no matter how extensive, will pale in comparison to the software as a method of convincing a customer, who usually doesn’t know grit from granola about project management.

Gary wrote: "If you just say a project is high risk so give it 30% schedule reserve, then any mitigation action you could do to reduce the risk of delaying anything will seem quite pointless unless it has such a massive effect that it shifts the entire project down from high risk to medium. Or am I misunderstanding your approach?"

If you're misunderstanding, it's my fault for being unclear, not yours. The 30% (or whatever) reserve would come after the risk factor identification and the risk management plan. It would incorporate the identified risks, their likelihoods, and any mitigation plans, as well as the "unknown unknowns" that we know are still out there (and the profundity of which varies by project and also needs to be estimated). So it may be that a project which would have been estimated as high risk and needing 30% reserve is never seen as such, since we wouldn't make that estimate until we have done extensive risk management and reduced the risk to the point where it's estimated to be only medium, and deserving of 20% reserve.
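
As a back-of-the-envelope illustration of this deterministic reserve approach (the percentages simply restate the rule of thumb quoted earlier in this thread; the 200-day duration and the function name are my own assumptions):

# Project-level schedule reserve from a deterministic duration and a risk rating,
# applied only after risk identification and mitigation planning are done.
RESERVE_FRACTION = {"low": 0.10, "medium": 0.20, "high": 0.30, "very high": 0.45}

def schedule_with_reserve(deterministic_days: float, risk_level: str) -> float:
    return deterministic_days * (1 + RESERVE_FRACTION[risk_level])

print(schedule_with_reserve(200, "high"))     # 260.0: 200 days plus a 30% reserve
print(schedule_with_reserve(200, "medium"))   # 240.0: after mitigation, 20% may suffice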

Again, from my point of view, three-point estimates and MC simulations add little to the risk management process other than to give it the “stink” of science. But that science is, if I may paraphrase the first George Bush about the Laffer Curve, “voodoo project management”, and not worth the time and cost.

Or maybe, like Daniel, I’m just an old dog.

Fraternally in project management,

Steve the Bajan
Daniel,
I think that, using common sense, wise project managers shall create contingency reserves for project schedules and budgets.
Risk simulation software helps to estimate the necessary reserves (as I warned, these estimates are not precise).
And managing project risks actually means using opportunities and struggling against threats so as to keep some reserves until the project finish.
This is common sense that we use applying Success Driven Project Management methodology (http://www.spiderproject.ru/library/SDPM_Canberra2004.pdf).

Best Regards,
Vladimir
Daniel Limson
User offline. Last seen 5 years 18 weeks ago. Offline
Joined: 13 Oct 2001
Posts: 318
Groups: None
Hello Planning Mates,

As I said before, I belong to the old school and, as the saying goes, there are lots of ways of catching a fish and getting the job done, but there is nothing more satisfying than having analysed the project risks yourself rather than relying too much on computer simulation techniques.

In any case, for every risk there is also an opportunity, and if you are directly involved with developing the programme then you know exactly the project's strengths and weaknesses and the mitigation measures you need to implement.

By the way, we are in the same boat with regard to identifying and managing risks throughout the life of a project; that is "a no-brainer". I think everybody in the construction industry understands that very well.

The bottom line is, when it comes to risks, use your "common sense" rather than the computer. This is my personal opinion though; maybe it is because I am an old dog.

Best regards,

Daniel
Gary,
correlation usually depends on assigned resources - you can overestimate or underestimate their productivity.
Risk estimates are not accurate in any case.
We usually manage large projects consisting of many thousands of activities and with limited resources. So each iteration shall mean automatic resource levelling. And this may create huge problems with time and with the levelling heuristics used. As I wrote earlier, Pertmaster heuristics and P6 heuristics are different, and thus the results of the simulation may be useless.
I may decide to work in three shifts if at some moment I see that the project will be late - that is not the consequence of a risk event.

I have nothing against Pertmaster and risk simulation. I just want to warn that the results in any case are not accurate. You will be able to estimate the necessary contingency reserves, but keep managing risks during the project life cycle. Initial estimates will change and you shall be ready for this. Simulation accuracy is a myth.

Best Regards,
Vladimir
Gary Whitehead
User offline. Last seen 5 years 25 weeks ago. Offline
Vladimir,

Correlation:
Yes, it's not brilliant, but you can correlate activities, risks (which may impact multiple activities) and/or duration distributions, so you can speed things up a bit.

risk estimates:
I tend to have a 'base' probability in mind, which is then adjusted by project-specific factors. I don't have a database of how often endangered species or buried services are discovered, for example, but experienced contractors will have a good idea of the chances of such things happening, which is then informed by site surveys, etc.
The consequences of many risks can be very predictable (e.g. if we find an endangered species on site, we know how long it will take to trap them); for others, much less so (e.g. the consequences of finding buried services will depend on how many you find and where). For the latter, this is where a 3-point estimate is very useful.

Iterations:
Yes, too few iterations and you lose accuracy; too many and you lose time. This will depend on the size and complexity of the project, so being able to select the appropriate number of iterations is very important.

Conditional risks:
You can programme conditional risks using Pertmaster, but yes, trying to allow for all possible combinations at the start of the project is impossible. Even so, a lot of benefit can still be gained by modelling the major ones. And the risk model can be updated as the project progresses, much like we do with a schedule.

If there are actions that people will certainly take should a risk occur, then these should be factored into the consequences of that risk. So if you will move to three shifts should construction be delayed by planning permission, then the additional cost of that would be the consequence of the risk used in the modelling.

Resource levelling:
I’m afraid I never use automatic resource levelling, so I can’t really comment other than to say Pertmaster has a levelling algorithm.


As I previously mentioned, I have only just started using Pertmaster, so I cannot claim to have experienced tangible benefits from its use. All of the above is just based on my expectations.


Regards,

Gary
Gary,
You can correlate manually. Imagine this process in a schedule of many thousands of activities. Do you know how to correlate?

You shall simulate activity uncertainties and risk events. Did I understand correctly that you have statistics and good estimates of the probabilities of project risk events and their impacts? I did not mean activity estimates - that part is easy if the project is typical.

With a small number of iterations you will get a very approximate result. With a large number of iterations you will lose a lot of time.
If the number is small then you will not be able to estimate success probability trends. If the probability of achieving the target becomes 3% lower, it does not mean that you have discovered a problem, because the estimate is not stable and may change next time for the same initial conditions.

If a risk event happens the project will be done a different way; if two risk events happen, then another way; if we are running late at some moment we will start to use additional resources; at some moment we can decide to work in three shifts, etc. There are too many conditions and actions in real life. People do not just watch while their project moves towards disaster.

Unfortunately MC simulates only predefined performance, ignoring the actions that people will certainly take.

I am curious about Pertmaster - what resource levelling algorithms are used in the Monte Carlo simulation? Pertmaster works with different software packages that use different resource levelling algorithms. Does it mean that the simulations are done by these packages or by Pertmaster?
If by these packages, the results will be different for the same project transferred to Pertmaster from P6 and from MSP. If Pertmaster uses its own heuristics, then the result is almost useless, because resources will be allocated a different way (based on the scheduling package's algorithms). Do you know the answer to this question?

Best Regards,
Vladimir
Gary Whitehead
User offline. Last seen 5 years 25 weeks ago. Offline
Vladimir:

I’ve only used Pertmaster so far, but:

-It can correlate between activities, though you do have to manually choose which activities

-"Initial data are always of poor quality. Just because the project is unique and reliable statistics is absent." -Project may be unique but will often consist of many typical activities, for which a contractor should have good historical data. Another reason to analyse risks bottom up rather than top down?

-You can select the number of iterations to be used in Pertmaster

-Pertmaster can also employ conditional relationships (the ’fragnets’ Stephen was talking about)

Cheers,

G

Gary Whitehead
User offline. Last seen 5 years 25 weeks ago. Offline
Stephen:
Referencing your paragraphs:

1. Really? No reason to suppose a quantitative approach where you define the time risk associated with each activity will yield more accurate results than a quantitative approach where you decide at the project level it’s high risk so 30% schedule reserve is appropriate? If a detailed bottom-up approach to estimating time and cost yields better results than a bottom-down, why would the same not be true of risk?

2. I’m just starting to use this, but my intention is to group the project by similar (from a risk point of view) activities, and assign different shapes for each. Probably triangular for most types, but obviously the 3 point spreads will differ.

4. Good point about lags. Luckily I try and avoid using them, so shouldn’t be too much of an issue for me.

5. If you like the fragnets but don’t like the 3 point estimates, why not use the fragnets and not the 3 point estimates? What extra work do you have to do to get the benefit of fragnets?

Final. Why is it not scientific? Or as scientific as scheduling ever is? I must confess I don’t understand the algorithms used at anything other than a basic level, but what’s to understand: It’s a pretty basic iterative process, no?

The thing I find most powerful about MC simulations is the cost/benefit analysis you can easily do to understand which of the infinite mitigation activities you could implement are worth doing.
If you just say a project is high risk so give it 30% schedule reserve, then any mitigation action you could do to reduce the risk of delaying anything will seem quite pointless unless it has such a massive effect that it shifts the entire project down from high risk to medium. Or am I misunderstanding your approach?

Cheers,

G
Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Stephen:

Maybe because of what you say, Spider Project does not rely on Monte Carlo; even though it was, and still is, under continuous development by a mathematical genius, he decided not to use Monte Carlo.

You mentioned the following “There is no evidence that any three-point estimating method should give you better risk control than you’d get from (a) a thorough risk identification and analysis process with deterministic estimates and project-level schedule reserve of 10% for low-risk projects, 20% for medium risk, 30% for high risk, and 40-50% for very high risks.” I believe Spider Project methodology is kind of what you are describing here.

Best Regards,
Rafael
Hi Steve,
thank you for this post - I enjoyed it.

I want to add that activity durations and costs are correlated when different activities use the same resources (they depend on resource productivities and costs), and these correlations are not considered by existing Monte Carlo tools. So the results of the simulations are not reliable.
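
A tiny sketch of the effect being described (assumed figures only): if the same crew drives several activities, sampling their durations independently understates the spread of the total compared with sampling them with a common productivity factor.

# Illustration with assumed figures: independent vs. fully correlated sampling of
# three activities performed by the same crew. Ignoring the correlation narrows the spread.
import random
import statistics

DURATIONS = (10, 20, 40)   # planned durations in days (assumed)

def total(correlated: bool) -> float:
    if correlated:
        factor = random.uniform(0.8, 1.3)   # one productivity draw for the shared crew
        return sum(d * factor for d in DURATIONS)
    return sum(d * random.uniform(0.8, 1.3) for d in DURATIONS)   # a separate draw per activity

independent = [total(False) for _ in range(5000)]
correlated = [total(True) for _ in range(5000)]
print(f"independent sampling: stdev {statistics.stdev(independent):.1f} days")
print(f"correlated sampling:  stdev {statistics.stdev(correlated):.1f} days")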

Initial data are always of poor quality, simply because the project is unique and reliable statistics are absent.
And using thousands of iterations to get a precise estimate based on these data is overkill.

If a risk event happens, people do something - some corrective actions, etc. So the simulation shall be done using conditional relationships and much more complex networks.

Monte Carlo simulation creates an illusion of accuracy.
For the above reasons this illusion is wrong and thus dangerous.

Best Regards,
Vladimir
Stephen Devaux
User offline. Last seen 2 weeks 55 min ago. Offline
Joined: 23 Mar 2005
Posts: 668
My, what an interesting discussion -- part PM history, part scheduling theory!

Actually, the first commercially available project management software system was neither Artemis nor Primavera -- it was PROJECT/2, from Project Software & Development, Inc. (PSDI) of Cambridge, MA. (I believe the first microprocessor-based PM s/w was Microplanner, which was introduced for Apple in the early 1980s.)

PROJECT/2 began as a graduate project at MIT in the mid-60s to computerize scheduling. One of the grad students, Bob Daniels, used the research as the basis to start PSDI in Cambridge in the late 60s, and it and Artemis became the only two packages that could be used on DoD programs with C/SCSC requirements.

PROJECT/2, which ran on IBM 360 & 370, and later on VAX, continued to be the s/w of choice at DoD contractors and nuclear power plants until the end of the ’80s. I worked at PSDI in Harvard SQ. from 1987-91. After PSDI’s UNIX-based product flopped in the early ‘90s, PSDI segued into maintenance software with its product MAXIMO. It changed its name to MRO Software, moved to Bedford, MA, and was bought out by IBM about two years ago.

For all its functionality, PROJECT/2 did not have a Monte Carlo simulator until about 1989, when it incorporated what it called a Probabilistic Risk Analysis, or PRA, module.

Someone earlier in the thread asked for thoughts about Monte Carlo simulations for risk analysis, so here are mine:

1. Lots of people think that MC simulations are some kind of magic bullet – I think it’s largely a crock. It has some limited value for resource usage and cost estimation, but for reasons mentioned below, it’s a waste of effort for schedule. There is no evidence that any three-point estimating method should give you better risk control than you’d get from (a) a thorough risk identification and analysis process with deterministic estimates and project-level schedule reserve of 10% for low-risk projects, 20% for medium risk, 30% for high risk, and 40-50% for very high risks. (And sure, 50% won’t always be enough – but you’re not going to get a higher estimate using a 2 SD reserve above the mean of the three-point estimates, either!)

2. I can't speak for other packages (like Spider), but I have experience with @Risk, Risk+ and Crystal Ball. Their salespeople will tell you how wonderful they are, and show you how you can choose the distribution shape of your three-point estimates from a selection of 30 optional shapes! Yeah, sure! How many people ever use those optional shapes for a project larger than 200 tasks? For larger projects, everybody, but everybody, runs the simulation using either the triangular default or the beta distribution default, for every task!

3. Last year at the PMI College of Scheduling Annual Symposium, one presenter was touting the value of one of the packages I mentioned above. I raised my hand and asked how he uses the distribution shapes. As expected, he replied that he just uses one of the defaults. I asked him which one. He replied usually the triangular, but "it really doesn't make much of a difference." What ignorance! The triangular distribution will typically give you a 1 SD above the mean schedule (approximately the 84% point) that is about 12% - 15% higher than the beta distribution default does. And that usually will make the difference between getting and losing a contract! So as I tell my clients, if you think that the estimate you get from using the Monte Carlo is too high, and that a competitor will submit a lower bid, just switch to the Beta distribution default! Which is closer to correct? I have no bleedin' idea, and nor does anyone else!

4. None of the packages I mention above is able to "vary" lag amounts in the algorithm: if the lag is input as 8D, that's what it remains for ALL simulations! So if the three estimates for an activity's duration are 5D, 10D and 25D, and it has an SS+8D relationship with its successor, it will use the lag as 8D whether it is modeling the duration as 5D or as 25D! This is almost always absurd: if the lag is 8D when the duration is 10D, surely in the vast majority of cases it won't also be 8D at 5D AND 8D at 25D! This inability typically results in a much longer schedule. (Now, Spider Project has a feature where lag can be input as either a work amount or a fixed time. I don't know what it does with work lags in its Monte Carlo simulations - if it varies them, that would certainly be an asset over most other software. I also don't know what Primavera does. Anyone else know?)

5. The inputting of fragnets IS a nice feature of the Monte Carlo simulation software - it not only reacts to them ("This fragnet has a 20% chance of being needed, so use it in 20% of your simulation runs."), but causes users to plan their fragnets. That, as I say, is nice - but I hardly think it's worth it for all the other work and expense that goes into three-point estimation and MC simulations.
The greatest value of three-point Monte Carlo simulations is the ability to say to a customer: "Look at all the work we did to get accuracy in our estimate! We input three estimates for each task! And then, relying on the work of Carl Friedrich Gauss and Abraham de Moivre (you know who they are, right, customer?), we ran this computer simulation 20,000 times and, as you can see, to have an 80% chance of finishing on time, we need a schedule reserve of 47.28 days. And this has got to be accurate because, after all, it came out of a computah!"

And customers who don't understand the theoretical basis and algorithms underlying the estimate are frequently bamboozled into thinking this is all very scientific. It isn't. Earlier in the thread, someone mentioned "Garbage in, garbage out!" Unfortunately, with Monte Carlo simulations, it's seen as: "Garbage in, Gospel out!"

Fraternally in project management,

Steve the Bajan
Daniel Limson
User offline. Last seen 5 years 18 weeks ago. Offline
Joined: 13 Oct 2001
Posts: 318
Groups: None
Hi Rafael,

I did not mean to embarrass you with your data, but just to make it clear. I said personal computers (PCs), which were introduced by IBM in the mid-80's. Artemis may have started in 1976 in the USA (used by NASA), but it was not commercially used until the 80's. The same goes for Monte Carlo: the method may have been available in 1946, but it does not mean that Monte Carlo software was already available at that time.

I did not come from China though. In fact, I worked with one of the leading US construction companies in the early 80's.

Cheers mate,
Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Daniel,

Maybe in the 70's there were no computers in China, but everywhere else there were computers. In the 70's all colleges, except in China, would have a computer center available for the use of the students. At that time you would use punch cards for data entry, and in some colleges you would keep your punch cards in a locker room near the computer center. I remember in my student days we used at our college a couple of PDP-10s and an IBM 360, and then a DEC 10, as Digital established some manufacturing facilities close to town. Then I moved in 1976 to Cambridge, MA, where we had, I believe, an IBM 370.

http://en.wikipedia.org/wiki/PDP-10
http://en.wikipedia.org/wiki/IBM_System/370

Computer simulation was so common that even the now obsolete Space Shuttle flights were simulated in the 70's, using dumb software, by one of my roommates, who worked at the time for Intermetrics, a Boston-based engineering consulting firm. Yes, here we love dumb software.

http://en.wikipedia.org/wiki/Intermetrics

Artemis and other software were available at the time for these computers, as you can find on the following Artemis Software site.

Artemis

Monte Carlo is a simulation method, not a piece of software; it might be that some software using the name came out in the 90's, but by that time the method had been in use for decades.

Monte Carlo

Be informed that I am not in the habit of lying or creating fantasy.

Best Regards,
Rafael
All risks shall be considered and managed, not just analyzed at the tender stage.
All risks may have complex impact (schedule, cost, scope, quality).
Best Regards,
Vladimir
Gary Whitehead
User offline. Last seen 5 years 25 weeks ago. Offline
Hi Daniel,

My view is that schedule risks should be treated the same as cost risks.

That is: identified in a risk register, mitigations agreed, and reviewed monthly throughout the life of the project until they disappear.

This is especially true of projects with significant LDs and/or mandatory end dates, which admittedly is the vast majority of the projects I work on, so my view may be skewed.

You don't have to use schedule risk software like Pertmaster for this, but it certainly makes the analysis much more powerful and less subjective.

It's a real eye-opener for a project team when you can demonstrate that they have a <1% chance of hitting the end date P6 generates unless they do something about their schedule risks.

I’ve only started using Pertmaster in the last few months, but as you can probably tell I am a definite convert!

Cheers,

G
Daniel Limson
User offline. Last seen 5 years 18 weeks ago. Offline
Joined: 13 Oct 2001
Posts: 318
Groups: None

Hi Gary,

Maybe I am getting older and belong to the old school, but anyway, using P6 is good enough for me to filter down potential risks.

In addition, schedule risk analysis should be done during the tender stage and should not involve too many activities.

Best regards,

Daniel
Gary Whitehead
User offline. Last seen 5 years 25 weeks ago. Offline
Hi Daniel,

If you know what the 100 highest (schedule and time) risk activities are, what the cost/benefit of mitigation activities will be, and the 80% confidence level completion date before and after mitigation, on a 10,000-activity programme, without using something like Pertmaster, you're a better man than me.

For those of us who can’t, there is a clear benefit in using this dumb software.
Daniel Limson
User offline. Last seen 5 years 18 weeks ago. Offline
Joined: 13 Oct 2001
Posts: 318
Groups: None
Hi Rafael,

In my student days 31 years ago, there was no software nor were personal computers available yet; maybe you mean the mid-80's. Software like the Artemis system started popping up in the mid-80's and Primavera in the late 80's (DOS-based); Monte Carlo came to the market in the nineties. Anyway, if you are responsible for creating the programme, you must know exactly where your programme is weak and can easily identify where the potential risks are. You do not need dumb software to analyse a programme. It is, after all, garbage in, garbage out.

Cheers mate
Rafael Davila
User offline. Last seen 1 week 5 days ago. Offline
Joined: 1 Mar 2004
Posts: 5241
Duncan

As a graduate student you must already know that in the past, when PERT was in its first stages, it was flawed, as it omitted from the computation long-duration activities that were not on the Critical Path but were close to it.

I am not sure all the new software got it right; it would be good to explore and comment on the validity of the algorithms.

In my student days, 31 years ago, it was said that the only way to reconcile CPM logic with statistical analysis was through Monte Carlo simulation. Maybe there are new valid techniques, and a few wrong ones still moving around.

Best regards,
Rafael
Oliver Melling
User offline. Last seen 5 years 9 weeks ago. Offline
Joined: 24 Apr 2007
Posts: 595
Groups: The GrapeVine
Duncan,

Have a look for information on Pertmaster Risk Expert, @Risk and Monte Carlo; these are the main schedule risk analysis tools in use in the UK.
duncan burns
User offline. Last seen 13 years 11 weeks ago. Offline
Joined: 29 Aug 2009
Posts: 16
Groups: None
Vlad,

Many thanks for your response. I will take a look at the link.

Thanks Again.

Duncan
Hi Duncan,
look at http://www.spiderproject.ru/library/SDPM_Canberra2004.pdf
Methods described in this presentation are implemented in the Spider Project software, which is widely used for construction management in Eastern Europe. Write to me if you need additional information.
Best Regards,
Vladimir