
Sciforma: a nice product, but I believe there is a huge deficiency in CPM calculations

Stephen Devaux

I haven't looked at Sciforma in several years, but when last I did, both it and its Scitor PS predecessors were very nice products.  Out-of-the-box, I definitely think it is a better product than MS Project. Of course, so too were Microplanner and several other competitors in that market niche.

However, comparing any other standalone to MS Project is a bit deceiving, as the market size of MS Project dictates that there are a VAST number of add-ons written and marketed for MS Project that are not available to the users of other software. That means that a product like Sciforma has to do all its own development, and wherever it doesn't, there will be deficiencies.

A fundamental one is in the CPM computation algorithm. Now, MSP certainly has its deficiencies in this field, too -- think about the failure to allow two dependency relationships (SS and FF) between two tasks! But that is something for which there is a simple workaround through milestones. However, and please correct me if I am wrong, whereas there is an add-on (from Sumatra.com) to MSP that computes critical path drag, Sciforma does not currently calculate drag.

Now, it's easy to say that this is not a big deal, as the only other package (as far as I know) currently available that does compute drag is Spider Project -- Asta, Open Plan, etc. currently do not. I remember 25 years ago when I worked for the company that made Qwiknet Professional, we made light of the fact that it didn't compute free float. Well, it should have, and our competitors made us bleed purple because of that deficiency.

But critical path drag is NOT free float! First, free float is, like total float, OFF the critical path, and second, the vast majority of PM software users don't even know what free float is, what its implications are, and how to compute it! (They should, but they don't.)

Conversely, and as the name implies, critical path drag is a metric that is ALWAYS on the critical path. And if you will pardon my saying so, although the calculations are related, drag is therefore more -- um -- critical than either free OR total float. To claim that a software package does critical path method when it does not compute critical path drag is close to false advertising ("Our spreadsheet software has great functionality, only it doesn't compute multiplication!"), and some day soon it will be.
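To make the metric concrete, here is a minimal sketch (Python, a made-up four-activity FS-only network with no constraints or calendars) of the brute-force way to get drag: the project duration with an activity at its normal duration, minus the project duration with that activity's duration set to zero. This is not how any particular package implements it, just an illustration of what the number means.

```python
# Minimal sketch: critical path drag by brute force on a tiny FS-only network.
# Activity names and durations are illustrative, not from any real schedule.

durations = {"A": 5, "B": 3, "C": 7, "D": 2}                  # days
predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}

def project_duration(durs):
    """Forward pass: the latest early finish is the project duration."""
    early_finish = {}
    def ef(act):
        if act not in early_finish:
            start = max((ef(p) for p in predecessors[act]), default=0)
            early_finish[act] = start + durs[act]
        return early_finish[act]
    return max(ef(a) for a in durs)

baseline = project_duration(durations)
for act in durations:
    # Drag = how much the project would shorten if this activity took zero time.
    shortened = project_duration({**durations, act: 0})
    print(f"{act}: drag = {baseline - shortened} days")
```

On this toy network the critical path is A-C-D, and C's drag comes out as 4 days rather than its full 7-day duration, because the parallel activity B has only 4 days of total float.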

Just to give you an idea, here is the abstract for a presentation (EVM-1389) being given this coming June in Washington D.C. at the Annual Meeting of AACE International (Association for the Advancement of Cost Engineering):

(EVM-1389) Critical Path Drag Cost for Corrective Action Development
Primary Author: Ms. Leah V. Zimmerman, CCC EVP PSP, Green Manor Group LLC

Abstract: Project Managers design corrective action plans by careful consideration of alternative approaches, often this process lacks a quantitative and disciplined method. Identification and focus on the project schedule activities that provide the greatest reward for the corrective action can improve the chances of the project meeting the technical objectives on time and within budget. The concepts of critical path drag, resource elasticity, expected monetary value and technical debt will be explored and explained. A quantitatively driven, corrective action decision-tree and process model will be proposed and the use of the model will be explored to improve project outcomes through better corrective action plans.

I have never met nor spoken to this person, though I'd like to and hope to in the future! But the fact is that someone whom I've never met is speaking at a D.C. Annual Meeting that will undoubtedly be attended by hundreds or thousands of Department of Defense cost engineers. The presentation is in the Earned Value Management track, on the subject of "critical path drag, resource elasticity, expected monetary value and technical debt". This all indicates not only that these TPC topics are now "out there" in the PM community, but also that people are recognizing their importance and extending them not only to cost (which drag cost already does) but also to EVM.

I've been pushing the rock up this hill for many years now. Needless to say, I'm pleased...

Vladimir, I don't know if Spider was planning to have a presence at that meeting, but you might want to consider it...

Fraternally in project management,

Steve the Bajan 

Replies

Stephen Devaux

"Fortunately Spider development team is not fully contaminated by western models..."

Rafael,

You might very well say that (twice, even!), but I couldn't possibly comment!  ;-)

As to Drag values changing, absolutely! Whenever the critical path changes, it is mandatory to:

1. Re-compute the Drag values of the new critical path tasks.

2. Re-compute the Drag Cost values of the new critical path tasks!

3. Determine the value-added of each optional task on the new CP to ensure its value is not less than its True Cost (True Cost = resource cost + Drag cost).

Projects are performed every day with optional activities that were worthwhile performing when they were OFF the critical path, but that should be jettisoned if their Drag makes their True Cost greater than their value-added.
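As a toy illustration of step 3 (the figures are invented, and I am making the simplifying assumption that Drag Cost is just drag in days times a project cost of time per day), the check can be as simple as:

```python
# Sketch of the value-added test for optional critical-path activities.
# All figures are invented; drag cost here = drag (days) x assumed cost of time.
COST_OF_TIME_PER_DAY = 10_000        # assumed project delay cost in $ per day

optional_cp_activities = [
    # (name, drag in days, resource cost $, value added $)
    ("Extra QA cycle",  3,  8_000, 60_000),
    ("Glossy report",   2,  5_000, 12_000),
]

for name, drag, resource_cost, value_added in optional_cp_activities:
    drag_cost = drag * COST_OF_TIME_PER_DAY
    true_cost = resource_cost + drag_cost    # True Cost = resource cost + Drag cost
    verdict = "keep" if value_added >= true_cost else "consider jettisoning"
    print(f"{name}: True Cost ${true_cost:,} vs value-added ${value_added:,} -> {verdict}")
```

The moment the critical path shifts and an optional activity acquires drag, its True Cost jumps while its value-added stays put, which is exactly when this test starts failing.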

Fraternally in project management,

Steve the Bajan

 

Rafael Davila

Stephen,

Spider's Monte Carlo is in its infancy, so we shall wait until Vladimir discloses the details on how to use it. Much of the additional and unique functionality might still be in the works. Fortunately the Spider development team is not fully contaminated by western models that get some calculations wrong (resource critical float, negative float and a few others), so wrong that users repeat the errors over and over without even noticing.

I believe Vladimir recognizes the limitations of Monte Carlo for many reasons, some of them the ones you have mentioned. Still, he points out how valuable it is: quantitatively it may not be precise because of the data source, but in qualitative/relative terms it gives you much value. I am a fan of statistical methods even when we can never get precise statistics; a single-valued analysis is no better than an approximate statistical analysis, it is less.

I do not know if you ever read the book "The Flaw of Averages" and the story about the statistician who drowned crossing a river whose average depth was 3 ft; he did not drown because of a "black swan event" such as a 1,000 ft deep hole. The book warns about how wrong relying only on averages can be in many cases.

  • The same thing happens in scheduling: it can even happen that the most probable critical path is not the one you get by inputting average values into a deterministic model. Probabilities of near-critical activities might be relevant (see the sketch after this list).
  • The same might happen to DRAG values: if the critical path changes, they will change.
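To illustrate the first point (the sketch promised above), here is a rough Monte Carlo criticality-index calculation on a made-up FS-only network with invented triangular estimates; nothing here reflects how Spider or any other package actually does it.

```python
# Sketch: criticality index by Monte Carlo on a tiny FS-only network.
# The network and the triangular estimates are invented for illustration only.
import random

predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
successors = {a: [s for s, ps in predecessors.items() if a in ps] for a in predecessors}
# (optimistic, most likely, pessimistic) durations in days
estimates = {"A": (4, 5, 7), "B": (2, 3, 9), "C": (5, 7, 8), "D": (1, 2, 4)}

def forward_backward(durs):
    """Return early-finish and late-finish dicts for an FS-only network."""
    ef, lf = {}, {}
    def early(act):
        if act not in ef:
            ef[act] = max((early(p) for p in predecessors[act]), default=0.0) + durs[act]
        return ef[act]
    for a in durs:
        early(a)
    project = max(ef.values())
    def late(act):
        if act not in lf:
            lf[act] = min((late(s) - durs[s] for s in successors[act]), default=project)
        return lf[act]
    for a in durs:
        late(a)
    return ef, lf

RUNS = 10_000
critical_hits = {a: 0 for a in predecessors}
for _ in range(RUNS):
    durs = {a: random.triangular(lo, hi, ml) for a, (lo, ml, hi) in estimates.items()}
    ef, lf = forward_backward(durs)
    for a in durs:
        if abs(lf[a] - ef[a]) < 1e-9:        # zero total float -> critical in this run
            critical_hits[a] += 1

for a, hits in sorted(critical_hits.items()):
    print(f"{a}: critical in {100 * hits / RUNS:.1f}% of runs")
```

With these numbers the deterministic (most-likely) schedule keeps B off the critical path, yet B still turns up critical in a noticeable share of runs because its pessimistic estimate is far worse than its most likely one.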

Best regards,

Rafael


Stephen Devaux

Hi, Rafael.

When you say "your software", I assume you don't mean MY software because I certainly don't have any! I agree with what you say about free float, both CPM and resource-leveled -- it is relevant (even if most schedulers don't understand it), and the old Qwiknet Professional product most certainly should have computed it.

"It will be interesting to see how Drag distribution comes out and how best to make use of this information."

As to Spider's new Monte Carlo functionality, I'm VERY eager to find out what it has to say about Drag distribution, and what YOU have to say about how best to use the info. Will it compute a distribution for Drag Cost, too? I wish you'd publish an article about it -- there is no one better equipped to do so.

As you may remember, I am NOT a huge fan of Monte Carlos, for several reasons:

1. People don't know the fundamentals behind them well enough to use them with the necessary rigor (e.g., running 5,000 or more activities ALL on a default Beta distribution!).

2. Using estimates that are no better than guesstimates (as opposed to having the sort of historical data and benchmarks that are commonplace in construction).

3. Garbage in, Gospel out: inputting guesstimates, running them on a default distribution, and then believing whatever you get.

4. Black swans.

That said, if planners use the Monte Carlo with sufficient rigor, it can add value.  (And I am very curious about what you report on the Drag distribution!)
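Just so readers can picture what a "Drag distribution" even means (I have no idea how Spider implements it; this is a made-up example with invented triangular estimates), one rough way to get one is to recompute brute-force drag on every Monte Carlo iteration and look at the spread:

```python
# Sketch: a Drag distribution by brute force inside a Monte Carlo loop.
# Network and estimates are invented; this is only to illustrate the idea.
import random
import statistics

predecessors = {"A": [], "B": ["A"], "C": ["A"], "D": ["B", "C"]}
estimates = {"A": (4, 5, 7), "B": (2, 3, 9), "C": (5, 7, 8), "D": (1, 2, 4)}

def project_duration(durs):
    ef = {}
    def finish(act):
        if act not in ef:
            ef[act] = max((finish(p) for p in predecessors[act]), default=0.0) + durs[act]
        return ef[act]
    return max(finish(a) for a in durs)

drag_samples = {a: [] for a in predecessors}
for _ in range(5_000):
    durs = {a: random.triangular(lo, hi, ml) for a, (lo, ml, hi) in estimates.items()}
    baseline = project_duration(durs)
    for act in durs:
        # Drag this run = shortening if the activity's duration dropped to zero.
        drag_samples[act].append(baseline - project_duration({**durs, act: 0.0}))

for act, samples in sorted(drag_samples.items()):
    print(f"{act}: mean drag {statistics.mean(samples):.2f} d, "
          f"80th percentile {statistics.quantiles(samples, n=5)[-1]:.2f} d")
```

A Drag Cost distribution would then just be this multiplied through whatever cost-of-time model one chooses to apply.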

Fraternally in project management,

Steve the Bajan

Rafael Davila

Stephen,

Anything that is relevant to time or monetary cost matters; the slope of the cost curve matters. Spider recently added Monte Carlo to its probabilistic methods; the 3-scenario approach is still available. It will be interesting to see how the Drag distribution comes out and how best to make use of this information.

Regarding your mention of free float, I would like to highlight that just as there is a resource-critical total float, there is a resource-critical free float. Because many software computations fail to always display correct values for resource-critical float, I suspect their values for resource-leveled free float are wrong as well. In the case of Monte Carlo runs they might get thousands of wrong values.

The following sample job might be too simple to rule out the possibility that your software always discloses correct values for resource-leveled free float, but since a single occurrence is enough to prove the point, if your software gets this one wrong that will be enough not to trust the float values it displays for a resource-leveled job.

[Screenshots ff01 and ff02: the sample resource-leveled job and its displayed free float values]
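For readers who cannot see the screenshots, here is an analogous toy case in Python (three invented activities, a single one-unit resource, and a deliberately naive serial leveler with a fixed priority order) showing how an unleveled free float figure can mislead on a leveled job:

```python
# Sketch of why displayed free float can mislead on a resource-leveled job.
# One renewable resource with capacity 1; activities and durations are made up.
# A and B both need the resource; C follows B by finish-to-start logic.

durations      = {"A": 5, "B": 5, "C": 3}
predecessors   = {"A": [], "B": [], "C": ["B"]}
needs_resource = {"A": True, "B": True, "C": False}

def leveled_finish(start_delays):
    """Serial leveling: earliest start honoring logic, delays and the 1-unit resource."""
    finish, resource_free_at = {}, 0.0
    for act in ["A", "B", "C"]:                  # fixed priority order
        start = max([finish[p] for p in predecessors[act]] + [start_delays.get(act, 0.0)])
        if needs_resource[act]:
            start = max(start, resource_free_at)
        finish[act] = start + durations[act]
        if needs_resource[act]:
            resource_free_at = finish[act]
    return max(finish.values())

baseline = leveled_finish({})                    # leveled finish: 5 + 5 + 3 = 13
# An unleveled CPM calculation would report free float = 3 for activity A
# (project end 8 minus A's early finish 5), suggesting A can slip harmlessly.
delayed = leveled_finish({"A": 3.0})
print(f"Leveled finish with A on time:               {baseline} days")
print(f"Leveled finish after a 'free' 3-day slip of A: {delayed} days")
```

The unleveled CPM numbers say A could slip 3 days "for free", yet on the leveled schedule that slip pushes the whole job from 13 to 16 days, which is exactly the kind of wrong displayed float the screenshots above were meant to expose.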

By the way, I do not believe there is such a thing as truly "free" float: even if consuming free float shows no impact on the deterministic schedule, it might reduce your probability of success when you perform a Monte Carlo run.
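A quick made-up simulation of that last point, with invented triangular durations: B has 4 days of free float deterministically, so consuming it shows zero impact on the single-value schedule, yet the probability of overrunning a target date goes up once uncertainty is applied.

```python
# Sketch: consuming "free" float erodes the probability of on-time completion
# even when the deterministic schedule shows no impact. All numbers invented.
import random

TARGET = 16.0     # deterministic finish is max(10, 6) + 5 = 15, plus 1 day margin
RUNS = 20_000

def probability_late(b_start_delay):
    late = 0
    for _ in range(RUNS):
        a_finish = random.triangular(9, 13, 10)                    # activity A
        b_finish = b_start_delay + random.triangular(5, 10, 6)     # activity B
        c_duration = random.triangular(4, 7, 5)                    # activity C after A and B
        if max(a_finish, b_finish) + c_duration > TARGET:
            late += 1
    return late / RUNS

print(f"B starts on time (float preserved):    P(late) = {probability_late(0.0):.1%}")
print(f"B starts 4 days late (float consumed): P(late) = {probability_late(4.0):.1%}")
```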

Best regards,

Rafael