
Team Involvement in Estimation is a Foundation to a Successful Project

Bill Hackenberg

Abstract

Accurate estimation is the foundation of a successful project. Active involvement of the project team in estimation not only improves accuracy but also fosters commitment that leads to successful on-time completion.

This paper discusses several techniques for arriving at accurate estimates for software development projects. In each case, team involvement is central to the project management process. This starts with training in simplified Function Point analysis and PERT (Program Evaluation and Review Technique) duration calculations. As estimation proceeds, process improvement tasks are added to help achieve the long-range goals of the organization along with the short-term goals of the project. Team review cycles are in-depth. As a result, final estimates are more accurate. Outputs from the estimation process lead directly to the formation of the schedule.

There is also an intangible deliverable from correct estimation: team involvement and buy-in. This will serve the project manager well as work progresses.

Project Management Body of Knowledge Areas:

Project Time Management (Activity Duration Estimating)

 

Introduction

When was the last time a business sponsor handed you a project without an implied completion date? If you’ve been working for the same companies I have, that would be never. There is always a market-driven target date that is held out early in the process and described as a "drop dead" or "must have" date.

Within this all-too-typical context, the project manager is then asked to provide a "project plan." In the business sponsor’s mind this means only the schedule (and not the other items in a project plan, such as the project charter, roles and responsibilities, and risk analysis). "Can I please have that before the end of the week?" It’s already Tuesday and the project manager now has a choice.

Do I quickly draft a schedule by backing off from the target date and filling in intermediate milestones? This will give the business sponsor a quick feel for what the schedule will need to look like to meet the target date. But even if it is labeled ***Draft***, it will cause problems later. It is amazing how draft dates can morph into real deadlines outside of your control. You haven’t really helped the business sponsor by just telling him or her what they want to hear with no regard to accuracy.

Putting together a schedule requires an understanding of the amount of work and the capabilities of the team. These are the basic building blocks of estimation. To truly help the business sponsor you must follow proper estimation methodology right from the start. It is also imperative that you involve the development team.

This is true even though it will probably take longer than Friday. Do I want to keep my job? Of course. But if you want to have a successful project, you’ve got to start with an optimal process that will provide accurate information and start to foster team buy-in. Now you are demonstrating your worth.

While it is true that a fair amount of schedule input comes from a project manager’s own expertise, you must also include the people who will be doing the work. This is required to generate accurate information. It is also a very important component of the team-building process. It is difficult enough to keep a large project on track without having also alienated your team by excluding them from the estimation process.

It is imperative that the project manager understands that a successful project is based on a foundation of team involvement. You must get your team off to the right start, and what better place to begin than estimation?

Activity Duration Estimation

The Project Management Body of Knowledge (PMBOK)(1) identifies the process of estimation in the section on Activity Duration Estimating (Section 6.3). In a nutshell, bottom-up duration estimation involves projecting the effort required and dividing by the number of resources available. For instance, a task estimated to require 2 person-months of effort would have a duration of one calendar month if we put two people on it. After tempering the math with expert judgment and activity sequence information, you arrive at a schedule.

It is interesting to note that the PMBOK says, "The person or group on the project team who is most familiar with the nature of a specific activity should make, or at least approve, the estimate." Most of the time this will be the people doing the work. However, it is not enough to simply ask them how long it will take. This approach leads to notoriously low estimates. An interesting observation of human nature is how we all underestimate how long it takes to do things; many of us are constantly late for everything. The effect worsens when you consider an entire group. This is why it may seem counterintuitive that the best approach to accurate estimation is to involve your team. To get there, you can’t rely on the way human beings typically arrive at their guesses.

A better approach to estimation is to provide individuals with a technique that assists them in making accurate estimates. By guiding the estimator through the correct analytical thought process, you increase the accuracy of the estimate. This requires some documentation, training, and forms to fill in.

The Five Levels of Software Process Maturity

The Capability Maturity Model® for Software (CMM® or SW-CMM®) provides software organizations with guidance on how to gain control of their processes for developing software, with continuous process improvement built on small, evolutionary steps toward well-defined maturity levels. It is very hard to improve the software process within an older software shop; it takes years to move up just one level. There are published cases of organizations expending significant funds and time to move up one level in 18 months, only to slip back to the prior level two years later.

The Software Engineering Institute (SEI) was established in 1984 by Congress as a federally funded research and development center with a broad charter to address the transition of software engineering technology. The SEI is sponsored by the U.S. Department of Defense (DoD) and is an integral component of Carnegie Mellon University. The SEI developed the CMM.

The following is from the SW-CMM, Version 1.1(2):

"The Capability Maturity Model for Software provides software organizations with guidance on how to gain control of their processes for developing and maintaining software and how to evolve toward a culture of software engineering and management excellence.

Continuous process improvement is based on many small, evolutionary steps rather than revolutionary innovations.

A maturity level is a well-defined evolutionary plateau toward achieving a mature software process. A maturity level identifies the maturity of the software processes of an organization.

[Figure 1: Maturity Levels of Software Development Organizations]

The following characterizations of the five maturity levels highlight the primary process changes made at each level:

1) Initial: The software process is characterized as ad hoc, and occasionally even chaotic. Few processes are defined, and success depends on individual effort.

2) Repeatable: Basic project management processes are established to track cost, schedule, and functionality. The necessary process discipline is in place to repeat earlier successes on projects with similar applications.

3) Defined: The software process for both management and engineering activities is documented, standardized, and integrated into a standard software process for the organization. All projects use an approved, tailored version of the organization's standard software process for developing and maintaining software.

4) Managed: Detailed measures of the software process and product quality are collected. Both the software process and products are quantitatively understood and controlled.

5) Optimizing: Continuous process improvement is enabled by quantitative feedback from the process and from piloting innovative ideas and technologies."

Published Maturity Levels of Software Organizations

A software organization determines its maturity level through a process of self-appraisal. The SEI does not certify organizations at maturity levels. In some instances the SEI participates in a self-appraisal, but it does not confirm the accuracy of the maturity levels published by organizations after they have completed one.

The number of high maturity organizations has grown steadily over the last decade, and dramatically in the last two years. When the first profile of maturity levels was published in 1992, no organizations had been assessed as Level 4 and only one organization, IBM’s Onboard Shuttle, had been evaluated as Level 5 using the software capability evaluation method. At the time of the 1999 survey, 40 organizations were known to have been appraised at Maturity Level 4 and 21 at Level 5.

Here are the current published maturity levels of software organizations as of November 28, 2001:

Level 2: 42 (17%)
Level 3: 91 (37%)
Level 4: 47 (19%)
Level 5: 68 (27%) (In the U.S. this includes divisions of Boeing, CSC, IBM, Lockheed Martin, Motorola, NASA, Raytheon, and the U.S. military)

While the number of higher maturity organizations is growing rapidly, 68 is still a fairly small sample and should not be over-interpreted. It is interesting to note that fully 38 of 68 organizations currently appraised at Level 5 are in India. The conventional thinking on this phenomenon is that these software shops are new and had no legacy ad hoc processes to unlearn.

 

Typical Software Development Life Cycle

A typical software development life cycle looks like this. It is what you would expect from a software development shop functioning at SEI CMM Level 1 or 2:

[Figure 2: Typical Software Development Process]

In a typical development life cycle, design is too short or non-existent. Design is not documented. Coding takes place too early. Much code is rewritten or thrown away as design flaws emerge. Development unit testing, where subsets of code are checked out individually, typically does not take place at all. System testing, where the entire system is checked out for the first time, is the first part of Quality Assurance and is performed by the QA team. The QA cycle is too long as the team attempts to "test in" quality to the system. The Development and QA teams do not intermingle, and each remains on its own side of the building.

 

Optimal Software Development Life Cycle

What we are trying to do is move to a more optimal life cycle. This would be expected from a software development shop functioning at SEI CMM Level 3, 4, or 5:

[Figure 3: Optimal Software Development Process]

In an optimal life cycle we spend much more time in design. Design is formal and documented. Design documents are reviewed. Coding is quick and a direct offshoot of good design. Development unit test is thorough and checks out all normal and error paths. System test is led by Development and is less problematic because everything has already been checked out in unit test. QA test is a formality and goes very quickly; QA verifies the quality that has been built into the system in the earlier phases. The Development and QA teams are fully integrated, and each is totally involved in all phases, e.g., QA participates in design reviews and code walkthroughs, Development reviews QA test cases, etc. As a result of a better software development process, the overall length of the schedule is shortened.

Inputs to Estimation

The activity duration estimation process takes, as its major input, the activity list. The activity list is itself derived from the work breakdown structure, which has been refined during the processes of activity definition and activity sequencing. These processes are very important to the generation of the schedule, but we will not go into detail here.

Standardized Activity Checklist

A great help to the eventual process of estimation is a standardized activity checklist. The standardized activity checklist contains typical activities from similar projects. This approach helps ensure that you consider everything your project needs and omit nothing.

The following activities are typically omitted from software development projects:

Write & review design documentation

Programmers will tend to write their code too soon and need encouragement to do formal design beforehand. A major trend in software engineering is object-oriented analysis and design (OOA/D), which leads to formal, documented design. It is much more effective when you use a standardized design language, the Unified Modeling Language (UML). There are good CASE (Computer Aided Software Engineering) tools on the market, most notably Rational Rose, that automatically create UML diagrams. The UML creates a standard set of design documents using OO formats. An important part of this process is review cycles. By making sure that these formal design and review tasks are in the WBS, they will get estimated and have a higher chance of actually being completed. This might sound like an over-simplification, but it is a very important phenomenon of the estimation process: include the tasks for the software engineering process improvements you want to accomplish, because if you estimate these tasks they might actually get done. The major reason good software engineering typically does not get done is that when developers are asked "how long" it will take, they do not consider these aspects of good engineering.

Code walkthroughs

Code walkthroughs are formal reviews of newly written source code, done before any testing takes place. The goal is to review a percentage of the total code (typically 10%), because code walkthroughs are time consuming and their effect waters down the more code you review. Techniques are used to pick which methods (OO software routines) will be reviewed. Typical choices are the base classes (root objects) that will be inherited the most, or the methods thought to be the most problematic. Typically, training is necessary to make code walkthroughs productive. There are techniques that help reviewers focus on potential design flaws or other coding practices that can cause problems during testing. Minor style considerations should not take up too much time in a code walkthrough, but programmers will typically dwell unnecessarily here. This training also sets expectations as to what needs to be done and raises the accuracy of the estimates.
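
As an illustration of picking review candidates, here is a minimal sketch. The 10% fraction comes from the text above, but the risk scoring by inheritance count and the data layout are hypothetical:

    def select_for_walkthrough(methods, fraction=0.10):
        """Pick the top fraction of methods to review, ranked by how widely
        the owning class is inherited (a hypothetical risk heuristic)."""
        ranked = sorted(methods, key=lambda m: m["inherited_by"], reverse=True)
        count = max(1, round(len(ranked) * fraction))
        return ranked[:count]

    # Hypothetical method inventory: name and inheritance count of its class.
    methods = [
        {"name": "Account.post", "inherited_by": 12},
        {"name": "ReportPage.render", "inherited_by": 0},
        {"name": "BaseDAO.save", "inherited_by": 30},
    ]
    for m in select_for_walkthrough(methods):
        print(m["name"])  # prints BaseDAO.save; 10% of 3 methods rounds up to 1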

Write & review test cases

Test cases should be written throughout the development and QA activities. Typically, however, they are not attempted until the QA cycle and are performed by the QA team. A better approach is to have the development team at least start the process of documenting test cases. These are then reviewed and used for development unit testing and development system testing activities. The test cases are later taken over by the QA team, which expands them to be more thorough.

Development unit test

This is the single highest-impact activity that developers could perform, but typically don’t, to improve the quality of the software engineering process. The reason development unit testing does not get done is that it requires major amounts of test harness code to be written. Code must be subdivided into units that can be surrounded by other code (the test harness) that simulates the rest of the system. The test harness is "throwaway" code, because later in the development process, as the packages are integrated, there is no longer a need to test packages individually. Nonetheless, development unit testing is very important because it catches most of your bugs early in the process. A significant advance in the field of software engineering is that the best CASE tools, once again Rational Rose, can now auto-generate development unit test harnesses for you. This is a new area in software engineering and may require significant strategizing with the development team to consider how it will be done. This strategy is then used to actually make the estimates.
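
To make the throwaway-harness idea concrete, here is a minimal sketch using Python’s built-in unittest module; the unit under test and the stub that simulates the rest of the system are both hypothetical:

    import unittest

    def apply_discount(price, customer):
        """The unit under test: pricing logic that would normally query
        a live customer service elsewhere in the system."""
        return price * 0.9 if customer.is_preferred() else price

    class StubCustomer:
        """Throwaway harness code: stands in for the real customer service."""
        def __init__(self, preferred):
            self.preferred = preferred
        def is_preferred(self):
            return self.preferred

    class TestApplyDiscount(unittest.TestCase):
        def test_preferred_path(self):
            self.assertEqual(apply_discount(100, StubCustomer(True)), 90)
        def test_normal_path(self):
            self.assertEqual(apply_discount(100, StubCustomer(False)), 100)

    if __name__ == "__main__":
        unittest.main()

Once the packages are integrated, the stub is discarded; the effort to write it is exactly what must be included in the estimate.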

Development system integration

Development system integration is performed in a "staging area" distinct from the QA test bed. It is the first check-through of the entire system against test databases and interfaces. Development should perform system integration testing before handoff to QA to assure that the software comes up, can perform basic paths through the UML use cases, and meets whatever other criteria QA has identified for readiness to hand off.

The standardized activity checklist is the single biggest tool to improve your software development process because it allows you to inject the necessary tasks before estimation commences. It is amazing how you can get whatever you want done if it has been estimated first and included in the schedule.

Formal Estimation Technique

The basic technique for helping people make more accurate duration estimates is to separate size estimates from effort estimates. All too often we jump to the duration question first, such as the classic, "How long will this take?" By first asking a size question, "How large is it?", and then the effort question, "How long would it take you to do this if you had nothing else to do?", you take a better first step towards accuracy.

Use a formal estimation technique that separates size estimation from effort estimation:

Bottom-up Size
Bottom-up Effort
Compare and level-set for all tasks
Top-down comparison with other projects


Estimate Size (Bottom-up)

Size is a function of many considerations and is specific to the work you are doing. Function Point Analysis(3) is a bottom-up approach to determining the size of the work to be performed. A "function point" refers to a discrete unit of size. In a traditional FP analysis, all aspects of the projected software application’s size are measured. Depending on the type of function point, e.g. a line of code, data type, or message type, standard weighting factors are applied. The particular software language being used also has a specific weighting factor that is applied. Some types of function points are difficult to project, such as the number of embedded SQL calls in a database application, which is very dependent on a schema that might not be known yet. FPs work well in predictable situations such as a package implementation, e.g. SAP or BAAN, where there is significant historical data to draw upon.

Unfortunately, for new software development a full FP analysis is difficult to implement. The effort required to gather all the required metrics does not necessarily improve your accuracy. Also, you may lose the interest of your team in the tedious process of gathering all the information and applying the requisite weighting factors.

A better approach for new software development is a simplified Function Point analysis that is customized to the specific work you are going to do. Start by asking the experts on your team what the components of size are for the task at hand. For instance, Java work will involve objects, methods (code), attributes (data types), and record types (database). Refine your size question to be, "How many objects will this require?", "How many methods?", and so on. By deferring the effort question, you help the estimator fully consider size.

Example: Java Size Elements (JSEs), estimate:

The number of Objects
The number of Methods (code routines) for each Object
The lines of code for each Method
The number of Attributes (data types) for each Object
The lines of code for each Attribute
The number of Record Types (database)
The number of Data Types for each Record Type


Example: HTML Size Elements (HSEs), estimate:

The number of HTML pages
The number of words of Text for each page
The number of Images on each page
The number of Input fields on each page
The lines of JavaScript code for each page
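
To turn these element counts into a single size figure, here is a minimal sketch; the element names follow the JSE list above, but the weighting factors and counts are hypothetical and would be calibrated from your own organization’s history:

    # Hypothetical weights for Java Size Elements (JSEs); calibrate from history.
    JSE_WEIGHTS = {"objects": 10, "methods": 3, "attributes": 1, "record_types": 5}

    def jse_size(counts):
        """Sum the weighted size elements into one bottom-up size figure."""
        return sum(JSE_WEIGHTS[element] * n for element, n in counts.items())

    # One developer's answers to the size questions for a single task.
    print(jse_size({"objects": 4, "methods": 22, "attributes": 30, "record_types": 2}))
    # prints 146 (size units, not effort; effort is asked for separately below)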


Estimate Effort (Bottom-up)

In a traditional full Function Point Analysis, the use of standard weighting factors and other constants allows the effort for the project to be directly calculated. There are also cost models such as COCOMO(4) (COnstructive COst MOdel) that can be used to automatically calculate effort. These approaches should only be used in predictable situations such as a package implementation, e.g. SAP or BAAN, but not for new software development.

The thrust of this paper is that the accuracy of estimation for new software is based more on the team’s expertise, i.e. you keep guiding the team and asking them questions. Estimating effort becomes an extension of estimating size.

For new software development, the simplified Function Point analysis does not directly calculate effort. Even if you happen to have reasonably reliable weighting factors, do not attempt to directly calculate effort. Rather, present the output from your effort calculations to the estimation team as a data point for their consideration. The source of data for effort estimation will continue to be the team.

Once size estimation is complete, you ask bottom-up "how long" questions. With the analytic thought process for size still in the mind of the estimator, you get a more accurate response to the effort question. This is the basic technique proposed here: create a process and a culture for accurate estimation to take place.

There are three types of estimators:

Always underestimate (best case)
Always overestimate (worst case)
Unpredictable

It is the unpredictable estimator that will cause you the most problems.

When asking for effort estimates, relate this phenomenon and ask each estimator which type they are.

Even though full PERT is seldom used today, the PERT weighted average is still useful because it clarifies the difference between best and worst case for the estimator and helps them to be more consistent and accurate.

PERT Weighted Average = (Optimistic + 4 x Most Likely + Pessimistic) / 6
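
A minimal sketch of the weighted average; the optimistic, most likely, and pessimistic figures are hypothetical person-day estimates for a single task:

    def pert_weighted_average(optimistic, most_likely, pessimistic):
        """PERT weighted average: the most likely estimate carries 4x weight."""
        return (optimistic + 4 * most_likely + pessimistic) / 6

    # Hypothetical estimates for one task, in person-days.
    print(pert_weighted_average(3, 5, 10))  # prints 5.5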

Compare for all Tasks

At this point in the estimation process it is very important to "level set" the data. What this means is to establish various checkpoint ratios to be used for comparison. In some instances revisions may be made after consultation with the estimators.

Create a size to effort ratio for each estimator

Size-to-effort ratio: StoE = Sum(all size estimates) / Sum(all effort estimates)

Compare for all tasks from each estimator

StoE data is then used to provide relative magnitudes for each estimate. These magnitudes are then evaluated for significant deviation from the mean. For those estimates that are more than one standard deviation away from the mean, refer back to the estimator for possible revision. For instance, suppose an estimator has provided 10 estimates, all within 25% of the mean except for one that is double the mean. For this item, a discussion should be held with the estimator to relate this information. The estimator might have a very good reason for the deviation, and this information can then be used, as appropriate, to revise other estimates for similar items. Or perhaps there is no reason for the deviation and the particular item needs to be revised. Either way, the process very much requires the involvement of the estimator and is respectful of their expertise. The information that the calculations provide is only input for the estimator to consider and perhaps use to make revisions. In no case should revisions be made without the estimator's approval.
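
A minimal sketch of this screening step; the task names and the size and effort figures are hypothetical:

    from statistics import mean, stdev

    def flag_for_review(estimates):
        """Flag tasks whose size-to-effort ratio falls more than one standard
        deviation from the estimator's mean ratio; refer these back to the
        estimator rather than revising them unilaterally."""
        ratios = {task: size / effort for task, (size, effort) in estimates.items()}
        m, s = mean(ratios.values()), stdev(ratios.values())
        return [task for task, r in ratios.items() if abs(r - m) > s]

    # Hypothetical estimates: (size in JSEs, effort in person-days) per task.
    estimates = {"login": (40, 8), "search": (35, 7), "reports": (90, 6)}
    print(flag_for_review(estimates))  # prints ['reports']; discuss with estimator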

Compare for all estimators

The StoE ratios for each estimator are then compared. This may reveal who the best-case estimators are and who the worst-case estimators are. Once again the information is related to the estimator for possible explanation and revisions either to that estimator's data or others as appropriate.

Review with experts and managers

Now you are ready to solicit additional input, including expert opinion and managerial review. It is very important to hold these review cycles as the estimation process proceeds, to gather as much relevant input as possible. These reviews will further increase your accuracy and bring additional project stakeholders into the process.

Do not automatically level set the data

A common mistake for the project manager is to undermine the process by arbitrarily changing the data using unreliable weighting factors. This will lower accuracy for several reasons. Most notably, it will lower the dedication of those supplying estimates. Developers are so used to having their estimates ignored that it is very difficult to rebuild their trust in the project. The importance of this dynamic cannot be overstated. At all times, respect the process and respect your developers. You can educate them and help them to be more accurate, but do not override them. In some organizations this is nothing short of a cultural upheaval; expect many discussions not only with the developers but also with the management team.

Try to understand the reasons for the differences

In almost all cases there are reasons for variation between estimates. This variation can be for a particular task or a particular estimator. It is much better to understand this variation than to override it for the sake of consistency.

If the estimators want to revise, then do so

When presented with conflicting data, estimators may still choose to stick with their original estimates. This is OK and should be respected.

Compare with Previous Releases

This is also referred to as analogous estimating or top-down estimation. Please note that we are only now adopting this approach in the estimation process, for validation and possible adjustment. In the typical process, this is the only form of estimation performed. By itself it is not very reliable, but when used in conjunction with bottom-up processes it results in further refinement of accuracy.

Tap company experts to compare with past projects

It is very important to get an apples-to-apples comparison. As best you can, apply adjustments and weighting factors to the historical data to make it as close to the project at hand as possible.

It is very important that actuals are used from the historical project, and not the original estimates. You would be surprised how often actuals are not available. If this is the case, it is definitely worth it to go back into the timecards database and attempt to reconstruct the data. The actuals can then be used to compute how accurate the original estimates were. This ratio can then be considered for application to the current data.

For instance, "Project B which we completed two years ago is about 2 and a half times this project." In this example a very simple division by 2.5 brings the numbers into alignment.

Another possible adjustment might be that a prior project was done using an object oriented language and many of the base classes do not need to be rewritten and can be inherited by the new code. In this example reuse weighting factors are applied to adjust the historical data down.
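
Combining the two adjustments above into a minimal sketch; the scale factor of 2.5 comes from the example, while the actuals figure and the reuse fraction are hypothetical:

    def analogous_estimate(historical_actuals, scale_factor, reuse_fraction=0.0):
        """Scale a past project's actual effort down to the current project.
        scale_factor: how many times larger the past project was (2.5 above).
        reuse_fraction: hypothetical share of work avoided by inheriting
        existing base classes."""
        return historical_actuals / scale_factor * (1.0 - reuse_fraction)

    # Project B's actuals were 1,000 person-days (hypothetical), it was 2.5x
    # this project, and roughly 20% of the work is avoided through reuse.
    print(analogous_estimate(1000, 2.5, reuse_fraction=0.2))  # prints 320.0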

Works best when previous projects are largely similar to current project

If it turns out that the prior project is too dissimilar to the current project, disregard the data. This is a difficult decision to make and may cause political fallout. Nonetheless, it is important for the historical data to be similar enough to the new project to consider.

Works best if company expert was on previous project

Having subject matter experts (SMEs) review your estimates provides invaluable input. If the SME was actually on the prior projects that you are using for comparison, this input becomes even more relevant.

Compute the Schedule

The final consideration is that duration calculations are actually the domain of the project manager. These are the final calendar-date calculations that are a function of the size and effort data gathered thus far, plus consideration of the capabilities of the development team. For the most part this is simple math: duration = effort divided by resources. But the mythical man-month effect comes into play here, and adding resources cannot forever shorten duration.

Result: easy creation and modification of schedule

Create MS-Project "tasks" (from the WBS)

Enter MS-Project "baseline work" (person-days of effort) for each task

Enter MS-Project "resource names" (people)

Indicate percent available for each resource

Calendar-days = person-days divided by people

Enter "start" dates

Let MS-Project compute finish dates for you
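
Outside of MS-Project, the underlying arithmetic is easy to sketch; the task figures and the availability percentage below are hypothetical:

    def calendar_days(person_days, people, percent_available=100):
        """Duration = effort / resources, adjusted for partial availability."""
        return person_days / (people * percent_available / 100)

    # A 20 person-day task staffed by two people who are each 50% allocated.
    print(calendar_days(20, 2, percent_available=50))  # prints 20.0 calendar days

As the mythical man-month caveat above warns, this simple division stops being honest as the head count grows.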

Estimation is done more than once in a project

Estimation is a multi-phased process. The increase in accuracy as estimation proceeds is a major concept to keep in mind, and explain to your business sponsors.

During initiation (50% accurate)
During analysis (80% accurate)
During design (95% accurate)

By definition, the initial estimate is inaccurate. So much is unknown at this point that it is impossible to achieve any degree of reliable accuracy. This is inherent to all software projects.

When the estimates do not meet the business target date, you must change one of the triple constraints:

Requirements (remove features)
Cost (add people)
Time (make it later)

Conclusion

Estimation is a combination of technique and methodology. It is also a tool to achieve the long-term process improvement goals of an organization. These are important considerations. But the methodology can fool a project manager into attempting estimation as a solo activity. This cannot establish the participative environment required for successful project completion. Instead, estimation should be approached as a team activity. Extend the methodology to gather accurate information from the people who will be involved in the project. This approach takes longer, but it produces more accurate results and gets the project off to the right start. A team that is involved from the start is committed and will succeed!

 

References

  1. A Guide to the Project Management Body of Knowledge, 2000 Edition, Project Management Institute, Inc., Newtown Square, PA.
  2. Capability Maturity Model for Software, Version 1.1, February 1993, Software Engineering Institute, Carnegie Mellon University, Pittsburgh, PA 15213. http://www.sei.cmu.edu/publications/documents/93.reports/93.tr.024.html
  3. The International Function Point Users' Group (IFPUG) Web site [online]. Available WWW <URL: http://www.ifpug.org/> (1996).
  4. COCOMO (COnstructive COst MOdel) Web site [online], http://sunset.usc.edu/research/COCOMOII/index.html, Center for Software Engineering, University of Southern California, Los Angeles, CA 90089.

 

Biography

Bill Hackenberg is an Organizational Change Management consultant. He has a BAEd and an MBA, and holds PROSCI Change Management, SCQAA, CSM, and PMP certifications. He has held positions as Programmer, Project Manager, Development Director, and Organizational Change Consultant at Citibank/TTI, Sage IT Partners, Cambridge Technology Partners, Toyota, Bank of America, The Capital Group, and DaVita HealthCare Partners. He has delivered major IT initiatives for Citigroup, Warner Bros., AT&T, and Honda.