
Focus on David Garmus, Co-Author of Function Point Analysis: Measurement Practices for Successful Software Projects

Could you tell us a little about yourself, your background, and what you are working on today?

DAVID GARMUS: I graduated from UCLA and have an MBA from the Harvard Business School. After my undergraduate studies, I served twenty years as a Navy officer, during which time the Navy sponsored my graduate work. My Navy experiences brought me into contact with the IT field in its very early stages of development. After retiring from the Navy, I worked as a project manager and then as a development manager at the CACI Development Center in Pennsylvania, where I got involved in software measurement and function point analysis. Following my stay at CACI, I worked with Capers Jones for two years at SPR.

Currently, I am a Principal of The David Consulting Group, which David Herron and I founded on March 1, 1994. The David Consulting Group is a Software Engineering Institute (SEI) Capability Maturity Model Integration (CMMI) approved transition partner that helps software development organizations achieve software excellence using a metrics-centered approach. As a CMMI transition partner, we perform SCAMPI appraisals, give CMMI training, and use CMMI to identify and implement best practices. As an expert in function point analysis, I help clients of The David Consulting Group estimate effort, size, and schedule for development projects and application support. During a typical evaluation, I recommend techniques, additional tools, environmental changes, changes in personnel, changes in the way they manage, and improvements in their development process. I also serve as a federal court witness, testifying in cases that revolve around the sizing evaluation of software; most of my testimony has been in cases involving the Internal Revenue Service.

I'm also a past president of the International Function Point Users Group (IFPUG), and I've served as a member of IFPUG's Counting Practices Committee (CPC) since 1989. The CPC is the organization responsible for developing and maintaining the Function Point Counting Practices Manual, and it serves as a forum for resolving issues in counting methodology. Additionally, I'm a member of QAI, PMI, SEI, and IEEE. Consequently, I speak at a large number of conferences, and I keep myself pretty busy with writing projects: in addition to numerous articles, I've written several books with my partner David Herron.

The Standish Group reported in 2000 that 70% of all software projects were delivered over budget, behind schedule, or not at all. What is the current state of software project estimation today?

DAVID GARMUS: Estimation is like the weather: everyone complains about it, but no one can do anything about it. Most IT organizations admit they don't estimate very effectively. I would say that those Standish Group numbers are probably just as true today as they were when they were first reported. Clearly, there are solutions to the estimating problem, and these solutions are not expensive to implement. However, estimation is not viewed as a priority within most IT organizations, which means that most organizations are not getting involved in the solution. The organizations that do get involved – and that start collecting measurement information – tend to be very successful at estimating their projects and at improving their estimates over time. They tend to have a better relationship with the business users, too, and they certainly have a better chance of accurately predicting their project costs.

One of the reasons I left CACI was that we were working on a large government project that was taking years to deliver instead of months. I've since come to realize that, when delivering software, it is better to deliver releases over a shorter period of time so that people can continually see the benefit of what you are doing. When a project is delivered in pieces, people can start using it right away rather than waiting years for a single release.
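The improvement loop Garmus describes – collect measurement data from completed projects, then estimate new work from that baseline – can be reduced to a short calculation. The following is a minimal sketch; the project figures and variable names are hypothetical, not from the interview:

# Minimal sketch: estimating a new project from a historical baseline.
# All figures are hypothetical; a real baseline comes from an
# organization's own completed projects.

historical_projects = [
    # (size in function points, actual effort in person-hours)
    (250, 2000),
    (400, 3400),
    (120, 1100),
]

# Derive an average delivery rate (hours per function point).
total_fp = sum(fp for fp, _ in historical_projects)
total_hours = sum(hours for _, hours in historical_projects)
hours_per_fp = total_hours / total_fp

# Apply the baseline rate to a newly sized project.
new_project_fp = 300
estimated_hours = new_project_fp * hours_per_fp
print(f"Delivery rate: {hours_per_fp:.1f} hours per function point")
print(f"Estimated effort: {estimated_hours:.0f} person-hours")

As a baseline grows, the single average rate can be segmented by platform, team, or project type, which is one way estimates improve over time.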

Why is software measurement so important for project success? What are the key concepts and principles that IT executives need to understand about this subject?

DAVID GARMUS: In our book, Function Point Analysis: Measurement Practices for Successful Software Projects, David and I related software measurement to “expectation management.” Software measurement enables managers to properly set expectations for cost, effort, and schedule, for both the development and the maintenance of software. Measurement also enables organizations to forecast and model the financial gains that are latent in various process improvement initiatives.

At The David Consulting Group, we've tried to establish performance benchmarks and key indicators in order to build models that predict likely outcomes of projects. We use our models to survey organizations, benchmark the environment in which they operate, and evaluate the risks of current development activities. We look at each phase of development: requirements building, design, code construction, and testing. We then derive a list of questions to help the organization identify the strong and weak aspects of its projects. As companies build upon this, the evaluations can be used by C-level management to set expectations surrounding the cost and benefits of proposed improvement strategies and to set performance targets for projects. The bottom line is that “you can't manage what you can't measure.” If there's any one thing that IT executives need to understand about software measurement, it's that single, basic principle.

What are the relative advantages of function point metrics versus more traditional metrics, such as lines of code? How pervasive is the use of function points in our industry right now? Do you have any statistics on this?

DAVID GARMUS: The software sizing technique that delivers the greatest accuracy and flexibility is the function point methodology. At The David Consulting Group, we count lines of code when they are available, use models such as COCOMO and Predictor, and are also involved with Use Case Points. However, these sizing metrics tend to depend upon information that is not available until later in the development lifecycle. Function points, on the other hand, are easily determined from information available early in the project, such as a user requirements document or a functional specification. A function point estimate can even be derived from the information available in an early proposal.

Function points enable us to do a better job of managing projects, too. When building an estimate using the function point approach, the functional size is reviewed at the design stage as well as each time there is a change to the project. This makes it easier to manage expectations. If the customer requests an addition to the project, for instance, you can reply, “Based upon the additional functionality that you are requesting, this is going to be the additional cost and the additional time frame to deliver.” This makes it much easier for a business user to make decisions.

There are thousands of function point users around the world. The International Software Benchmarking Standards Group (ISBSG) just reported that 90% of their data is based upon IFPUG function points. Moreover, major users tend to be in areas where IT costs are very significant to the organization, such as banking, insurance, software development, and telecommunications.
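For readers unfamiliar with the mechanics, a function point count weights five component types identified from the requirements. The sketch below is a minimal illustration using the standard IFPUG low/average/high weights; the component counts are hypothetical, and a real count follows the rules of the Counting Practices Manual mentioned above:

# Minimal sketch of an IFPUG-style unadjusted function point count.
# Weights are the standard low/average/high values from the IFPUG
# Counting Practices Manual; the component counts are hypothetical.

WEIGHTS = {
    "EI":  {"low": 3, "average": 4, "high": 6},    # External Inputs
    "EO":  {"low": 4, "average": 5, "high": 7},    # External Outputs
    "EQ":  {"low": 3, "average": 4, "high": 6},    # External Inquiries
    "ILF": {"low": 7, "average": 10, "high": 15},  # Internal Logical Files
    "EIF": {"low": 5, "average": 7, "high": 10},   # External Interface Files
}

# Hypothetical counts identified from a requirements document.
counted = {
    "EI":  {"low": 6, "average": 4, "high": 2},
    "EO":  {"low": 3, "average": 5, "high": 1},
    "EQ":  {"low": 4, "average": 2, "high": 0},
    "ILF": {"low": 2, "average": 3, "high": 1},
    "EIF": {"low": 1, "average": 1, "high": 0},
}

unadjusted_fp = sum(
    counted[component][complexity] * weight
    for component, levels in WEIGHTS.items()
    for complexity, weight in levels.items()
)
print(f"Unadjusted function points: {unadjusted_fp}")

Because the inputs are just counts of inputs, outputs, inquiries, and files, this calculation can be run as soon as a requirements document or even an early proposal exists, which is the advantage Garmus highlights over lines-of-code measures.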

There are so many different metrics and so many different consumers of metrics out there. In light of this, how can organizations best determine what they should be tracking and measuring?

DAVID GARMUS: First of all, they must determine who their stakeholders are. Stakeholders are the people who are being served by the metrics. This includes project managers, team members, business users, and people in the corporate environment who will be utilizing the software and paying the software development bills.

Regarding the selection of metrics, I recommend three key areas: amount of effort, calendar time, and number of defects delivered. Effort estimation helps us measure project size and cost. That's because cost today is primarily driven by the hours of effort required to build or maintain an application. This is different from the early days of IT, when equipment cost was the most significant cost driver. Calendar time is also important, particularly for those organizations that are dealing with the public. Regarding defects, most of the companies we visit don't keep track of their defects during the course of a project, but they do track defects once the project or the application has been delivered to the customer. Thus, post-delivery defects are usually the simplest defect metrics to obtain. If you compress any one of these three measures, the other two usually balloon out. For example, if we try to cut costs by reducing the level of effort, the schedule usually increases and the number of defects delivered certainly increases as well.
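That three-way trade-off is easiest to keep visible when the measures are recorded consistently per project. Below is a minimal sketch of such a record with two derived indicators; the field names, labor rate, and figures are hypothetical, not from the interview:

# Minimal sketch of a project measurement record built around the
# three measures recommended above: effort, calendar time, and
# delivered defects. Names and the labor rate are hypothetical.

from dataclasses import dataclass

@dataclass
class ProjectMeasures:
    name: str
    size_fp: int            # functional size in function points
    effort_hours: float     # total effort expended
    calendar_days: int      # elapsed schedule
    delivered_defects: int  # defects found after delivery

    def cost(self, hourly_rate: float = 100.0) -> float:
        # Cost today is primarily driven by hours of effort.
        return self.effort_hours * hourly_rate

    def defect_density(self) -> float:
        # Delivered defects per function point.
        return self.delivered_defects / self.size_fp

project = ProjectMeasures("billing-rewrite", 300, 2600, 120, 9)
print(f"Cost: ${project.cost():,.0f}")
print(f"Defect density: {project.defect_density():.3f} defects/FP")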
For organizations that have been very effective in setting up their measurement programs, what is it that they are doing right?

DAVID GARMUS: First of all, effective measurement does not necessarily result in better specifications. It doesn't stop the user from changing the scope of a requirement, and it doesn't change critical deadlines that are essential to the competitive position of the organization. However, accurate measurement based upon a historical baseline produces an evaluation of the risk involved and the likelihood of project success, and if you are armed with the right information, you can set expectations and better manage desired outcomes.

When we build a measurement program in an organization, we take responsibility for having the skills and knowledge necessary for building that program: specifically, function point counting skills, knowing how to use available tools to assist in sizing, knowing which tools are available to assist in estimation and project management, and knowing how to conduct the analyses. This knowledge base is frequently missing at organizations that haven't embarked on a measurement program before. Organizations that get this right tend to understand what it is that they are measuring and why they are measuring it. There are numerous outputs produced by the measurement process – project costs, level of effort, defects, and schedules – and when planning a measurement program, an organization must develop a well-defined set of such deliverables before putting a process in place for collecting the data.

For organizations that are interested in going down this path for the first time, do you have any advice or any caveats?

DAVID GARMUS: Organizational awareness that ineffective measurement is a problem is a key starting point towards institutionalizing successful measurement practices. People have to be in favor of measuring. They have to have a reason for doing it, and they also have to assign responsibility for the measurement process. We did a study with QAI about 10 years ago that looked at what makes a successful measurement program. The organizations that tended to be successful at that time, and this still holds true today, were those that established measurement as a key strategic area and, consequently, maintained a centralized resource focused on it. Although project managers should be receiving and participating in the collection of metrics, successful organizations do not assign sole measurement responsibility to project managers: they are already overworked, and they have obvious biases that may make their projects appear more successful than they really are.

The aggregate productivity of software organizations has remained relatively flat for the past 25 years. Where is productivity trending today and what does IT need to do over the next 5-10 years to see significant improvement in this area?

DAVID GARMUS: One of the reasons productivity has diminished is that we've reached a point where calendar time tends to be a leading driver in our projects. As we strive to develop faster, we typically drive down productivity. However, on the whole I think that productivity is actually improving because of the tools and data available today.

An organization's capability to deliver software in a timely and economical fashion is influenced by a variety of risk factors. When assessing risk factors for our estimating model, we collect information about morale, the skill sets of the team, the environment in which they work, the skill sets of the users describing requirements, the amount of automation involved, the business environment, and so on. All of these factors have an impact on an organization's ability to deliver high-quality software in a timely manner and at a higher productivity rate. We should always be aware of the factors that drive productivity and understand why productivity in one project is different from productivity in another.

This is particularly true for offshore outsourcing. Offshore projects deliver results for fewer dollars than in-house projects. However, this is usually accompanied by decreased productivity, longer calendar time, and a higher number of defects. The cost has gone down because we are using resources that are less skilled and that also lack familiarity with the business nature of the software being developed.

We've assisted a number of our clients in improving productivity and quality, achieving improvements by changing things as simple as the requirements definition process; that is, making sure that requirements are well defined before we start. Organizations often want to start coding as soon as possible, but if they code before requirements are well defined, they will end up doing a lot of recoding, and this will raise project costs considerably. Another simple improvement that we frequently introduce is to conduct reviews and inspections early in the process; by early I mean at the time of requirements and design, at code reviews, and before the code is passed to the test team. Early reviews reduce the costs of testing and rework as well as the costs of maintaining that software after delivery.
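Productivity comparisons like the offshore example become concrete once delivery rate, defect density, and cost sit side by side. The sketch below is purely illustrative; every figure in it is hypothetical:

# Illustrative comparison of two projects of equal functional size.
# All figures are hypothetical. Delivery rate is function points per
# person-month; a lower rate with cheaper labor can still yield a
# lower total price, as discussed above.

projects = {
    "in-house": {"size_fp": 400, "person_months": 40,
                 "defects": 12, "cost_per_pm": 15000},
    "offshore": {"size_fp": 400, "person_months": 55,
                 "defects": 21, "cost_per_pm": 6000},
}

for name, p in projects.items():
    rate = p["size_fp"] / p["person_months"]  # FP per person-month
    density = p["defects"] / p["size_fp"]     # defects per FP
    total_cost = p["person_months"] * p["cost_per_pm"]
    print(f"{name}: {rate:.1f} FP/person-month, "
          f"{density:.3f} defects/FP, total ${total_cost:,}")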

Biography of David Garmus

David Garmus is a Principal of The David Consulting Group (DCG), an SEI CMMI® Approved Transition Partner and a PSM Transition Organization that supports software development organizations in achieving software excellence with a metrics-centered approach. David is an acknowledged authority in the sizing, measurement, and estimation of software application development and maintenance. He is a Past President of the International Function Point Users Group (IFPUG) and a member of the IFPUG Counting Practices Committee. He is also a member of QAI, PMI (and its Information Systems Specific Interest Group), SEI, and the IEEE Computer Society (and its Standards Association). David is the author, along with David Herron, of Measuring the Software Process: A Practical Guide to Functional Measurements as well as Function Point Analysis: Measurement Practices for Successful Software Projects. This interview between David Garmus and Michael Milutis, Executive Director of the IT Metrics and Productivity Institute, took place in May 2006.

