
Interview With Pam Morris, Software Development Expert

An interview between Michael Milutis, Executive Director of the IT Metrics and Productivity Institute, and Pam Morris.

Could you tell us a little about yourself, your background and what you are working on today?

PAM MORRIS: My background was originally in medical research. I worked in that field before returning to university, where I completed my computer science degree. Soon after graduation, I became very interested in process improvement and measurement. Probably because of my background in medical research, I had become very conscious of the lack of rigor in the software industry. I started working with Capers Jones when he came out to Australia in the late 1980s, sponsored by the company I worked for. At that time, I became very interested in function points as a tool for early project estimation, and I have subsequently worked almost exclusively in the area of software measurement and function points. The lack of rigor around function points concerned me, so I have been committed ever since to working with all of the function point standards bodies to ensure that the way we measure function points is rigorous. To that end, I helped develop function point standards with ISO.

We frequently hear that fewer than 20% of software development projects succeed. In your opinion, what are some of the root causes of this problem? Furthermore, why have these kinds of failure rates been tolerated for so long in our industry? It seems unique to IT and to software.

PAM MORRIS: In my experience on Australian projects, I see a lot of development teams deciding on the scope of the project only after it has gotten into trouble, which is unfortunate. We typically find that the requirements specifications are extremely poor and that teams fail to scope the project adequately at the planning stage. And even if they do recognise that the project is very large, there has to be recognition of the risk of cancellation as well. People don't like hearing bad news. Often, when we are in the early planning stage and we recommend that they make a project smaller, they will recognise that their requirements specifications are inadequate, or that their functional specifications are not detailed enough to provide to a supplier, but they will proceed anyway.

We also get involved with a lot of government projects. In recent years, our Australian government got rid of a lot of its internal IT departments and instead outsourced the development of its large software applications. In doing so, the government here lost the business people who understand what is required to develop the base software. It no longer has people who know the technical requirements, how to specify them, or how to monitor and govern an outsourced IT project. As a result, these projects are virtually hijacked by the suppliers. It is only when the projects are not delivered on time, or the departments can't actually see what is being produced, that they panic and get us involved. So that is another key contributor to failed projects. I'm not sure that's specific to Australia; it's probably happening in a lot of organisations when they outsource.

Why has failure been tolerated for so long in the industry? I think people regard software as intangible – something they can't see until it's actually delivered. It's not like a building, where the user can visibly see progress to date. It's very hard for a user to determine progress on software. Moreover, developers have no measurement database. The percentage of software development organisations that actually have formal measures to demonstrate the stages of a project and report on them effectively is very, very small. As a result, people keep saying that projects fail and that software development is uncontrollable. I don't believe that at all.

Why is software process improvement so important? How can it help us address some of these issues? Could you define it for us?

PAM MORRIS: Software process improvement just for the sake of improving a rating, or for the sake of getting a CMMI or CMM tick in the box, is something we've seen a number of organisations attempt. Some have claimed to be CMMI level 5, and yet when we go in to count function points in their specifications, we would rate them as level 1. It concerns me that a lot of people see process improvement as a tick in the box and don't actually practice what they've been accredited to do. To me, process improvement is about doing things properly, being cost effective and efficient at developing software, and being able to measure things properly. It is also about knowing where your weaknesses are, being able to identify quantitatively and effectively what those weaknesses are, and then being able to demonstrate improvement.

We don't see a lot of formal CMMI programs here in Australia, apart from the outsourcing companies. Most of the smaller companies are just trying to do things better and more efficiently. The large outsourcing companies are required by their contracts to be at a certain CMM level, and they implement process improvement to get to that level. It is unusual to find organisations implementing process improvement primarily to make things better.

The critical issue here in Australia, and I imagine it's similar in the U.S., is that the cost of developing software is significant and labor costs are extremely expensive in relation to what the customer actually ends up getting. Not surprisingly, a lot of our software development is now being offshored. When that offshoring decision is made, people look at the hourly rate of developers rather than at what they're actually getting for their money. What I would like to see is much more focus on the dollar cost per function point delivered (or enhanced, or maintained, or supported) rather than on the dollar cost per hour of developers. In some of the measures we are involved with, the domestic teams actually develop more effectively; but management, when making these decisions to go offshore, always seems to focus on the hourly rate for development, not on how many function points they're supporting or developing.
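
To make that distinction concrete, here is a minimal sketch of the comparison Morris describes. All of the rates, hours, and function point counts below are hypothetical, not figures from the interview:

```python
# Hypothetical comparison: hourly rate vs. cost per function point delivered.
# All figures are illustrative inputs, not data from the interview.

teams = {
    # name: (hourly_rate_usd, hours_worked, function_points_delivered)
    "domestic": (95.0, 1600, 400),
    "offshore": (30.0, 1600, 90),
}

for name, (rate, hours, fps) in teams.items():
    total_cost = rate * hours
    cost_per_fp = total_cost / fps
    print(f"{name}: ${rate:.0f}/hour, ${cost_per_fp:,.0f} per function point delivered")

# The team with the lower hourly rate can easily be the more expensive
# one per unit of functionality actually delivered.
```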

How does your firm help your clients in the area of software process improvement?

PAM MORRIS: We don't specifically work in the area of software process improvement. What we focus on is implementing measurement to enable our clients to identify their current level of performance and then to identify areas of weakness that they can target with process improvement. Once we start collecting and analysing the data, we help them interpret what they are seeing, identify where their weaknesses are, and then make recommendations. For example, I've just finished an analysis for one of our clients which demonstrated that eighty percent of the defects delivered into production originated in the build phase of the software development lifecycle. So we're recommending that they target this stage: introduce more formal unit testing there and extend the unit testing processes they already have in place.
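
A minimal sketch of that kind of defect-origin analysis: tally production defects by the lifecycle phase that introduced them, then rank phases by share. The counts below are made up for illustration:

```python
# Attribute production defects to the phase where they were introduced,
# then rank phases by share. Counts are invented, not client data.
from collections import Counter

defects = Counter({
    "requirements": 12,
    "design": 18,
    "build": 120,  # i.e. the coding/unit-test phase
})

total = sum(defects.values())
for phase, count in defects.most_common():
    print(f"{phase:>12}: {count:4d} defects ({count / total:.0%})")

# A result like "build: 80%" points process-improvement effort at more
# formal unit testing in that phase.
```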

It seems that no matter how you look at process improvement, everything always comes back, in the end, to measurement. “You can't manage what you can't measure.” We hear that often enough. However, a key problem with measurement is that there are so many different types of metrics and so many different consumers of metrics. In light of this, how do organisations best determine what they should be tracking? What kinds of questions should they be asking themselves in order to select the right key metrics?

PAM MORRIS: There are two issues here. It's a bit of a chicken-and-egg situation when organisations implement measurement. We find that the success of a measurement program really depends on the maturity of the organisation. But how does an organisation become mature if it hasn't got measurement? We find organisations that are so totally chaotic that if they ask us to come in and assist them in implementing measurement, we're just measuring chaos. And if all of the processes are in chaos, then the measurement process is in chaos as well.

What we would normally do for an organisation at a very low level of maturity is find one or two key things that it wants to improve. Then we would identify a very small number – fewer than four – of basic measurements to collect and target those key areas. Before we tell someone what measures to implement, we always work with management to find the key result areas they're trying to address and identify what indicators would need to be collected to monitor those areas.

The good thing is that in measurement we always try to start small and work in increments. Any measurement program that involves collecting a large number of measures is almost doomed to failure, because it takes almost two years for the measures to become statistically informative enough to give real insight. The larger the program is, the more it will cost by that point and the more chance it will have of being cancelled. On the other hand, if you can get a quick return on areas of concern for an organisation and position yourself to make recommendations quickly, then your analysis will start to gain traction within the organisation.

Once people have been able to identify what their key areas are – their core metrics – won't they face a lot of cultural and technical constraints when they attempt to gather their data? How can this be overcome?

PAM MORRIS: One of the things that is often spoken of is automation, and I think automation does play a key role here. People perceive measurement as overhead. I was talking to one of my clients who is very measurement oriented, and he said that he couldn't possibly run a company without an accounts department, and no one would ever see the accounts department as overhead. To me, exactly the same is true of measurement: it is more than overhead; it is an absolutely key component of any process. Automation can help remove the overhead stigma, because figures can be collected more accurately and regularly, and collection winds up becoming part of the very fabric of the work culture. Unfortunately, function point counting has yet to be automated. That's not where I wish we were, and one of our objectives in trying to standardise the methodology was to enhance the potential for function point automation. I also think that, culturally, you have to be very careful about how you use numbers. It is very important that management presents the numbers as a way of encouraging people rather than as a way of assigning blame.

What is the role of management in all of this?

PAM MORRIS: Measurement has to be driven from the top down. When a measurement program has only a low-level champion, people just complain about decisions, because management won't use the numbers as input into decision making. When senior management actually recognises the value of measurement and pushes it downwards, the staff start recognising the importance of accurate and consistent data collection. That is one of the key success factors wherever measurement is working: management recognition that measurement is a process needing adequate resources and budgets, not to mention experienced and skilled people. Too often it is the people who are not good at much else who are allocated to the measurement department, which is unfortunate, because the rest of the staff then perceive measurement as unimportant.

One of the things we often see is organisations that get stuck in the trap of gathering metrics for the sake of gathering metrics. Once we've identified our metrics and developed a process for collecting the data, how do we go about analysing that data in order to derive meaningful and actionable information from it?

PAM MORRIS: I would invert that question. I think you need to start by determining what actionable information you need. At that point you can ask yourself: what kind of analysis would provide that? What type of reporting would you need? What measurements would you have to collect to support that analysis, and how frequently would you need to collect them? It's very frustrating for us, because we constantly get asked to take the backwards approach and go in and train an organisation in function points. If we do that, we're wasting our time. If they haven't worked out what they're going to do with the numbers, at the end of the course they will say, “Well, what are we going to do with this number now?” Training the staff in function points should be one of the last things, not the first. The fact is, function point metrics may, in the end, not be the right solution for what they need to do. So it is important to be very clear about what information management needs, what decisions they need to make, what information would support those decisions, what sort of analysis would be needed to provide that information, and then what kind of data should be collected to support it. If you approach things in this manner, your measurement program will have purpose. It will also have management commitment in the budget and a high level of importance. Measurement is very important, but not unless you know what you're going to do with the numbers.

Could you elaborate on some of the various measurement methodologies that are in use today, differentiate them, and maybe even comment on how an organisation can best decide which methodology is right for them? For example, there are a lot of different function point variations out there – how do you wade through them?

PAM MORRIS: I can comment on this from a functional size perspective, because I have been heavily involved in virtually all of the different functional size methods. In fact, one of the things we have done to address this issue is publish a document, part of the ISO/IEC 14143 standard on functional size measurement, that specifically addresses how to choose a functional size measurement method. It discusses which methods lend themselves more effectively to different types of environments. COSMIC is an excellent method, for instance, but we find from experience that it needs a more mature organisation to implement it. It hasn't got some of the infrastructure that already exists for IFPUG function points, in terms of tools and availability of training; by the same token, COSMIC addresses a lot of software that previously has not been counted, e.g. process control software, electronics software and military software, because people have found it difficult to map the functionality of such software back to the IFPUG measure. The document goes through all of the different parameters you need to look at. This subject area is a minefield for people, and that was one of the reasons we wrote it. Its exact name is ISO/IEC 14143, Part 6, and it has only been published in the last few months, so it's only just become available.

Function points do not take maintenance into account. In light of this, how does one come up with a realistic approximation of maintenance costs at the beginning of a development project? What is the best way to address this?

PAM MORRIS: One of the things we have found is that an application's size in function points is a very good indicator of the number of people that will be required to support it – in terms of minor bug fixes, help desk calls and just keeping the application going. We've been collecting these figures for years. Capers Jones first noticed the correlation between support effort and function points back in the 1980s, and I would say that this correlation is still there and still very strong. It's not a key metric in measuring maintenance, but it's quite an interesting one.
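
As a sketch of how that correlation gets used in practice: if you assume an "assignment scope" – the number of function points one support person can keep running – headcount falls out of functional size directly. The scope figure below is an assumed, organisation-specific benchmark, not a number from the interview:

```python
# Hypothetical staffing estimate from functional size. The assignment
# scope is an assumed benchmark; real values vary by organisation.

ASSIGNMENT_SCOPE_FP = 1_500  # function points one support person can maintain

def support_headcount(app_size_fp: float) -> float:
    """Estimated support staff for an application of the given size."""
    return app_size_fp / ASSIGNMENT_SCOPE_FP

for size in (1_500, 6_000, 15_000):
    print(f"{size:>6} FP application -> {support_headcount(size):.1f} support staff")
```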

Can you comment on some of the more salient differences that you see between the Australia/Asia Pacific region and the United States in terms of dollars being invested in IT, extent of offshore outsourcing usage, use of measurement, and acceptance of process improvement initiatives such as the CMMI?

PAM MORRIS: It's hard for me to comment on the United States. My international exposure comes from visiting the U.S., talking to people at IFPUG, and speaking with people at various conferences about what is happening there. What I hear is that a lot of the same things happening in the U.S. are happening in Australia. A lot of IT support and IT development is being offshored by the very large organisations, and that is a concern in both countries. It's also one of the reasons I would like people to measure dollars per function point.

Could you talk about that more specifically, about the dollar per function point method?

PAM MORRIS: It's a methodology that was first developed here in Australia and later adopted in Scandinavia. It looks at fixed pricing for software development: you supply a fixed-price bid that is based on dollars per function point. That requires knowing the environment you are going to build in, knowing the approximate size, and then bidding in such a manner that you can accomplish this.

The dollar per function point method improves the quality of your specifications, because if users are paying for every function point, the developers have a vested interest in getting the specs absolutely complete, so that they can be paid for all of their function points. It also gives the users the flexibility to make changes – and not be billed high rates for minor changes – because there is now an upfront agreement concerning the timing of such changes and the cost of each change relative to its functional size. The method gives control back to the users, because they know they can control the cost by removing functionality from their requirements, and if they put more functionality in, they can come up with a revised budget. It takes a lot of risk out of the supplier-client relationship, too, because the charging model is agreed upon up front.

It's quite an effective way of comparing offshoring as well. One of our Australian clients was trying to demonstrate to their American parent company that it was a lot more cost effective to develop in Australia than in Southeast Asia. The American parent company was merely looking at the hourly rate of a developer in a third world country versus the Australian developers' rates, which appeared on the surface to be fairly comparable to those in the U.S. But when they looked at what both sets of developers were actually achieving in an hour, the Australians were achieving many times more than the developers overseas. One particular organisation was distributing its software development around the world, with development shops in different countries. They started favoring the countries with low hourly rates because they thought they were getting more value for the money, but they were only looking at the hourly rate; they weren't looking at what those countries were actually developing for that dollar cost. It is more useful to look at it over a six month period and see what you paid and what you actually got. That, to me, is far more important than an hourly rate.

It is the same with function point counters. Our function point counters count hundreds of function points a day. I have encountered certified counters who count 14 function points a day – that is the measured rate within their company – when we would expect to count anywhere from two hundred to six hundred a day, depending on the quality of the documentation. And yet I've seen organisations consider moving function point counting offshore. They say, “If we can offshore the function point counting to another country, we will only have to pay them $5 an hour as opposed to the x number of dollars an hour we pay you.” But those counters count function points at a dramatically slower rate. It is all about seeing what you are actually getting for your money, rather than what you are paying per hour.
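
A minimal sketch of those fixed-price mechanics, using an assumed unit price. The rate and sizes below are illustrative, not terms from any contract described in the interview:

```python
# Sketch of dollar-per-function-point fixed pricing. The bid fixes a
# rate per function point up front, so scope changes reprice
# mechanically instead of being renegotiated. Figures are assumed.

RATE_PER_FP = 1_000.0  # agreed unit price, $/function point (assumption)

def bid_price(size_fp: float) -> float:
    """Fixed price for an agreed functional size."""
    return size_fp * RATE_PER_FP

baseline = bid_price(500)           # initial scope: 500 FP
after_change = bid_price(500 + 40)  # user adds 40 FP of functionality

print(f"baseline bid:   ${baseline:,.0f}")
print(f"revised budget: ${after_change:,.0f}")
# Removing functionality reduces the price the same way, which is what
# gives users direct control over cost.
```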

Is it true that the Australian government mandates the dollar per function point method for all government IT projects?

PAM MORRIS: It mandates it for high risk projects. Before they get government funding, high risk projects have to demonstrate that their expected dollars per function point are reasonable. The government also mandates that industry dollar costs be applied to determine whether the budget various government departments are asking for is realistic. What used to happen was that the Department of the Treasury would ask for a cost benefit analysis. A department would estimate that a project was going to cost half a million dollars, get the go-ahead, and then a year later come back wanting more money because they had only gotten as far as the functional requirements. After 5 years, when they finally reached the end of the project, they would wind up having spent 20 million dollars – something they never would have gotten permission for at the start of the project. So they now make everybody do a very rough function point analysis to scope the project up front, e.g. is it 500 function points or 5,000? Then they look at the data and determine that, based on $1,000 per function point, the project will wind up costing x number of dollars. This keeps the budget realistic and in line with what industry would expect a project of similar scope and scale to cost.
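
A worked sketch of that sanity check. The industry rate follows the interview's $1,000-per-function-point example; the tolerance and requested budgets are assumptions for illustration:

```python
# Compare a department's requested budget with the industry-expected
# cost for the project's rough functional size. Figures are illustrative.

INDUSTRY_RATE = 1_000.0  # $/function point, per the interview's example

def expected_cost(rough_size_fp: float) -> float:
    return rough_size_fp * INDUSTRY_RATE

def budget_is_realistic(requested: float, rough_size_fp: float,
                        tolerance: float = 0.25) -> bool:
    """Flag budgets deviating more than `tolerance` from expectation."""
    expected = expected_cost(rough_size_fp)
    return abs(requested - expected) / expected <= tolerance

# A 5,000 FP project requested at $500,000 is roughly 10x underfunded,
# like the half-million-dollar estimate that grew to $20 million.
print(budget_is_realistic(requested=500_000, rough_size_fp=5_000))    # False
print(budget_is_realistic(requested=5_200_000, rough_size_fp=5_000))  # True
```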

Do you see this happening in any other countries, in terms of the government taking such a strong position?

PAM MORRIS: It's been used heavily in Finland and throughout much of Scandinavia as a methodology, and I understand it is still being used there. I know the government office in Canada was looking at it, and I think the UK government office was as well, along with Korea and Japan. I've spoken in Japan a couple of times; they've asked me to come and speak about this methodology. I've been invited to speak about it in Korea, too. There are a lot of countries interested in this, and not necessarily just at the government level.

Do you see a lot of interest in this in the United States?

PAM MORRIS: No. I gave a presentation there last year and a few people were interested, but I haven't seen any significant interest in the United States. I have seen more interest from countries that are just starting to get involved in measurement and software development on a large scale. South Korea, for example, has actually mandated that all government projects be functionally sized and that an estimate based on functional size measurement be submitted before a project receives government funding. I also gave a presentation in China last September, and they were very interested in the dollar cost per function point model. That's not surprising: they are being pressured to demonstrate their productivity against other, more traditional offshoring countries. If they're going to compete against India, for instance, they have to be able to demonstrate that they can offer more value for the money.

Biography of Pam Morris

Pam Morris has over 20 years' experience in software development and since 1989 has specialised in software measurement and process improvement. Pam is currently the Managing Director of Total Metrics, which she founded in 1994 in response to the software industry's great need for better management and control of development processes. Pam is a past president of the Australian Software Metrics Association (ASMA), where she currently holds positions on both the Executive and the Benchmarking Database Special Interest Group. She also represents Standards Australia as the international project editor of ISO standard 14143, Parts 1 and 2, for Functional Size Measurement. Pam plays an active role internationally in the development of measurement standards and was a member of the International Function Point Users Group (IFPUG) Counting Practices Committee in the USA from 1993 to 2000. In 2006 Pam was awarded the Australian ITP Lifetime Achievement Award for her services to the IT industry.


