
Interview with the Father of the Worldwide Benchmarking Project

This interview between Howard Rubin and Michael Milutis, the IT Metrics and Productivity Institute's Executive Director, was conducted in January of 2006.

Could you tell us a little bit about yourself, the path your career has taken and what you are working on today?

HOWARD RUBIN: In the early 1990s I got involved in the balanced scorecard project, and as a result I learned a lot about technology measurement in a business context. I started collecting more and more business data, not just technology data, and I also started focusing on measurement design. It was at about this time that I was asked to help the government of Canada develop a facts-based approach to IT management. This project eventually became known as the Worldwide IT Trends and Benchmark Project.

Concurrent with this, I was also working as the head of the Computer Science Department at Hunter College in New York. I was the Chairman of the department, running the doctoral program, but also using my own resources to build a company outside the university that would focus on conducting benchmarking work, competitive calibrations, and scorecard development for companies.

Coming off of the Worldwide Benchmarking Project in the late 1990s, I decided to commercialize the idea, so I set up a vehicle on the internet called Metric-net, whose purpose was to collect data worldwide through data interchange and data bartering. Eventually, this became the basis for the “future” worldwide benchmark project and database, which ultimately got integrated into META Group. Before META Group acquired the benchmark database, I was still doing benchmarking, metrics, and analytics work first hand inside companies. After the acquisition, I became an officer of META Group. Recently, META Group was acquired by Gartner, and Gartner continues to update the Worldwide Benchmark database annually.

What I do now is consulting. I work with very large enterprises, usually those with over a billion dollars in IT expense, and I help them define their use of measurement in both a business and technology management context.

What are some of the biggest changes that you have seen in this field since you got started?

HOWARD RUBIN: One of the biggest changes that has taken place in this field is that you can't just benchmark IT alone anymore, because technology has become so much more pervasive. Benchmarking has also typically been a periodic exercise, one requiring lots of effort, internal work, long-term data collection, interpretation, and integration. But I think we are going to see IT benchmarking turn into much more of a real-time market data stream, one in which companies build their own competitive and operational scorecards, complete with key metrics, with the data flowing in just as if you were getting a feed from Bloomberg. My belief, in fact, is that benchmarking will have to become a continuous exercise, one that gets fully integrated into the management process and refreshed at least as frequently as companies update their financials or budgets.

How do organizations interested in benchmarking best determine what they should be measuring and how they should be measuring it?

HOWARD RUBIN: I think the key thing for organizations is bi-directionality. That means your approach to benchmarking must come from both the top and from the bottom.

From the top, you really have to understand your technology costs, the costs of your technology goods and services, almost as if you were a manufacturing company. You have to understand the cost structure of technology, what its impact is on your margin, and what the impact of your technology investment is on growth, shrinkage, and market share. And you have to integrate your understanding of the cost structure and performance structure of technology directly into the company's financials. You also have to figure out who you want to be looking at, in terms of comparisons. Is it direct peers, or is it organizations that have a business performance structure that you aspire to meet?

Another point I should make about the choice of measurements from the top is that there is this thing called the balanced scorecard, in which people look at their finance measures, customer-related measures, profit measures, and organizational measures; but these are just static measures. That means that if a company's strategic objective is to be the number one player within a given market, or to have the most comprehensive view of the customer, the balanced scorecard isn't going to cut it. It is directional measures, as opposed to static measures, which will tell you where you are moving versus where you would like to be, and what your corresponding rate of change is. And there are basically three kinds of these measures: positional measures, directional measures, and velocity measures. In short, you need to be benchmarking where you are, where your targets are, how fast your organization is moving, and how fast the world is changing. And all of this must be done within the context of strategy.

Approaching things from the bottom, you really have to understand a lot about technologies and about the technology organization itself. That means much more than just knowing how long it takes to develop an application, or the quality of your software, or the customer service component of your technology. It means you need to look at technology as a commodity, at the unit costs. You need to be able to understand, almost as if you had a technology catalog in front of you, what all of the technology components of your business consist of. What are your volumes? What are your unit costs? What are the costs to your competitors? What other alternatives are available out on the street in the open market?

And there are some other aspects, too. If you are a CFO, for instance, you really ought to understand where technology hits your P&L, where it impacts your salaries, your expense, and your depreciation. It is very important to understand how fixed or how variable your technology costs are. Finally, there is a kind of ethereal dimension that sits on top of all of this, one which involves how well you are using technology to innovate and change your business, as compared to your competitors.

In the end, what companies really need is a full navigational system: something that will give them the instrumentation to get them where they want to go, as well as the external calibration to see if someone is going to get there first, second, better, cheaper, or faster.
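
To make the positional/directional/velocity distinction concrete, here is a minimal sketch of how the three readings might be computed from a single metric's time series. This is an editorial illustration, not a method described in the interview; the metric, target, and figures are all hypothetical.

```python
from dataclasses import dataclass

@dataclass
class DirectionalReading:
    position: float   # where the metric stands today (the static snapshot)
    direction: float  # gap between the strategic target and the current position
    velocity: float   # average change per period (how fast you are moving)

def read_metric(history: list[float], target: float) -> DirectionalReading:
    """Derive positional, directional, and velocity readings from a
    metric's time series (history must hold at least two periods)."""
    position = history[-1]
    direction = target - position
    velocity = (history[-1] - history[0]) / (len(history) - 1)
    return DirectionalReading(position, direction, velocity)

# Hypothetical example: a quarterly customer-coverage score, target of 90.
reading = read_metric([62.0, 65.5, 70.0, 73.0], target=90.0)
print(reading)
# position=73.0, direction=17.0 points still to close, velocity ~3.67/quarter,
# so at the current pace the gap closes in roughly five quarters.
```

The point of the third reading is exactly what the answer describes: a static scorecard only gives you `position`, while `direction` and `velocity` together tell you whether, and how fast, you are converging on the strategy.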

How do you see the practice of benchmarking changing in the future, in terms of both content and usage?

HOWARD RUBIN: I frequently visit companies that claim they are a collection of so many different units and businesses that they can't be benchmarked, that comparisons are simply not possible. However, one thing that is changing about benchmarking is the introduction of a new technology, one that I've been working on, known as synthetic benchmarking. Synthetic benchmarking essentially enables you to build a model of a company from the piece parts of other companies. So if you are a financial services institution with a global markets component and private clients and a wealth management branch, you will be able to cobble together pieces of a Goldman, a Smith Barney, and a UBS. The mathematics and the data are now available to build a model of any company, of any size, and to determine what it should look like when its technology is performing at an optimum.

And the content of benchmarking is changing, too. The content is shifting away from simple measures of the IT spend (for example, the IT spend as a percentage of revenue) to the examination of IT spending from a P&L perspective, from a finance perspective, and from an earnings-per-share and economic-value-add perspective. More people also want to know how they compare on a process level. And there's a lot going on now in terms of people skills, too. With synthetic benchmarking, the content will continue to become more and more multi-dimensional, and far more business facing.
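
The interview doesn't spell out the mathematics behind synthetic benchmarking. One simple way to picture it is as a composite in which each single-line peer's metric is weighted by the share of your business mix that peer stands in for. The sketch below illustrates that assumption only; the function name, peer figures, and weights are invented for the example.

```python
# Hypothetical illustration: model a multi-line institution as a
# revenue-mix-weighted composite of single-line peer benchmarks.
# All peer figures and weights below are made up for the example.

peer_metrics = {
    # business line -> peer benchmark (IT spend as a % of revenue)
    "global_markets":    9.5,  # a Goldman-like peer
    "retail_brokerage":  6.0,  # a Smith Barney-like peer
    "wealth_management": 7.2,  # a UBS-like peer
}

revenue_mix = {
    # your company's share of revenue in each line (sums to 1.0)
    "global_markets":    0.50,
    "retail_brokerage":  0.30,
    "wealth_management": 0.20,
}

def synthetic_benchmark(metrics: dict[str, float], mix: dict[str, float]) -> float:
    """Composite benchmark: weight each peer's metric by the share of
    the business that peer stands in for."""
    assert abs(sum(mix.values()) - 1.0) < 1e-9, "mix must sum to 1"
    return sum(metrics[line] * mix[line] for line in mix)

print(f"Synthetic benchmark: {synthetic_benchmark(peer_metrics, revenue_mix):.2f}% of revenue")
# 9.5*0.50 + 6.0*0.30 + 7.2*0.20 = 7.99% of revenue
```

The result is a target for a company that has no direct peer: no single competitor matches your mix, but each line of your business can still be calibrated against someone.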

What are some of the major challenges that most organizations encounter when they first get started with measurements and benchmarking? What are some of the most common mistakes made? Do you have any caveats for organizations that are undertaking this for the first time?

HOWARD RUBIN: When you first get started with benchmarking, and you haven't done it before, you are basically going to be comparing data that you have internally with external data. Consequently, people will get their internal numbers and then they will get their external numbers and try to compare the two things right away. They will be looking for insights and conclusions and hypotheses. However, after the first round of benchmarking, you should really be making an effort not to look for insights and conclusions. You should be focused on rationalization. First-time starters need to understand that rationalization is part of the benchmarking process; it is not a precursor to the process.

The other issue with first-timers is the availability of data. It is very important to overcome the fact that you may not have a complete set of data available internally. This is always going to be an issue. Consequently, my recommendation is to treat your benchmark program as a step function: take a small core, build out, step up, sort of ratchet upward; start with the key questions you need answered first, and have the benchmarking provider map to your structure. You don't need to do everything at once. You can build things up throughout the process.

A final caveat involves management by numbers. For example, you will find many large organizations that have gone through multiple mergers and that haven't shed any of their redundant systems or redundant technologies. Certainly they can do better. But the path upwards is not going to be visible just by looking at the numbers. There may be a whole lot of other things that have to happen first. This is especially true if you are using benchmarking for internal target setting. My brother is a really fine physician, and he always advises his students not to look at the numbers but rather to look at the patient. That's an important caveat in benchmarking, too. The numbers will give you calibration. They will help you understand what side of the benchmark you may be on. But the goal is not to be better or worse than the benchmark. On either side of the benchmark, you can be learning how to improve your position.

You are known, among other things, for having collected and organized data into one of the world's largest information technology databases. Could you give us more information about this repository? For example, what kinds of metrics get tracked? How broad is the technological and geographical representation?

HOWARD RUBIN: The Worldwide IT Trends and Benchmark Database was really formalized in 1994. It was a project, as I mentioned before, which started out within the Canadian government. They were trying at the time to develop a global view of technology utilization in business. In its current form, the Worldwide Benchmark Database maintains data on more than 10,000 large companies, each typically over 500 million dollars in revenue. It covers companies based across 100 countries, so it has a really massive geographic spread. There is also a large diversity of data: everything from basic business and IT spending data, to detailed data on technology platforms, programming languages, application development productivity, application quality, size and number of personnel, compensation, practices and processes, and process maturity. You will even find customer-service-related data.

The database is also updated continuously. We use internet-based surveys for this, as well as data collection mechanisms that originate from within our own consulting engagements. Consequently, we are able to keep the data fresh on a daily basis, and we are able to update major trend levels on a quarterly basis. What that means is that if we see a major business or political change, we can sample thousands of companies within a 24-hour period to see if there is any movement. I don't think anyone else in the world right now has the capability to determine, within 24 hours, the effect that a world event may have on business decision-making and technology.

You originally asked me about how benchmarking has changed over time. Traditionally, benchmarking has been used to compare current data to historical data. What we are seeing now with the worldwide benchmarking database, however, is the comparing of current data with current data. That's an important development, in my opinion, because data is kind of like produce: it gets rotten after a very short period of time.

You recently published the Gartner Worldwide IT Benchmark report for 2006, which is focused on the tracking of global IT spending patterns. What do current IT spending patterns tell us?

HOWARD RUBIN: For a while, IT spending was experiencing double-digit growth. After 2001, though, you really started to see a pullback. Now we are seeing IT spending on the rise again, increasing at a rate of about 3-5% per year. Perhaps more interesting is the fact that from 1980 to 1990, IT spending totaled 600 billion dollars. From 1990 to 2000, IT spending totaled 3 trillion. From 2000 to 2005, in just five years, it reached 4.3 trillion. And IT spending in 2006 alone will probably reach 1 trillion. What that means is that each year we are looking at in the present is worth three 1990 years and fifteen 1980 years. That's important to keep in mind if you want to understand the pace of change.

Another interesting trend involves the shifting of IT spending within the overall IT portfolio. With the pressures on IT spending in the early 2000s, people started canceling development projects, and consequently infrastructure costs started eating up the bulk of the IT portfolio. People just weren't building new systems. But with all of this new money starting to come in, the money is going to development. That's the new trend right now. The money is going first to development and after that to infrastructure.
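
Rubin's claim that one present-day year is worth three 1990 years and fifteen 1980 years follows directly from the decade totals he cites. A quick back-of-the-envelope check:

```python
# Back-of-the-envelope check of the "three 1990 years and fifteen 1980 years"
# claim, using only the decade totals cited in the answer (billions of USD).

avg_1980s  = 600 / 10    # 1980-1990 total of $600B -> ~$60B per year
avg_1990s  = 3_000 / 10  # 1990-2000 total of $3T   -> ~$300B per year
avg_2000s  = 4_300 / 5   # 2000-2005 total of $4.3T -> ~$860B per year
spend_2006 = 1_000       # ~$1T projected for 2006 alone

print(f"One 2006 year = {spend_2006 / avg_1990s:.1f} average 1990s years")  # ~3.3
print(f"One 2006 year = {spend_2006 / avg_1980s:.1f} average 1980s years")  # ~16.7
```

The ratios come out to roughly 3x and 17x, consistent with the rounded "three" and "fifteen" in the answer.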

What does this tell us? What is the deeper meaning of this?

HOWARD RUBIN: What it means is that companies are trying to get more strategic differentiation through their systems again. And that also means we are going to see more integration. And this is not just Sarbox and identity-related compliance work; it is non-compliance spending as well. What it means is that we are going to start seeing technology being used to fight more competitive battles. Technology spending is going to be king in terms of strategic advantage and leverage.

What trends do you see coming out of the Asia-Pacific region over the next 2-3 years, and what will this mean for U.S. firms?

HOWARD RUBIN: Historically, if you look at IT spending patterns, you will see that the US has led the pack. But this has sort of come and gone in waves. What we see now is more overall technology investment in the Asia-Pacific region, as companies take steps to catch up, compete, or move ahead in technology spending. This is true across manufacturing, automobiles, electronics, banking, insurance: the total technology investment. Technology is going to be the way for the Asia-Pac region to reach existing markets in North America and Europe and emerging markets in China and India. This is virtually new territory. It is not a case of replacing old systems at branch offices. It is the penetration of a whole new world through the internet and electronic communication. And the Asia-Pac firms are going to be bumping up their technology investment and technology spending faster than anywhere else in the world.

What do you believe has to happen in Information Technology over the next 10 years for the US to see significant advancement? What are the major challenges facing us over the next 10 years and what should we be focusing on?

HOWARD RUBIN: There are so many forces at work that it's really going to be hard to predict what happens. Nevertheless, we are seeing now a tremendous growth in servers and even more growth in storage. Businesses everywhere are using information more intensively, and this is going to require a more powerful infrastructure. Naturally, this will lead to growth in servers and in storage. But even though these commodities are getting cheaper, the transition is going to be problematic. That's because the new technology is far denser. The new data centers are going to require much more power-intensive technology. That means a couple of things are going to have to happen. First of all, companies are going to have to deal with the remodeling of their data centers. They're going to have to start focusing on utility computing, i.e., more of the grid model of computing, to get capacity on demand. Why? Because they probably won't be able to afford the cost structure of the new infrastructure.

Another thing I see, and this is nothing that the other futurists can't tell you, is the absolute pushing out and pervasiveness of technology. This is part of the reason I advise organizations not to look just at IT but rather at the total technology spend. It is going to be very hard to draw the line between technology spending and IT spending, especially over the next 10 years. Technology is going to get into the very fabric of our lives: our clothes, our eyeglasses, everything around us. It will not just be the tools we use in the workplace. As a result, digital infrastructures are going to be merging, the economics of technology will need to be much better understood, and IT is going to be at the heart of a lot of it.

As technology moves more and more into the very fabric of our existence, the real-time existence of consumers and businesses and the economy, the reliability of systems will need to reach the level of dial tone. Consequently, the techniques used to develop systems, the quality of these systems, and the demands on the performance of these systems will all need to be higher than anything we've ever imagined. This, in turn, means that company stock prices will start to be impacted by project slippage. Project slippage is going to start taking out companies. The corollary to all of this is that, to survive, you will need to be highly efficient, the tools you use will need to be well beyond what we know now, and your project management disciplines will need to be off the charts.

The IT industry seems pretty far from that right now. The Standish Group, for instance, reported in 2000 that over 70% of all software projects undertaken by large, small, and mid-sized organizations came in late, over budget, or not at all. Given the fact that most of the information on software best practices has been around for 20-25 years, what do you attribute this to? What is it about the software industry that makes it so intractable, so resistant to the methodologies and processes that are taken for granted in other engineering disciplines? Have we simply not yet reached a point in the evolution of technology where it is necessary, as you just described, to have mission-critical rigor in everything that IT does?

HOWARD RUBIN: First of all, let me give you an interesting statistic. Since 1980, more than eight trillion dollars have been spent on IT, and four trillion of that was spent on projects. Four trillion dollars is larger than the GDP of every country except the United States, China, Japan, India, and Germany. Four trillion dollars is three times last year's GDP of the United Kingdom. And if you consider 70% in the context of the four trillion dollars in project spending, that comes out to 2.8 trillion dollars. Think about that. It is a massive number. It is bigger than the GDP of most countries. It represents over 25% of the GDP of the United States. Moreover, IT spending is accelerating. In 2006 alone we will be spending close to 1 trillion, with five hundred billion of that being spent on projects. In short, if these poor performance problems don't start turning around, the cost of poor project management is going to start exceeding the entire output of countries.

As to the underlying reasons for this, the second part of your question, it really comes down to the level of financial risk to the business. And as we start to see technology evolving, to the point that more and more technology gets woven into the fabric of everything around us, the demand for quality and performance is really going to force this issue once and for all. And the engineering techniques and disciplines will follow. They will simply have to. It is basic, capitalistic Darwinism. In short, we are entering a world of engineering that has increasingly greater potential for massive financial loss due to poor delivery or lack of quality. Consequently, more of what we do in technology and software is going to have to be pre-assured in this new world. In the old world, when software was still in its infancy, it simply wasn't necessary. Innovation was enough. But in the future, delivery will be king.

You are seeing this now when companies turn out products. If Apple turns out an iPod that has a bad battery, the stock goes down. If Microsoft turns out a new version of XP that has a bunch of security holes, the stock goes down. At one time it was enough simply to have the iPod or XP. But it isn't enough anymore. This transformation that we are talking about, the transformation of software into a real engineering discipline, is going to be driven by business forces.

I've been at a lot of companies over the years, and a key question I always ask is, “What happens if your product breaks?” It is very rare that anyone will look at this question in the context of the stock price or the earnings. People generally don't make the food chain connection. However, in the new world, when technology becomes the product, we are going to start seeing more of this kind of thinking. More and more people are going to be asking themselves questions like, “How can I guarantee that I achieve my deadline, lower my risks, and manage my financial loss?” And then they'll either get into the engineering and process discipline or they'll fall off the planet.
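
The 2.8 trillion figure follows directly from the numbers cited in this answer. A quick arithmetic check (the per-year figure at the end is an extrapolation applying the same 70% rate to the 2006 project spend, not a number from the interview):

```python
# Quick check of the arithmetic in this answer (trillions of USD,
# figures taken from the interview).

project_spend_since_1980 = 4.0   # of the ~$8T total IT spend since 1980
troubled_rate = 0.70             # Standish: late, over budget, or never delivered

exposed_to_date = project_spend_since_1980 * troubled_rate
print(f"Project spend exposed to poor delivery: ${exposed_to_date:.1f}T")  # $2.8T

# Extrapolation at 2006 rates: ~$1T/year total, ~$0.5T/year on projects.
exposed_per_year = 0.5 * troubled_rate
print(f"Exposure per year at 2006 rates: ${exposed_per_year * 1000:.0f}B")  # $350B
```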

Biography of Dr. Howard Rubin

Dr. Howard A. Rubin is a Gartner Senior Advisor and Professor Emeritus of Computer Science at Hunter College of the City University of New York. He is a former Board member and Executive Vice President of META Group, Inc. and a former Nolan Norton Research Fellow. Because of his extensive work in worldwide technology data collection and benchmarking, Dr. Rubin was a member of the Global Information Economy (GIE) Working Group of the U.S. State Department's Advisory Committee on International Economic Policy (ACIEP). In 2001 Dr. Rubin was a PWC Outsourcing World Achievement Award finalist for his work on outsourcing benchmarking, metrics, and fluid contracting. In 2001, CIO Magazine recognized him as one of their top innovation “gurus”. In 1997, Industry Week named Dr. Rubin one of the top 50 “R&D Stars to Watch”: individuals whose achievements are shaping the future of our industrial culture and America's technology policy.

Through his product experience and research, Dr. Rubin has collected data and organized it into what may well be the world's largest information technology benchmarking and trend-tracking IT and business database, drawing on data gathered through a network of more than 30,000 professionals in about 10,000 companies covering 50 countries. He is also the developer of the Global Technology Index and the State Technology Index, which have been widely used by the United Nations (GTI) and the US Senate (STI).

Dr. Rubin is a prolific writer and author. He formerly had his own IT newsletter, “IT Metrics Strategies”, and his own column in CIO Magazine, “Real Value”, and is the author of numerous books and articles on IT strategy, benchmarking, and related issues. He has been an area editor for IEEE, American Programmer, and other professional journals.

Today, Dr. Rubin is focusing his research in the areas of competitive benchmarking, IT service catalogs, IT investment portfolio management, global technology/offshore strategy, a new balanced business “scorecard”, creating “merger ready” organizations (the IT “mothership”), and the development of “Network Age” economic indicators. His international work in global technology economics and the IT workforce has been the subject of briefings to the White House and heads of state in Canada, India, and the Philippines.

Dr. Rubin's roster of benchmarking clients includes AIG, Alcoa, Altria, AOL/Time Warner, American Express, AT&T, Bertelsmann, British Airways, Cap Gemini/EY, Capital One, Deutsche Bank, JPMorganChase, CitiGroup, Del Laboratories, Fannie Mae, GE, GM, IBM Global Services, ING, Johnson & Johnson, Lehman Brothers, MetLife, Merrill Lynch, The MONY Group, Viacom/CBS, Verizon, Wachovia, Warner Music, Washington Mutual, and Young and Rubicam, among many others.

