6. Performance Analysis

6.1 Why analyze performance?

Performance analysis is big business. It provides income for consulting firms, accountants, and statisticians. It generates weighty reports, which tell us that Victoria is doing much better than New South Wales in terms of hospital waiting lists, and that Queensland's administrative cost ratio in public housing is lower than in any other state.(19)

Before we look at the meaning of performance indicators, we should ask why they are necessary. Who uses them and for what purpose?

At a macro level, there is a view that the economic world can be divided into two categories. One category is the private sector, where competitive conditions prevail. There, Adam Smith's "invisible hand" will ensure that the market produces just those goods and services which consumers want, and that it produces them at least possible cost. The other category is the public sector, operating in areas where there is less than perfect competition. Because it enjoys the power to enforce consumption and to collect taxes, and because many of its enterprises are natural monopolies, it will not be kept in check by the invisible hand. It will be prone to supplying goods and services people do not necessarily want, to over-supplying some goods and services, to inefficient production, and to poor quality and responsiveness. The disciplines of public accountability, backed up by performance measurement, must substitute for Smith's invisible hand.

The question remains - who uses performance data and what meaning is given to it? Do the public carefully consider the meaning of the data, and are there rewards attached to good performance? Perhaps it can be useful, but, like all data, it must be interpreted intelligently.

At a micro level, perhaps it is of most use in helping managers make decisions. It helps inform their judgement. But it does not substitute for informed judgement. Too often what starts as a management information system becomes, by neglect, a management control system.



Information, not control

The difference is far from semantic. A control system takes over from the human element altogether. In certain cases this is functional. The simple thermostat which controls your room heater or air conditioner is a case in point. It is a typical (if simple) engineering servomechanism, which senses the room temperature, compares it with a set standard, and using a simple binary choice turns the heater or air conditioner on or off. It generally does a far better job at regulating temperature than a human agent can.

Where outputs can be easily defined and where mechanical systems can be designed, servomechanisms can and do work reasonably well. But even when they do, a human is often ready to take over - in case the autopilot takes the plane into a thunderstorm, or something arises for which the automatic control system was not programmed.

In more complex systems the human relies on a number of inputs, some quantitative, some qualitative, and in a process little understood brings these inputs together to make an informed judgement. The car driver uses numerical data (speedometer), visual estimation (the speed of the merging car), psychological judgement (the other driver's lack of eye contact) and brings all these, and more, elements together in a continuous process of feedback and adjustment.

The prudent investor buying shares will look at the corporate accounts - profit and loss statements and balance sheets. But she will know that this data, while being objective, is limited, constrained by the conventions of accounting. She will seek other data - the firm's reputation, the integrity of its managers, its market risk etc.

One particular bias in managerial decision-making is for information which is quantifiable to carry more weight than information which cannot be quantified.(20) Conversely, those who ignore quantifiable data do so at their own peril. The world is replete with examples of boards and management committees who have relegated numerical information to the "too hard" basket, or have left it all to the accountants and financial specialists.

No matter how good the management information systems are in a technical sense, they will be useless unless data is interpreted as information and the information is used. That may appear self-evident, but data carries little benefit unless people use it and reflect on it.

 

From data to information

We often carry around an image of managers as the decision-makers in the organization. They receive the data, process it into information, and issue directions to keep the organization on track.(21)

The trouble with such a model is that it ignores the hundreds of decisions people at all levels make every day, which will have some influence on the organization's performance. For example, the managers in a state hospital regularly received reports suggesting that their expenditure on surgical supplies in the wards was higher than comparison with other hospitals would lead them to expect. Based on this benchmark information they issued written instructions to all ward staff to take more care - the usual management memo on cost control. The problem persisted, however, until one day a manager noticed a nurse mopping up some spilled food with a surgical wipe. She asked the nurse if she knew what the hospital paid for the wipes. "About ten cents, I guess," was the reply. When the nurse was told that the price was more like three dollars, her behavior soon changed. The hospital's response was to put price tags, similar to supermarket price tags, on the shelves where the ward supplies were kept. That simple measure, sharing the data with those who made the day-to-day decisions, was far more effective than all the memos and directives.

 

Obsession with trivia

One problem in organizations, particularly government bureaucracies, is an uneven attention to different cost and revenue elements.

Typically, salaries and wages make up around 70 percent of the recurrent costs of a bureaucracy. Although they are the major expense, they tend to fade into the background, because payroll systems are largely automated. A local government with 100 staff does not have the financial controller raising 100 orders, 100 authorities to pay and 100 checks to be signed by the financial delegate every two weeks. The payroll is handled by an automatic payroll system, which will rarely come to the attention of senior management. Labor costs surface only when there is some variation in awards, or an enterprise agreement. Similarly with other recurrent expenditure, such as rent and utilities, which typically comprise another 10 to 20 percent of recurrent costs. With the vast majority (80 to 90 percent) of expenditure automated, attention falls on the non-routine transactions, such as travel and minor capital acquisitions.

Sometimes the aggregation of small amounts into one large bill will cause an undue reaction. Each staff member may use only two dollars worth of stationery a day, but when this is aggregated into a monthly bill for a hundred staff, management sees a stationery bill of $4000 and sends off a missive instructing staff to write on both sides of the paper.

The converse of attention to trivia is inattention to large cost items, particularly labor. A large department in Canberra, in response to the theft of two computers, instituted a rule that all visitors had to be accompanied to and from the front desk by a staff member. There was no costing on this decision. But what is the cost of two hundred people spending, say, an extra ten minutes a day walking to and from the front desk? (Around $400 000 a year.(22)) The decision may have been justifiable in terms of security, but the point is that the analysis was not done. The organization had a sophisticated management information system, but no culture of cost consciousness. Staff were treated as a "sunk" cost.

Whatever data is generated will be of little use unless it is backed up by a culture of cost-consciousness. In government business enterprises and in areas subject to user charges there also needs to be a culture of revenue consciousness. The swimming pool attendant who is rude or indifferent to customers needs to understand how much is lost through his behavior. In short, the organization needs a culture of "value added", a continuous questioning "what does this cost?" and "is it providing commensurate benefit for the customers?".


6.2 Efficiency and effectiveness - the basic elements

When it comes to measuring performance, there are two basic types of indicator: efficiency and effectiveness.

Indicators on their own carry little meaning, unless they are linked to some normative standard, or benchmark. Knowing that our bus service had only three percent of services arriving more than two minutes late is, in itself, of little use unless we have a standard by which to interpret this data.

Standards can come from two sources - historical data from the same agency or data from comparative agencies. Historical data, if it contains financial information, will have to be brought to constant prices using an appropriate deflator (e.g. consumer price index, hourly labor cost) before it can yield meaningful information.

 

Converting to constant prices

The technique of converting historical data to constant prices is simple. All one needs are the relevant statistics and a computer spreadsheet or pocket calculator.

The basic trouble with using historical financial data is that the value of the currency changes over time. A dollar in 1985 bought much more than a dollar does in 1997. In a sense, a 1985 dollar is another currency, as distinct from a 1997 dollar as a US dollar is from an Australian dollar. What we need to bring these historical dollars to today's terms is an exchange rate. That exchange rate we can find by using index numbers.

It is easiest, perhaps, to illustrate by an example. Suppose we are providing a service, and have kept historical records of the direct cost of providing that service, every two years since 1984-85. We have a historical record as shown below:

 

Year     Services provided   Direct cost ($)   Direct cost per service ($)
84-85    56 006              428 850           7.66
86-87    50 025              443 233           8.86
88-89    55 910              463 862           8.30
90-91    55 563              487 875           8.78
92-93    56 099              501 823           8.95
94-95    54 184              523 107           9.65
96-97    50 527              544 116           10.77



The trouble with this data is that it is all in current prices. It is about as useful as data presented in seven different currencies, or data in one foreign currency with seven different exchange rates. What we need is a way of bringing it all to a common basis, preferably 1996-97 dollars (the latest year of the data).

We can generate those exchange rates using index numbers. In this case, for illustration, we will use the consumer price index, available from the Australian Bureau of Statistics, but any number of series can be used for the conversion. (For a labor-intensive activity, for example, we might use a labor cost index.)

Year     Consumer Price Index (Base 1989-90 = 100)
84-85 67.8
86-87 80.4
88-89 92.6
90-91 105.3
92-93 108.4
94-95 113.9
96-97 120.3


What this means is that a "basket" of goods and services which would have cost $67.80 in 1984-85 dollars would have cost $80.40 in 1986-87 dollars, $92.60 in 1988-89 dollars, and so on up to $120.30 in 1996-97 dollars. We can use this data to derive a set of "exchange rates" to bring the data to constant prices. For example, the "exchange rate" to convert from 1984-85 dollars to 1996-97 dollars is 120.3/67.8, or 1.77. The "exchange rate" to convert from 1986-87 dollars to 1996-97 dollars is 120.3/80.4, or 1.50. And so on. We can therefore bring the whole series to constant prices.

The completed table is below. We can see that the cost per service has fallen in real terms, from around $13.50 to around $10.75. This looks like a once-off productivity gain, perhaps of the kind that occurs with the introduction of new technology.

Year     Direct cost per service, current $   CPI (Base 1989-90 = 100)   "Exchange rate" to 1996-97 $   Direct cost per service, 1996-97 $
84-85    7.66                                 67.8                       1.77                           13.59
86-87    8.86                                 80.4                       1.50                           13.26
88-89    8.30                                 92.6                       1.30                           10.78
90-91    8.78                                 105.3                      1.14                           10.03
92-93    8.95                                 108.4                      1.11                           9.93
94-95    9.65                                 113.9                      1.06                           10.20
96-97    10.77                                120.3                      1.00                           10.77
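
The whole conversion is only a few lines in a spreadsheet or a short script. Below is a minimal sketch in Python, standing in for the spreadsheet or pocket calculator mentioned earlier; it reproduces the converted figures (to within rounding), and any suitable index series could be substituted for the CPI values.

```python
# Convert a series in current prices to constant 1996-97 prices,
# using the figures from the tables above.
years = ["84-85", "86-87", "88-89", "90-91", "92-93", "94-95", "96-97"]
cost_per_service = [7.66, 8.86, 8.30, 8.78, 8.95, 9.65, 10.77]  # current prices
cpi = [67.8, 80.4, 92.6, 105.3, 108.4, 113.9, 120.3]

base = cpi[-1]  # bring everything to 1996-97 dollars
for year, cost, index in zip(years, cost_per_service, cpi):
    rate = base / index  # the "exchange rate" to 1996-97 dollars
    print(f"{year}: {cost:.2f} x {rate:.2f} = {cost * rate:.2f} in 1996-97 dollars")
```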


Efficiency indicators - interpreting historical changes

The notion of efficiency sounds simple enough - output divided by input. But what does it mean? Apart from the inflation effect referred to above, there can be many interpretations. For example, the unit cost of a service may fall because of scale economies. This is a legitimate efficiency gain, but it hardly justifies a celebration of sound management if it has occurred simply because use of the service has grown with the population.

If efficiency is measured by dollars in the numerator and denominator, then gains may come about because of increases in output prices. For example, an electricity authority may report an increase in sales per dollar of expenses from $1.10 to $1.15, but this may have come about as a result of monopoly pricing.

One approach to this problem is to use physical units of output as the indicator - say, kWh of electricity generated. That overcomes the pricing problem, but it is possible only in an entity with a reasonably homogeneous output, and physical measures carry their own distortions: a count of bookings per parking inspector, for instance, may simply reflect a tendency to go for the easier cases - perhaps those closest to the depot.

As a case in point, in one Australian city, traders in a small shopping center were complaining that there was insufficient attention from parking inspectors. The inspectors responded that, on a cost-benefit approach, for a given effort they could book more cars in city car parks close to their depots. It would be a waste of resources, they argued, to book, say, two cars in a suburban shopping center when for the same effort they could book five cars downtown. The written objective of the agency, however, was to facilitate orderly parking throughout the city - an objective that had been displaced by the performance indicator. A performance indicator is not an objective.

Another distortion in performance indicators comes when input prices change. An agency may do nothing different physically, but because of a fall in the price of a key input its apparent efficiency may improve. An electricity authority may find the price of raw energy (e.g. natural gas) falls. This leads to a fall in input costs, but it hardly represents an efficiency gain. A fall in wages may result in lower input costs, but it does not result in any improvement in labor productivity.

One way around such a problem is to express productivity in terms of physical outputs and inputs. This leads to measures such as kWh per employee. Such measures are fine if there is essentially one input, but in capital-intensive industries they may produce distorted figures; if capital displaces labor, then labor productivity will rise while capital productivity falls. If some functions are contracted out, apparent labor productivity may rise, but all that has happened is that some labor has been transferred to the contractor and removed from the agency's denominator. A further refinement, total factor productivity, addresses the multiple-input problem, but it requires the assignment of somewhat arbitrary weights to the factors of production.
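
The contracting-out distortion, in particular, is easy to demonstrate. A minimal sketch with purely hypothetical figures:

```python
# Labor productivity before and after contracting out. Nothing physical
# changes; ten jobs simply move to the contractor's payroll.
output_kwh = 1_000_000           # hypothetical physical output, unchanged
in_house_before = 50             # employees before contracting out
in_house_after = 40              # employees after ten jobs are contracted out

print(output_kwh / in_house_before)  # 20 000 kWh per employee
print(output_kwh / in_house_after)   # 25 000 kWh per employee - an apparent
                                     # 25 percent "gain" with no physical change
```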


Interpreting comparative performance data

If the problems of historical performance data seem daunting, what is the scope for interpreting data between different agencies? For example, the table below shows comparative performance data for Australia's urban buses.

Australian Bus Performance Indicators - 1995

Derived performance indicators       Adelaide   Brisbane   Canberra   Perth    Sydney
Km Per Vehicle                       50 431     49 777     46 966     51 752   43 585
Boardings Per Vehicle ('000)         62         74         54         51       89
Accidents Per Vehicle                1.3        3.7        0.5        n/a      1.8
Boardings Per Person in City         46         72         79         38       75
Boardings Per Person in Catchment    48         72         79         38       75
Boardings Per Employee               25 587     32 728     29 024     27 786   41 845
Revenue Per Employee ($)             1 902      35 543     25 718     28 531   66 758
Expenses Per Employee ($)            73 209     68 605     86 346     77 670   70 823
Deficit Per Employee ($)             71 307     33 062     60 628     49 139   4 065
Revenue Per Passenger                $0.07      $1.09      $0.89      $1.03    $1.60
Expenses Per Passenger               $2.86      $2.10      $2.97      $2.80    $1.69
Deficit Per Passenger                $2.79      $1.01      $2.09      $1.77    $0.10
Deficit Per Person in City           $128       $62        $164       $68      $4
Admin/Total Expenses                 13%        9%         4%         10%      7%
Sick Days/Staff                      8          14         n/a        n/a      11

Source: Derived from Australian City Transit Association, City Transit Year Book 1996


Meaningful interpretation is difficult. There are accounting differences; Sydney counts its subsidies for community service obligations as "revenue"; other states do not. Sydney has a comprehensive urban rail system; other cities have less developed rail systems. Sydney's buses have many short hauls through busy streets, often as distributors to and from railway stations; other cities have longer haul routes. As Professor Bob Walker of the University of NSW has said, "benchmarking is fine when there is a world standard urban geography".

And what meaning can be attributed to indicators such as "km per vehicle"? Is a rise in this indicator good or bad? At first sight we would assume it is positive, but what if it is achieved through cutting back on the difficult tasks, restricting bus services to routes where a higher average speed can be maintained?

Some financial performance indicators may be influenced by cost shifting or creaming. For example, a library may achieve scale economies through central location, but most efficiency indicators will not measure the extra travel costs which have been shifted on to the clients. Creaming refers to the practice of taking the easy jobs, leaving the harder jobs to other work units, or perhaps not doing them at all.

 

Effectiveness indicators

Effectiveness indicators are even harder to interpret. To what factors does one attribute an improvement in outcomes? For example, Australia's road toll has been falling over the years. Is this due to safer cars, better roads, more effective policing, or changes in drink-drive laws? There are statistical techniques, such as multivariate analysis, which can go some way toward isolating the various contributory factors, but they do not yield very robust measures and they are not practical as day-to-day management tools.

Indicators which measure client satisfaction are useful, but they have their limitations. It is hard to survey dissatisfied clients who have stopped using a service. A municipal library may survey its customers and find high satisfaction, but such information is of little use if the survey excludes non-users: a survey of current users' satisfaction with opening hours, for example, is almost useless, because current users are by definition those who can manage the existing hours.

Sometimes positive satisfaction may result from people not knowing that performance could be better. Legend has it that East Germans were very satisfied with the locally made car, the Trabant - at least, they were until they had exposure to the Volkswagens, Mercedes and Audis from the west. The Trabant factor is alive and well in relation to government services. If users have been conditioned by generations of mediocre service, and are not aware of the possibilities of improvement, they will be satisfied.

Sometimes an organization will have conflicting objectives. A water supply authority, for example, may have both commercial and conservation objectives. Commercial objectives may indicate a need for expansion to achieve scale economies. Conservation objectives may point in the opposite direction.

So long as managers, staff and other stakeholders know the limitations of performance indicators, and use them to point to the need for further inquiry, they are useful. Once they start to drive the organization, however, they can promote behavior which is against the interests of the organization and its clients.

 

6.3 Management information systems

No discussion of performance analysis is complete without some reference to management information systems.

There is a view of management information systems as big systems, which assemble all available information in one "control room". The manager can interrogate the system in any way he or she chooses - compare this year with last year, plot a trend line, provide a cash projection.

Such systems can rapidly overload a manager with data. They are the on-line versions of the old half-meter-thick folders of computer printouts.

Four sound principles should govern the preparation of management information.

The first is the notion of a hierarchical arrangement of information. Most packages allow for such an arrangement - data at program level, which can be broken down to sub-program, to element, and so on.

The second principle, not so well adhered to, is that of management by exception. The principle of management by exception is "don't tell me when everything is going to plan; tell me only of the variances".

That does not mean highlighting only the bad news, such as cost overruns or revenue shortfalls. It also means highlighting situations where expectations have been exceeded in a positive sense.

The third principle is that the system should be capable of learning. If, for example, there is seasonality in a regular item of expenditure (e.g. park maintenance or power for street lighting), then the system should learn from past patterns, so that it does not report a high use of lawn mowing in spring, or a low use of electricity in summer, as a variation from budget. (A sketch following the fourth principle illustrates this.)

The fourth principle is about the time frame of data. It is very easy to manipulate data to give impressive short term results. Skimping on maintenance, poaching staff, cost shifting, overworking staff - all these techniques are available to the manager who wants to generate impressive short-term performance information. The systems themselves cannot shoulder the blame for short-termism, but good system design can help managers take a longer term view.
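
A minimal sketch of the second and third principles in combination, with hypothetical figures: the system stays silent while spending tracks the seasonal pattern, and speaks up only on genuine variances.

```python
# Management by exception against a seasonal baseline: compare each month
# with the same month last year, and report only material variances.
last_year = {"Oct": 8000, "Nov": 12000, "Dec": 15000}   # mowing costs peak in spring
this_year = {"Oct": 8400, "Nov": 19000, "Dec": 14800}
TOLERANCE = 0.15   # report only variances beyond 15 percent

for month, actual in this_year.items():
    expected = last_year[month]               # the "learned" seasonal expectation
    variance = (actual - expected) / expected
    if abs(variance) > TOLERANCE:             # silence when all goes to plan
        print(f"{month}: {variance:+.0%} against the seasonal baseline")
```

Only November is reported; the spring peak itself raises no alarm.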

 

Standard cost systems

Some organizations have highly formalized management information systems known as standard cost systems. These attempt to integrate financial and management accounting, a standard cost being essentially a normative cost which managers should achieve. For example, in a factory making auto components, a hub cap may have a standard direct cost of $5.14: $3.58 of materials, plus 0.05 hours of labor at a standard rate of $31.20 an hour, coming to $1.56. These would all be measured carefully by work study engineers, using precise measures of standard material use and standard production times taken with stopwatches or micromotion studies. If the work team works a bit harder, under the incentive of piece work, and produces more hub caps, there is a credit to the factory account; if there is steel wastage, there is a debit.
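
The variance arithmetic itself is simple. A minimal sketch using the hub cap standards above, with invented actual figures:

```python
# Standard cost variances for a batch of hub caps.
STD_MATERIAL = 3.58    # standard material cost per unit ($)
STD_HOURS = 0.05       # standard labor hours per unit
STD_RATE = 31.20       # standard labor rate ($/hour)
STD_COST = STD_MATERIAL + STD_HOURS * STD_RATE   # $5.14, as in the text

units = 1000                # hypothetical batch size
actual_material = 3700.00   # dollars of material actually consumed
actual_hours = 48.0         # hours actually worked

material_variance = units * STD_MATERIAL - actual_material       # -120.00: a debit (wastage)
labor_variance = (units * STD_HOURS - actual_hours) * STD_RATE   # +62.40: a credit

print(f"material variance: {material_variance:+.2f}")
print(f"labor variance: {labor_variance:+.2f}")
```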

These debits and credits, or variances, are the instruments of cost control - in accordance with the philosophy of management by exception. In some factories huge computing capacity is devoted to the production of variance reports, and weekly meetings of up to half a day may be spent tracing the reasons for the variances. There are attempts to shift blame ("We didn't waste steel; purchasing is to blame for buying from a lousy supplier, and we had to waste ten percent of it - debit it to their variance account") or to claim credit, but often little attempt to fix the problems, or to track the real causes of the positive variances. And, of course, no-one looks at the time wasted in these managerial squabbles; these are all 'managerial overheads', which are not controlled on a day-to-day basis.

Standard cost systems are suitable for organizations with a very homogeneous output. They may be suitable for some business units within a local government. Like all big systems, they are often inflexible and they need care in interpretation.

 

Framing and biases

Managerial behavior is likely to be influenced by the way in which information is framed. The same data can be presented in two different ways, with the possibility of two different interpretations.

For example, the statements "95 percent of the 60 food service outlets met hygiene standards" and "3 of the 60 food service outlets did not meet hygiene standards" are identical in content, but are likely to evoke different reactions, because one is positively framed and the other is negatively framed.

There are also biases in interpretation of data. In spite of what we know about statistical probability, we often interpret random events as having some underlying pattern. In his study of managerial decision-making, Max Bazerman identifies a number of biases.(23) Three of those biases are relevant in use of structured information.

 

(1) Misjudgement of the probability of random events.

Consider the following exercise. In five sequential tosses of a coin, rank the probability of the following sequences:

(A) H T H H T

(B) H T H T H

(C) H H H H H

Most people judge C to be the least likely, followed by B, then A. That is because C has more apparent pattern than B, which has more than A. But each sequence occurs with probability 1/32 in five tosses of a fair coin.
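
Anyone doubting this can simply enumerate the possibilities. A short sketch:

```python
# All 2**5 = 32 outcomes of five fair tosses are equally likely;
# each of the three sequences appears exactly once among them.
from itertools import product

outcomes = list(product("HT", repeat=5))
for seq in ("HTHHT", "HTHTH", "HHHHH"):
    count = outcomes.count(tuple(seq))
    print(f"{seq}: {count} of {len(outcomes)}")   # each prints 1 of 32
```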

Two examples illustrate the policy consequences of this bias. If motor vehicle accidents are randomly distributed, some intersections will still record two or three accidents; this does not necessarily indicate a problem at those intersections. Similarly, some workers will be genuinely sick on two successive Mondays.
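
A small simulation shows how randomness alone manufactures apparent "black spots". The figures are hypothetical:

```python
# Scatter 100 accidents at random across 100 equally risky intersections.
import random
from collections import Counter

random.seed(1)
accidents = Counter(random.randrange(100) for _ in range(100))
print(accidents.most_common(3))   # a few intersections collect three or four
                                  # accidents purely by chance
```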


(2) Regression to the mean

We are probably familiar with the notion that encouragement of good performance is more effective in motivation than discouragement of poor performance. But in a carefully controlled experiment, some Air Force flight instructors found that praise for an exceptionally smooth landing was typically followed by a poorer landing on the next try, while harsh criticism after a rough landing was usually followed by an improvement on the next try. They concluded that verbal rewards are detrimental to learning, while verbal punishments are beneficial.

How do we explain this departure from what we know about educational psychology?

The answer lies in the normal spread of outcomes in any human endeavor. Performance is often randomly distributed around a mean. Exceptionally bad or exceptionally good performances are outliers, and are likely to be followed by more normal performance. (Think of the times you have had good or bad luck in some sporting endeavor.)

The problem in organizations is that managers may over-react to chance results. A manager who has had a good period thanks to good luck will know that, in all probability, things will revert to normal in the next period, and that expectations will not be met. Companies with very good profit performance will often declare a "special dividend", using some excuse such as a 50th anniversary; they do not increase their ordinary dividend because they do not want to raise expectations of a repeat performance.
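
A simulation makes the flight instructors' fallacy plain: even when landing quality is pure chance, and feedback has no effect whatever, an extreme landing is usually followed by a more ordinary one.

```python
# Landings as independent random draws; look at what follows the extremes.
import random

random.seed(2)
landings = [random.gauss(0, 1) for _ in range(10_000)]   # quality scores, pure chance

after_good = [landings[i + 1] for i in range(len(landings) - 1) if landings[i] > 1.5]
after_bad = [landings[i + 1] for i in range(len(landings) - 1) if landings[i] < -1.5]

print(sum(after_good) / len(after_good))   # near 0: "worse" than the praised landing
print(sum(after_bad) / len(after_bad))     # near 0: "better" than the criticized one
```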

(3) Conditioned perception

Look at the following:

How many members of each species did Adam take with him on the Ark? (Note that the question is how many members of each species, rather than how many species.)

 

BIRD
IN THE
THE HAND


Most people miss the errors in the above pieces of writing. That is understandable; we are very efficient at filtering out unnecessary information and getting to the core of a message. That is why, in proofreading, we often miss errors in the greeting of a letter, for instance.

In terms of management information, it is particularly important that the unusual be highlighted and the routine downplayed. Otherwise people will disregard important information, just as they disregard a fire alarm which goes off too often. Again, this illustrates the importance of management by exception.


6.4 Decision support systems

Most formal management information systems are about presenting historical information, or projections of established trends.

Of more importance, particularly when decisions have to be made on commitment to projects or other capital expenditure, is the availability of adequate information on which to support a decision.

We can never have all the information we would like to make a decision about a project. At some stage we have to limit our research and decide to decide. In economists' terms, the marginal cost of extra information starts to outweigh the marginal benefit of the extra information.

Usually the research on a project is done by specialists, sometimes internal, sometimes external consultants. Quite often they will present a report with an "executive summary" at the front. (Whatever the adjective "executive" means in this context!) This summary will have a bottom-line recommendation; further examination of the report will reveal in-depth analysis pointing to the preferred option, with cursory dismissal of other options. It will usually be presented to decision-makers a day or so before a meeting, along with hundreds of other pages of agenda items.

What such research often lacks is sensitivity analysis. Sensitivity analysis is an acceptance of the fact that we cannot predict the future, that we cannot do comprehensive research, and that we have to make wide estimates of certain variables. For example, a project proposal may include a sensitivity table of net present values.

Net Present Values of Options ($m)

                                                Option A   Option B
Base case (revenue $4 m pa, discount rate 6%)   12.2       9.7
Revenue projections exceeded by 10%             16.8       11.2
Revenue shortfall of 10%                        6.1        8.8
Discount rate 8%                                7.2        8.3

Presented with such a table, the decision-maker may ask "Yes, but what's your bottom line?" The point is, there is no bottom line. The decision-maker is presented with an honest range of uncertainty. In the example above, Option A looks better if all goes as predicted, but it is the riskier venture; Option B may be less profitable, but it is safer. The onus of decision is put back on the decision-maker.

Even more responsibility can be placed on the decision-maker with a dynamic computer model, in which the decision-maker can alter key variables and develop scenarios. Such models are easy to generate in spreadsheets such as Lotus or Excel.
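
Such a model need not be elaborate. A minimal sketch in Python rather than a spreadsheet; the outlay, horizon and cash flows here are hypothetical, not the figures behind the table above:

```python
# A toy decision-support model: the decision-maker varies the assumptions
# and watches the net present value respond.
def npv(rate, cashflows):
    """Net present value of annual cash flows, year 0 first."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cashflows))

def project(revenue_pa, rate, outlay=20.0, years=10):
    """Hypothetical project: an outlay now, then level net revenue ($m)."""
    return npv(rate, [-outlay] + [revenue_pa] * years)

for revenue in (3.6, 4.0, 4.4):       # 10% shortfall, base case, 10% above
    for rate in (0.06, 0.08):
        print(f"revenue ${revenue}m pa at {rate:.0%}: NPV {project(revenue, rate):.1f}m")
```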

These innovations do not relieve the researcher of the responsibility to do sound research, and, because they may involve some computer modelling, they can require more effort than a simple take-it-or-leave-it report. Their benefit is that they place more responsibility on the decision-maker, who has the opportunity to learn more about the proposal and to get a feel for the key variables.

In general, in any project involving outlay and revenue, the most important variable is the revenue. Outlays can be forecast reasonably well, and variations in discount rates are not large. The big unknown is usually revenue, and that should have the widest sensitivity range.


Exercises

1. How might you measure the efficiency of

a photocopy machine?

a lawn mowing service?

a trash collection service?

the council staff who prepare policy advice?

a meals on wheels agency?

the municipal library?



2. For your own agency or work unit, think of as many ways as possible of generating impressive short-term performance data, while failing to meet your basic objectives or destroying long-term value.

 

Notes

19. Steering Committee for the Review of Commonwealth/State Service Provision, Report on Government Service Provision (AGPS 1997).

20. For an excellent description of this phenomenon, see Peter Reuter, "The Social Costs of the Demand for Quantification", Journal of Policy Analysis and Management, Summer 1986.

21. Many management texts still refer to control as one of the four functions of management, the others being planning, organizing and directing.

22. That is, 200 people x 200 working days x 1/6 hour per day x $60 per hour.

23. Max Bazerman, Judgement in Managerial Decision-Making (Wiley 1986).
