Performance management

Numbers, local operations management and local operations

The local operational leader's "blind spot"

Performance management is about making the important measurable, not about making the measurable important

In recent decades, most Danish companies have successfully used the great advances in information technology to improve and extend their internal management information systems.

This is largely a positive development, which has generally made it much easier for the individual leader to access large amounts of information, thus strengthening the leader's basis for decision-making.

However, in the pursuit of numbers, leaders sometimes seem to forget the good old rule of thumb:

Performance management is about making the important measurable, not about making the measurable important!

The point is, firstly, that what provides the leader with a solid basis for decision-making is not access to large amounts of data as such, but rather focus on the small amount of essential information – grounded in a sound, valid business understanding. Secondly, that small amount of essential information must also be available in a reliable form if it is to provide the leader with a solid basis for decision-making.

This may seem evident, but in our experience, many organisations still face challenges – not least in the connection between the goals in the management information systems and the operations management and operations optimisation performed by the local operational leader. We see that some local operational leaders have a kind of "blind spot": their "visibility" into the different systems is restricted by a lack of validity and reliability in the underlying numbers.

The challenge becomes even more interesting considering that the company – and not least the local operational leader – can achieve a lot with a little common sense. In this article, we share our view of this common sense: the first part briefly introduces a couple of central principles and models, and the last part discusses their practical application. We have made a point of supporting both the principles and their application with practical, illustrative examples.

Thomas Lauridsen
+45 4138 0018

The principles of validity and reliability originate from Roger Martin, who has written a number of excellent contributions on the subject, e.g. the article "Validity vs. Reliability: Implications for Management"1. In the article, Roger Martin emphasises that it is about creating room BOTH for validity, represented by the subjective business understanding in the form of the right "gut feeling", AND for reliability, represented by the objective, number-based and "demonstrable" measurement and control systems that help us create predictability and plan the daily work.

Without this balance, a purely validity-driven company will be weak on rational control mechanisms such as solid budgetary and financial control tools, quality management systems, resource planning systems and the related forecasting tools. A purely reliability-driven company, on the other hand, will typically underestimate the importance of creativity, client intimacy, a robust strategic focus and the creation of meaningful job content – as well as the holistic perspective needed to run a successful company.

The advantage of a balanced approach is that the two principles actually complement each other and render each other's pitfalls visible, enabling the company to avoid those pitfalls before it is too late.

Consequently, the principles of validity and reliability are highly relevant to how the company's leadership team is composed, how management information systems are built, etc. The challenge is that we are dealing with opposing mechanisms that are difficult to balance, as they tend to work at cross-purposes.

If we give it some thought, most of us will be able to find examples of Roger Martin's points about validity and reliability in our own organisations, even all the way down to the operational level. This also holds true for the case example below, from a large Danish service organisation.

Validity and reliability in a large Danish service organisation

One of the central KPIs in the management reporting of a large Danish service organisation showed that delivery times to the client for a given service averaged 1–2 working days, meeting the success criterion of a maximum of 3 working days. At the same time, however, the organisation experienced a number of client complaints, indicating that not all clients were completely satisfied with the delivery times.

The leaders were surprised to learn this – especially given the "demonstrably" short delivery times. A manual data collection over a 9-week period revealed that the actual average was in fact 3–4 working days, and that approx. 20% of the clients waited more than 10 working days. The difference between the management information system and the clients' actual experience of the delivery times could be traced back to a number of specific causes, including:

  • Too narrow a control focus on averages
  • Internal rework that was not counted in
  • A practice of not creating the case immediately
  • Problem cases opened and reopened between departments
  • Time gained through temporary solutions
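
The first pitfall – steering on averages – can be illustrated with a small sketch. The delivery times below are hypothetical, chosen only to mirror the pattern the manual data collection uncovered: an average that looks acceptable while a tail of slow cases drives client complaints.

```python
# Hypothetical delivery times in working days for 20 cases (not the
# organisation's actual data): most cases are fast, a few are very slow.
delivery_times = [1] * 12 + [2, 3, 3, 3] + [11, 11, 12, 13]

average = sum(delivery_times) / len(delivery_times)
share_over_10 = sum(1 for t in delivery_times if t > 10) / len(delivery_times)

# The average alone hides the tail that the clients actually experience.
print(f"Average: {average:.1f} working days")      # 3.5
print(f"Share over 10 days: {share_over_10:.0%}")  # 20%
```

A KPI that also tracks the share of cases beyond a tolerance threshold (or a high percentile) would have surfaced the problem that the average concealed.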

These observations immediately prompted discussions about how to make the measurement system more reliable, resulting in a number of proposed complex and expensive IT-based solutions. However, the discussions about the reliability of the measurements soon died down, as the data collection also revealed that the clients' actual need was linked to on-time delivery in connection with month-end closing.

This was because late delivery could cost the client a good deal of money, whereas early delivery had no significant financial effect for the client. To the client, speed of delivery was thus secondary to on-time delivery. The employees in the local operations department had long since discovered this client need and consequently tried, to the extent possible, to "empty the shelves" towards the end of the month – with the result that some cases, often unproblematically, were left on the shelf from the beginning of the month for up to a couple of weeks before being processed.

However, there were frequently a number of cases that could not be closed on time due to late start-up, late arrival, lack of information, poor control and other challenges – which gave rise to the client complaints experienced. The discussions made it clear that the client's experience of on-time delivery could quite simply be controlled and improved by monitoring and reducing the number of non-closed cases ("empty shelves") in the days right before month-end closing.
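
The "empty shelves" indicator described above can be sketched as follows. The case identifiers, dates and deadline are hypothetical, invented for illustration; the point is only the mechanism: count the cases not closed by month-end, since those are the ones that cost the client money.

```python
from datetime import date

# Hypothetical case records mapping a case id to its closing date
# (None = still open). The deadline is the month-end closing date.
month_end = date(2024, 3, 31)
cases = {
    "A-101": date(2024, 3, 18),  # closed early: no downside for the client
    "A-102": date(2024, 3, 30),  # closed just in time
    "A-103": date(2024, 4, 2),   # closed late: costs the client money
    "A-104": None,               # still open at month-end
}

# The leading indicator: cases not closed by the month-end deadline.
not_closed_by_deadline = [
    case_id for case_id, closed in cases.items()
    if closed is None or closed > month_end
]

on_time_rate = 1 - len(not_closed_by_deadline) / len(cases)
print(not_closed_by_deadline)            # ['A-103', 'A-104']
print(f"On-time rate: {on_time_rate:.0%}")  # 50%
```

Watching this count in the days right before month-end – rather than the average delivery time – is the simple control the discussions arrived at.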

The case illustrates, in its own quite simple way, how uncritical handling of data at different points in the process gave rise to a number of very different leadership conclusions, such as:

The numbers are right, so the client is wrong ("the blind spot")
