While working on a client engagement, I proposed to a manager that we set up a dashboard for her and the team to reflect on the department’s performance and systematise improvement efforts in the unit. We defined the key performance metrics and populated the dashboard with the measurements collected during my support visits. But as time progressed, the activity fell flat.
This was because the required data was either incomplete or not recorded at all. The Unit Managers, the key process stakeholders, had agreed to provide this data and were looking to see improvements. It was puzzling because they stood to benefit: the data would help track the quality of service to their units, and at the time they raised no objections or concerns about supplying it.
I had seen this pattern repeat across a number of clients, and it really got me wondering: “How, and what, are managers managing when they don’t have visibility of their processes?”
While I was still struggling with this question, I found myself blurting out to the manager I was working with: “What is the difference between doing the work and managing the work?” The question popped out before I had even constructed my own answer, in case she put it back to me. From her response it was clear that this was something to which she hadn’t given much thought, and sure enough, the question was posed back to me. Of course, my response may not be the most comprehensive answer, since it is primarily shaped by my role and experience as an improvement practitioner.
Doing the work results in the transformation of something (physical or non-physical). It is made up of actions that take something from an initial condition and transform it into a modified condition that is beneficial to the next person in the value chain. The recipient is either an internal customer or the external client who pays for the service or product. On the other hand, managing the work is concerned with transforming the transformation process in order to improve it (best case scenario). In other cases managing the work focuses on maintaining the transformation process by ensuring that it is in control. Those who do the work have a direct impact on the customers and those who manage the work have an indirect impact on customers, through their direct impact on those who do the work.
It is generally easy to see the tools and methods used by those who do the work, since they are applied directly in the process of service delivery. What about those who manage the work? What tools do they use, and how do they improve customer service? In many cases those who manage the work use management theory, coupled with data and facts about their operations. This may be why I was puzzled when the Unit Managers did not support the initiative to collect the requested data. I do see that the Unit Managers, like many other managers, work with data, and some may already be overwhelmed by the amount of data and reporting they have to handle. Perhaps that’s why they felt discouraged from taking on yet another reporting request.
Although those who do and manage the work are collecting lots of other data, they are not collecting data to drive and support improvement. Based on my observations, most of the collected data is control data, and measurements are used as lag-indicator metrics. The control data I have seen managers focusing on includes shift rosters, staff clocking data, staff leave data, registers for handovers and receiving, stock and orders. In many cases the metrics report on the past and usually don’t provide deeper insight into what to improve and how to improve it. The common metrics include units completed, number of defects, number outstanding or late, efficiency ratios, and so on. Of course these metrics are important, but the gap is that they are usually not used to track performance trends and drive proactive corrective actions. Where they are used for improvement, it is usually for localised improvements rather than systemic ones.
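As a hypothetical illustration of turning a lag indicator into a trend signal, consider weekly defect counts. The data, the window size and the comparison rule below are all illustrative assumptions, not a prescription; the point is simply that the same lag metric, tracked over time, can prompt proactive review before a problem becomes entrenched:

```python
from statistics import mean

def trend_alert(weekly_defects, window=4):
    """Flag when the recent average of a lag metric is worsening.

    weekly_defects: counts per week, oldest first (illustrative data).
    Returns True when the latest window's average exceeds the average
    of the preceding window, suggesting a worsening trend.
    """
    if len(weekly_defects) < 2 * window:
        return False  # not enough history to compare two windows
    recent = mean(weekly_defects[-window:])
    previous = mean(weekly_defects[-2 * window:-window])
    return recent > previous

# Hypothetical data: defects per week over twelve weeks.
history = [5, 4, 6, 5, 4, 5, 6, 7, 8, 7, 9, 10]
print(trend_alert(history))  # → True: the later weeks average higher
```

A simple rule like this is no substitute for proper control charts, but even a crude comparison shifts the conversation from reporting the past to asking what to correct next.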
Collecting lots of data will not necessarily lead to improvement, so we should take the words of W. Edwards Deming to heart when we set up our dashboards.
I selected the quotations in this article to highlight that although it is possible to measure many things, it is not necessary to measure everything. But how does one narrow down and focus on key metrics? My proposal is to use at least three metrics that provide a summary assessment of how your team or department is performing from a systems thinking perspective.
The open systems model (Katz and Kahn version shown in the figure below) gives us a general framework for assessing how an organisation is performing in relation to its goals. What I like about this depiction of an open system is that it makes it possible to see that the output of the system should be viewed within the context of the external environment. For defining metrics, this means our metrics should focus on our performance in relation to the promise made to the external environment. So our first and key metric should be how we are performing against the service level agreement (SLA), and therefore the customers’ experience of the organisation.
If we are not a monopoly, how we perform in relation to our customers’ expectations will affect the demand for our services. Hence the next key metric should be customer demand. This metric provides two insights. The first is whether our key assumptions about what customers expect from us were correct or incomplete: if we are meeting our SLA and yet demand is not growing, there may be other things in the environment which we don’t understand (assuming the economic cycle has not changed). The second insight is whether or not we have sufficient capacity to meet the demand without compromising the SLA, so that plans can be made to safeguard the SLA while meeting growing or variable demand.
The third metric should come from the operations of the team/department. This is probably where the question of where to focus comes in, since there are many variables that can be measured. An understanding of systems theory is very helpful here because it is from this that we recognise that the performance of a system is determined by the performance of the system’s constraint. The terms ‘bottleneck’ and ‘constraint’ are sometimes used interchangeably but it is very important to know that they mean different things, and understanding the difference is fundamental to managing systems (a topic for another day perhaps).
So our third metric should inform us about the performance and capacity of the system’s constraint. In particular, we want to look at the relationship between customer demand and the constraint’s capacity. If the constraint is unable to manage the demand, we need actions to close this gap. In addition, if the performance of the constraint depends on multiple conditions, we would need to extend our metrics to include those conditions, because anything that compromises the constraint compromises the performance of the system. Therefore, at minimum, our dashboard will have three key metrics: performance against SLA, customer demand and performance of the constraint. The need for additional metrics will be determined by the constraint’s relationship with other components of the system.
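The three-metric dashboard described above can be sketched in a few lines of code. This is a minimal illustration under assumed names and thresholds (the `95.0` SLA target, the field names and the sample figures are all hypothetical), not a definitive design; the value lies in the small number of metrics and the questions they raise:

```python
from dataclasses import dataclass

@dataclass
class DashboardSnapshot:
    """Minimal three-metric dashboard: SLA, demand, constraint."""
    sla_met_pct: float        # % of work delivered within the SLA
    customer_demand: int      # units of demand received this period
    constraint_capacity: int  # units the constraint can process this period

    def flags(self):
        """Return the systemic issues this snapshot surfaces."""
        issues = []
        if self.sla_met_pct < 95.0:  # hypothetical SLA target
            issues.append("SLA at risk")
        if self.customer_demand > self.constraint_capacity:
            issues.append("demand exceeds constraint capacity")
        return issues

# Hypothetical period: SLA slipping and the constraint overloaded.
snapshot = DashboardSnapshot(sla_met_pct=92.0,
                             customer_demand=120,
                             constraint_capacity=100)
print(snapshot.flags())
```

Even at this level of simplicity, the snapshot links the external promise (SLA), the environment (demand) and the internal limit (constraint), which is the systemic view the article argues for.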
In this age when people are talking about big data, and dashboards are all the hype, it is easy to get lost in the details. I hope this article provides some food for thought for those feeling overwhelmed by the demands for measurement everywhere. It’s important to remember that organisations are systems, and how we use metrics should reflect this understanding. As a manager, what are you measuring, and how are you finding data collection?