The world has become consumed with measurement. Nearly everyone I know has a smartwatch, smart ring, pedometer, or some other device that measures their bodily functions.
High-growth organizations exemplify that trend. At their best, metrics answer a few basic questions that help management adjust and optimize their work:
1. Is the business moving in the right direction?
2. Is the strategy valid?
3. Are we executing the strategy well?
These break into two classes of measurements. Some are used like health indicators in medicine, such as blood pressure or cholesterol. Whether we are viewing them as investors or as managers, we use them to ensure nothing is amiss and expect them to fall within a predetermined “healthy” range. When they don’t, that is, when we see an anomaly, we investigate further.
Those kinds of metrics usually address question 1: what direction are things going? They apply to most businesses: revenue, profit, run rate, and the like.
Affirmative vs. Anomaly
Metrics become less generic in both medicine and business as you try to answer questions 2 and 3. For a cancer patient, the oncologist performs regular tests to confirm the treatment is working. If the bloodwork isn’t moving in the right direction, she changes the treatment. That tests the strategy’s validity. The hypothesis was that drug X would slow growth. It hasn’t. The strategy is wrong.
Most strategic and performance metrics are like that. They’re intended to show that the strategy is valid and that execution is moving apace. To test execution, we measure the pace of initiatives. And we confirm our strategic theory when the progress of those initiatives correlates with more revenue, sales, or customers. We are not looking for anomalies but for expected trends.
The Cascade
In my own work with clients, we develop a strategy map and then extrapolate a scorecard that measures its components. Those components are meant to be inputs that ultimately produce the goals. As the scorecard “cascades” from the C-suite down the operational hierarchy, so do the corresponding measurements.
Today, most organizations use some kind of cascade. It is an attempt to measure each rung of a cause-and-effect ladder. For example, one of my clients is a data marketplace. They make money whenever anyone buys or sells data on their platform. You can imagine a strategic chain of metrics testing this proposition: Sales reps contact customers more, customers attend more demos and discover new use cases, they transact more, revenue rises.

That chain is testable. If you track the number of customer calls, demos, and use cases, you will find a correlation, or you won’t. Better yet, individual activities feed neatly into the chain, so you can extend the metrics to assess sales reps: measure each rep’s calls to customers and see how many extra transactions result.
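To make “testable” concrete, here is a minimal sketch, with invented monthly numbers and hypothetical column names, of how you might check whether the chain’s activity metrics actually move with revenue:

```python
import pandas as pd

# Hypothetical monthly data for the marketplace example.
# All figures are invented for illustration.
df = pd.DataFrame({
    "rep_calls":    [120, 135, 150, 160, 175, 190],
    "demos":        [30, 34, 41, 45, 52, 58],
    "transactions": [400, 430, 480, 510, 560, 600],
    "revenue":      [200_000, 215_000, 240_000, 255_000, 280_000, 300_000],
})

# Correlate each link in the chain against revenue:
# calls -> demos -> transactions -> revenue.
print(df.corr(numeric_only=True)["revenue"].round(2))
```

If the chain holds, each metric correlates strongly with the next link and with revenue; if a link comes back weak, that part of the strategic theory is suspect.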
But despite how intuitive this seems, the logic falls apart in as many cases as it succeeds.
The Broken Cascade
A different client company (a large wearable-technology manufacturer) also articulated their strategic themes and KPIs and sent them to department heads, who were to create departmental and individual metrics.
One OKR was to improve product comfort. Its KPI was the percentage of returned products.
One employee I coach, “Shawn,” leads a product team focused on several lines of wearables. He has five direct reports, each leading a team that works on one aspect of the product: materials that touch skin, optics, sensors, containers, and so on. Each team contributes to the comfort of the devices.
Shawn is supposed to create KPIs down to the individual level. What should he measure? What aspect of individual performance is a direct input to the comfort KPI?
Should he measure how many materials each engineer tries before succeeding? Or focus-group responses to every prototype, with the scores divided among the engineers? Or should he instead look at individual contributions to meetings, or at how fast they prototype?
Will any of those metrics reveal something germane to the KPI?

Measuring the teams (individually or in aggregate) is sensible. Those metrics could indicate progress, speed, and focus.
But what do you learn when you measure individual employees? That one is smart, another quiet, or that Andrea tried 10 formulations of plastic for the new wristband? Does any of that help you calibrate progress toward reducing customer returns?
Whole vs. Sum of Parts
Shawn is baffled. He realizes that the metrics he’s looking for should predict whether the company will achieve the comfort KPI. He has data that can do that. But it is aggregate data, not individual performance data. No matter what individual trait or activity he measures, he can’t get there.
Imagine for a minute that you try to test a simpler metric: platform stability, as measured by uptime.
There may be engineers who produce excellent code. But what measurable aspect of their work is predictive of uptime? Speed of coding? Number of code reviews? Complexity of the code? You can measure everything, right down to time-and-motion-study minutiae. But what will you discern that’s predictive of the KPI?
Even if one trait or skill or another is desirable, can it be correlated with platform stability? Probably not.
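If you wanted to test that claim rather than assert it, the analysis would look something like the sketch below: aggregate the individual-level metrics per sprint (all figures here are invented) and check whether any of them significantly predicts that sprint’s uptime. With so few observations and so many confounds, correlations of this kind rarely clear significance:

```python
import pandas as pd
from scipy.stats import pearsonr

# Hypothetical per-sprint aggregates of individual engineering metrics,
# alongside the system-level KPI (uptime). All numbers are invented.
sprints = pd.DataFrame({
    "avg_commits_per_engineer": [24, 31, 18, 27, 22, 29],
    "avg_reviews_per_engineer": [10, 12, 8, 11, 9, 13],
    "avg_code_complexity":      [6.1, 5.8, 7.2, 6.4, 6.9, 5.5],
    "uptime_pct":               [99.92, 99.95, 99.90, 99.97, 99.91, 99.96],
})

# Test each individual-level metric against the KPI, with p-values.
for col in sprints.columns.drop("uptime_pct"):
    r, p = pearsonr(sprints[col], sprints["uptime_pct"])
    print(f"{col}: r={r:.2f}, p={p:.2f}")
```

The point is not that the arithmetic is hard; it’s that in an interdependent system, no single engineer’s measurable activity carries a clean causal signal to the KPI.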
It’s easy to fall into the trap of thinking that measuring individual performance is the key to understanding overall success. But in complex interdependent systems, measuring individual performance might not reveal anything that matters.
Yes, we should hold teams and their leaders accountable for team performance, and for the KPIs that indicate it. But when we drill down beyond that, we may be gathering irrelevant data and making employees jump through needless hoops.
This hurts employee engagement, and it throws red herrings in front of executives.
Bathwater, Not Baby
I am not suggesting that we shouldn’t measure individual performance. But maybe the way to do that isn’t through a strategy cascade. The cascade obscures the information that matters, which lives at the team level.
So, how do you assess employees? Well, that question deserves its own article (or a ten-volume book set).
But suffice it to say that huge portions of strategy may not be testable at the individual level. We are all sometimes guilty of falling in love with our tools. And the OKR, KPI, scorecard, cascade, waterfall, or whatever your favorite term is: it’s a great hammer. But it’s worth noting that not everything is a nail. In the cases where the cascade stops yielding meaningful signals, below the team level, it may be better to put the hammer down.