When the results are bad, we blame the execution.
My client had what we would certainly call a stretch goal. Actually, more like a Big Very Hairy Super-Audacious Goal, to co-opt Jim Collins's language. His board had given an ultimatum: grow the company's annual recurring revenue (ARR) to $30m, or they would exercise their right to pull their investment.
The board set the goal, but the strategy and its KPIs came from my client, "David", and his team. The team had decided to differentiate the product by adding an AI component. Not only would the product provide analytics for their specialized market, but its algorithm would anticipate client needs and use the data to produce recommendations. Clients would be able to see both their "numbers" and what those numbers meant.
All of the market data suggested this would be a winner!
Twelve weeks into the first year, the results were not good. The company should already have been on track for $23m in ARR. Instead, it was barely ahead of last year, at $19m.
David called a meeting with the revenue team. He wanted to know why they weren't closing more deals. "Sara," head of revenue, said her team was working hard. She showed him the explosion in new leads from all of the prospecting the team had been doing. But David pointed to another number: the closing percentage. "You're burning leads. Your team can't close deals."
Sara met with her reps. But they all said roughly the same thing. Prospects were interested at the beginning of the conversation. They were all shopping data platforms. But once they used the product and saw the roadmap, they vanished. It seemed like everyone on the team was chasing prospects, and all the prospects were running for the hills.
Sara had heard this kind of whining before from other sellers. She knew that sales reps looked for excuses to explain their failures. "Stop blaming the product," she said. "Close some deals."
In the meantime, customer retention was also down.
The head of customer success was flummoxed. Customers who left just said they had found something easier and cheaper. And, like the sales team, the retention team called and left voicemails, sent emails, and hoped for return calls. But clients were hedging at renewal time, and many were leaving before completing their contracts. Short of giving clients the rest of the contract for free, the team simply couldn't save these fleeing customers.
David was livid about his team’s ineffectual efforts. He was considering replacing the heads of both sales and customer success.
Productizing the Factory
When Elon Musk launched the Tesla Gigafactory, he was confident that the robotics-heavy plant would be a massive success, and that by using robotics instead of people, the company would produce 5,000 Model 3s a week. In fact, the factory would be so incredible that it would become Tesla's flagship product. That is, the factory itself would be the headline, over and above the cars!
Throughout 2017, production of the Tesla Model 3 was delayed. Headlines counted the months that Tesla failed to hit its targets. Musk blamed bottlenecks in battery production.
Finally, in April of 2018, after another failed delivery milestone, he fell on his sword. The problem wasn't batteries; it was over-reliance on automation. Ultimately, said Musk, "humans are underrated."
Which Employee’s Fault Is It?
In both cases, although it seemed like a problem of execution, it wasn’t. The root cause was the strategy—the actual theory about how the company would accomplish its goals. But neither leader considered that in their assessments.
Musk eventually recognized that his theory about the efficiency of robotics was flawed, and that the product should be the cars, not the factory. Replacing humans with robotics, while good to a point, was not a wholesale solution; it's essential to have people who can manage and monitor the performance of machines. So he reverted, shifting away from a fully automated manufacturing approach and bringing back old-fashioned human beings.
My client with the data platform spent several more months perseverating over his poor sales and high churn. Ultimately, I convinced him to look beyond the team. We did extensive customer and market surveying, and the data was conclusive. Customers didn't want or trust AI recommendations, and they hated the product's complexity. The marketplace agreed: both clients and prospects wanted and expected a straightforward, easy-to-use analytics platform. The strategy, AI enhancement, was the problem.
Blame the Team First (and Last)
It is second nature for leaders to assume someone on the team is at fault when results don’t come.
We even have idioms capturing it: It’s not the strategy, it’s the execution.
But, often, it’s not the execution, it’s the strategy.
There are examples throughout history, from Boeing CEO Dennis Muilenburg blaming pilots for the 737 MAX crashes to Charles I, who executed his advisor, the Earl of Strafford, for treason instead of looking to his own strategy.
Leaders inevitably look first to their team's performance, and they may be wise to do so. But they also usually look ONLY at their team's performance. Why?
We know that strategy is always theoretical. It's always an experiment. Yet, when the results don't pan out, leaders (including my founder clients) continue repeating the experiment, assuming it was done wrong.
Science isn't immune to this either. In 1903, René Prosper Blondlot claimed to have discovered a new kind of radiation. He called them N-rays. You've never heard of them because they don't exist. But Blondlot died believing they did, despite never being able to demonstrate it.
Are They Just Egomaniacs?
Superficially, it seems like hubris: leaders don't have enough humility to consider that the strategy is flawed. And indeed, that can be a factor.
But something more insidious is at play. Although we build strategy knowing it's hypothetical, we forget that while planning and executing.
Even the measurements designed to track progress presume the strategy's validity. So, when we see indicators pointing to failure, we look only as far as the indicator's measurement subject, the behavior. We don't step back and look at the entire frame.
And of course, there is our old friend, confirmation bias. No evidence that could undermine our belief in the strategy can penetrate because we are sorting for data that confirms what we already think: that our strategy is brilliant.
There are ways we can interrupt this cycle.
- Validity Metrics: When we design the strategy and its metrics, we can designate measurements that will point us toward questioning the strategy and considering a pivot. It is likely a ratio. For example, if a sales team's closing rate goes down instead of up under a new strategy, and the team is largely the same, that is telling.
Another useful data point is long-time customers who are leaving unexpectedly. Unless the price has risen precipitously, customers usually stick with a vendor. Change is hard and disruptive. Choosing disruption over inertia only happens for a good reason.
- Quit Triggers: We can designate a point in time and result levels at which, if the strategy hasn't borne fruit, we revisit the theoretical framework and consider pivoting. Usually, we do this by specifying a small set of measurements as our indicator "cocktail".
- Outside Info: We can pull in anonymous feedback from customers, employees, and advisors. But we have to be willing to take that input seriously; it's easy to explain away its importance.
Using these ideas can help prevent a tragedy. Strategies are NOT truths. They are tests. So, look to the results and consider everything they are telling you. Even if they indict the strategy.