AI in strategy design: redefining tactics and strategy

This post is the first in a series about AI in strategy design. We’ll start by laying some foundations.

Specifically, I want to talk about a particular bugbear of mine: the difference between tactics and strategy as commonly understood. I’ll argue for what I think are more useful definitions, only in part because I’m pedantic and like to argue semantics. More interesting for the reader is that these definitions let me introduce the concepts of subjective and intersubjective uncertainty in strategy, which are helpful in exploring how to design strategy for AI and, vice versa, how AI can and cannot design strategy.

“That’s not strategy, that’s tactics!” a CEO levelled at me a few years back, visibly worked up. I had given an example of how a tech team could go about designing a strategy for a concrete, but yet-to-be-fully-framed, problem. I did not retort at the time, but it stuck with me. His point of view, I think, was that his business problems were strategic in nature, while what I had described was a matter of execution. That is, ‘strategy’ is reserved for top-level goals and plans, and ‘tactics’ lies at the other extreme of that dimension. Examples of this meaning are numerous. I read a similar meaning in Cutler’s post on fluffiness and ambiguity, where it’s quite literally used as a dimension. Chess is another example: a tactic is a move or short sequence of moves that forces an advantageous position, while strategy is reserved for the longer-term, more uncertain plan for winning the game. This is the common use, and as a matter of definition I cannot really argue against it.

However, and I cannot help it, it irks me for two reasons. Firstly, to me this looks like a Sorites paradox: when exactly does a strategy turn into tactics? I do not have a satisfactory answer, and on top of that, the dimension is useless to me when designing responses to problems. Secondly, the assessment that something is merely a matter of execution is highly subjective. Humans routinely underestimate the complexity of systems or domains they are not experts in, and so are quick to push the requirements for a strategy to work onto others as ‘execution’ and therefore as ‘tactical’. This well-known video about the agile culture at Spotify couldn’t depict it better. My version of the top-right panel, posited as the ideal, would read: “If it’s your execution problem, then I don’t have to think too hard about how my strategy could fail”.

Still from a video about autonomy and alignment at Spotify.

“So let’s hear your definitions then!”. Ok, well, for me it all hinges on whether we are talking about a specific situation or not: a situation being actual people in an actual place with an actual problem. A strategy is primarily a hypothesis (or a set of them) about how a sequence of actions, be they abstractly defined or not, will lead to solving or sidestepping a concrete challenge, given a particular model of the world. Tactics, on the other hand, are like playbooks, separated from any specific situation. For example, a pincer movement is a tactic. You could choose to employ it in a concrete situation and so make it part of your strategy formulation. Key here is that in that choice you commit to a belief that applying the tactic will yield some desirable outcome. In other words, tactics are patterns extracted from past successful strategies, with the specifics of the situations abstracted away. Tactics become patterns of thought, like tools in a toolbox; they are best-practice heuristics that can be taught.

“Great, now we can no longer distinguish between strategy and… uhm… the other thing!”. Yes, and that’s the point. Strategies, being hypotheses, are beliefs about the response of an environment to a particular intervention, held with some degree of certainty. The environment itself, being holistic and complex, cannot be fully described or predicted, and so humans, with their limited cognition, tackle this by hierarchical abstraction. We have a top-level abstract strategy, then we zoom in on a part and recursively apply the same principle, all the way down until we reach a level of specificity that affords concrete action in the world. This is a process of recursive strategy design, and the product is a strategy tree. If at any point in that design process you cannot find approaches that inspire confidence one level down, you should backtrack and take a different approach one level up. At least, that’s the case if you’re not content setting top-level goals and dumping them on the next person down.
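To make the recursion concrete, here is a minimal sketch of such a strategy tree in Python. All names and the confidence threshold are hypothetical, chosen for illustration; the point is only the shape: every node is a hypothesis, leaves afford concrete action, and a lack of confident sub-approaches should propagate up as a signal to backtrack.

```python
from dataclasses import dataclass, field

@dataclass
class Hypothesis:
    """A node in a strategy tree: a belief that some (possibly abstract)
    action will yield a desired outcome, refined by sub-hypotheses."""
    claim: str
    confidence: float  # subjective probability this level holds
    children: list["Hypothesis"] = field(default_factory=list)

    def is_concrete(self) -> bool:
        # A leaf is specific enough to act on directly.
        return not self.children

def needs_backtracking(node: Hypothesis, threshold: float = 0.5) -> bool:
    """If no approach one level down inspires confidence, signal that we
    should back up a level rather than dump the goal on the next person."""
    if node.is_concrete():
        return node.confidence < threshold
    return all(needs_backtracking(child, threshold) for child in node.children)
```

In this sketch, a parent only needs revisiting when every child approach falls below the confidence threshold; a single credible branch is enough to keep exploring downward.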

Now to come back to tactics. In the process of designing a strategy like this you can employ tactics as cognitive shortcuts; they short-circuit the need to think everything through from scratch, much like chunking in chess.

In its most basic form a strategy is a big, widely branching cascade of hypotheses. It’s turtles all the way down: even at the minutest level of action there are hypotheses, and therefore uncertainty, at play. For example, programmers try out unfamiliar functions while predicting their behaviour, and graphic designers try out Photoshop filters while predicting the visual effect they will get back, or see these descriptions of Blender usage (4.2.6, p. 75). These actions are still part of a strategy tree, with a root hypothesis such as “increase ARR by redesigning our low-touch funnel“. When we consider that even at this lowest level humans are dealing with hypotheses about how the environment will respond to their actions, it becomes clear that strategy is not just the top-level objectives but the whole chain of hypotheses that reaches down to concrete action. In fact, a strategy doesn’t exist out there in the real world; it’s a story we tell ourselves (or a distributed mental state, if you will) to generate concrete and coherent action. Seen like this, it becomes hard to maintain that there is a dividing line between strategy and execution at all, and therefore that the typical concept of ‘tactics’ is useful.

Now consider the entropy (as a measure of uncertainty, or expected prediction error) present at any of the tips of the strategy tree. It is necessarily a lower bound on the uncertainty inherent at the root of the strategy tree. Combine all the branches and the uncertainty compounds fast: a failure in any branch could render the entire strategy moot.
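The compounding is easy to see with a toy calculation. This is only a sketch with made-up numbers, and it assumes the simplest possible case: independent hypotheses that must all hold for the strategy to succeed. Even then, individually likely hypotheses multiply into a long shot, and the entropy of the overall succeed/fail outcome rises toward its maximum of one bit before the strategy becomes an outright likely failure.

```python
import math

# Hypothetical per-hypothesis probability of holding, identical for
# simplicity. If the strategy needs all of them, probabilities multiply.
p_each = 0.9

for n in (1, 5, 10, 20):
    p_all = p_each ** n
    # Shannon entropy (in bits) of the binary outcome: strategy succeeds
    # or fails. Peaks at 1 bit when success is a coin flip.
    h = -(p_all * math.log2(p_all) + (1 - p_all) * math.log2(1 - p_all))
    print(f"{n:2d} hypotheses: P(all hold) = {p_all:.2f}, "
          f"outcome entropy = {h:.2f} bits")
```

With these illustrative numbers, ten 90%-confident hypotheses already leave the overall strategy with roughly a one-in-three chance of holding, which is the mechanistic truth the next paragraph says we forget.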

Yet somehow, when we hand over a branch of the strategy tree to another human being and call it execution, we forget this mechanistic truth and end up with a very different, and wrong, subjective experience. That is, our subjective belief in the probability of success is out of whack with the more objective view (or, more precisely, the intersubjective view) that would result if all participants shared the same opinions, domain knowledge, world view, and so on. As a result we are unable to properly judge the strategy or compare it to its alternatives. This enables a strategy failure mode best described as ‘head in the sand’, or, as it is known at Spotify, ‘figure out how’.

It is my sincere belief that the only way to collaboratively design strategy is with the assistance of software and AI. We are simply not naturally equipped, neither in our natural language nor in our cognition more generally, to structure this type of distributed decision making, nor to adapt a strategy as it meets reality.

In the next post I’ll return to how AI can, and cannot, help in strategy design. I’ll show how to make more precise the types of branching and concretion I alluded to here, and how they relate to formally measuring the uncertainty in a given strategy as a way of driving human and AI attention in the strategy design process.
