Marketing Strategy in the Age of Agents (Part 2): Why Alchemy Matters More
Issue #35: Holding the Gold Line
Introduction
In Part 1, I argued that marketing isn’t a first-order system that can be optimised into predictability. It’s a second-order chaos system - shaped by culture, irrationality, and surprise. Push optimisation too far and you don’t just reduce impact, you poison the pool for everyone (“pee in the pool”).
This issue is about how those same principles apply to strategy as a discipline - and how I’ve experienced them over 15 years working in Adland.
From Silos to Sparks
I’ve never really believed in pure specialisms in marketing. Since 2010 I’ve worked across brand, content, data, customer experience, and media - not as silos, but as intersections.
The most valuable insights don’t sit neatly in one discipline. They appear at the edges - where analytical thinking collides with creative leaps, where brand meets media, where data rubs up against culture. That’s where strategy becomes alchemy.
AI as Sandbox, Not Shortcut
AI is now part of that mix. But I don’t see it as a shortcut. I see it as a sandbox: a space to stretch, remix, and test ideas. Done right, it doesn’t just deliver faster answers - it surfaces new territories to explore.
The danger, as with marketing in general, is mistaking efficiency for progress. When that happens, you just end up accelerating the same contaminated, “pee in the pool” output.
Heavy Processes, Lean Principles
I’ve always respected the strategy process, but resisted the weight of it. Some strategy frameworks are brilliant; others collapse under their own complexity. One of the most influential books in my career was The Lean Brand. It brought startup principles into brand building at exactly the moment I needed reassurance about things I was feeling and seeing in the industry (back in 2014).
When I interviewed Jeremiah Gardner on my podcast (in 2025), he spoke about the “gold line” of AI strategy. Cave divers use a continuous line back to the surface; no matter how deep they explore, they can always find their way back. In strategy, that line is empathy and evidence - anchoring everything back to the customer. AI can accelerate exploration, but without that line, you get lost.
👉 Check out the full conversation here: Into the Rabbit Hole with Jeremiah Gardner.
Agentic Workflows Don’t Change the Fundamentals
Agent-based workflows won’t remove the need for strategy. If anything, they amplify the importance of it. Humans still need to sit at the centre, holding the gold line, mixing the ingredients.
Automation can scale answers. But only alchemy can create ideas that feel alive. There must be humans who can manage the agentic systems, build on them, and use them to explore deeper questions - and to find answers that are greater than the sum of their parts.
WPP Open
At WPP, we’ve been building WPP Open, a platform that allows us to design multi-modal, multi-step agents for clients. The tech is powerful, but here’s the critical point: you can’t just build for the maximum capabilities of the machine.
When creating an agent, you have to map the full workflow across agency and client. Where does human skill add irreplaceable value? Which tasks can responsibly be handed to the machine? And what impact does shifting those tasks have on collaboration, creativity, and outcomes?
Equally important is how you feed information into the machine. You can’t just drop a 40-page deck in randomly and expect useful results. Information needs to be simplified, structured (for example, into a clean CSV), and made readable as part of the agentic steps - and also manageable for the owner of the agent to update and tinker with over time. Only then can the agent perform in a way that raises the performance of the user.
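To make that concrete, here’s a minimal sketch of what the distillation step might look like in practice. It isn’t WPP Open code - the filename, fields, and example rows are all hypothetical - but it shows the principle: a strategist curates a handful of structured rows rather than dumping a 40-page deck in wholesale, and the resulting CSV stays easy for the agent’s owner to open, edit, and update over time.

```python
# A minimal, hypothetical sketch: distil a long deck into a clean CSV
# that each agentic step can read - and a human can keep editing.

import csv
from pathlib import Path

# Hypothetical distilled rows - curated by a strategist from the source deck.
brief_rows = [
    {"field": "audience",    "value": "Urban renters, 25-34",          "source": "deck p.4"},
    {"field": "tension",     "value": "Wants premium, buys own-label", "source": "deck p.11"},
    {"field": "proof_point", "value": "92% repeat purchase rate",      "source": "deck p.23"},
]

def write_brief_csv(rows, path="distilled_brief.csv"):
    """Write the distilled brief to a structured CSV the agent steps can load."""
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.DictWriter(f, fieldnames=["field", "value", "source"])
        writer.writeheader()
        writer.writerows(rows)
    return Path(path)

def load_brief(path="distilled_brief.csv"):
    """Re-read the brief so a later step (or a human) sees exactly what the agent sees."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

if __name__ == "__main__":
    write_brief_csv(brief_rows)
    for row in load_brief():
        print(f"{row['field']}: {row['value']}  ({row['source']})")
```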
I see this process as similar to the construction of a creative brief in the traditional strategy sense. You don’t just dump all the background research into a brief; you simplify it down to the most useful springboard for creative minds. Building agentic systems requires the same discipline: distilling complexity into usable architecture, while also understanding the broader remit of the environment around it.
It’s not about pushing every process to its automated extreme - building a Black Box. A black box system may deliver outputs, but it hides the logic and makes it impossible to adapt or improve. What we need is the opposite: Explainable AI. Explainable AI means the agent is transparent in how it works, so humans can understand, tinker, and refine it over time.
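As a rough illustration of that distinction - purely a hypothetical sketch, not how any particular platform implements it - an explainable agent can simply record what each step received, what it produced, and why. A human can then read the trace and tinker with individual steps, rather than staring at an opaque final output.

```python
# A hypothetical sketch of an "explainable" agent pipeline: every step
# appends a record of its input, output, and rationale to a visible trace.

from dataclasses import dataclass, field
from typing import Callable, List, Tuple

@dataclass
class StepRecord:
    name: str
    input_summary: str
    output_summary: str
    rationale: str

@dataclass
class ExplainableAgent:
    steps: List[Callable[[str], Tuple[str, str]]] = field(default_factory=list)
    trace: List[StepRecord] = field(default_factory=list)

    def run(self, payload: str) -> str:
        # Run each step in order, logging what happened and why.
        for step in self.steps:
            output, rationale = step(payload)
            self.trace.append(StepRecord(step.__name__, payload[:60], output[:60], rationale))
            payload = output
        return payload

# Hypothetical steps standing in for model calls.
def summarise_brief(text):
    return (f"SUMMARY({text})", "Condense the distilled brief before ideation.")

def draft_territories(text):
    return (f"TERRITORIES({text})", "Expand the summary into candidate creative territories.")

if __name__ == "__main__":
    agent = ExplainableAgent(steps=[summarise_brief, draft_territories])
    result = agent.run("audience + tension + proof point")
    print(result)
    for record in agent.trace:
        print(f"{record.name}: {record.rationale}")
```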
That’s the difference between building a Black Box and building a Renaissance System. The Renaissance approach scales learning, keeps the full picture in view, and ensures that human imagination is amplified rather than automated away. It means making AI explainable, so strategists and clients alike can keep building human value into the workflow.
Stephan Pretorius, WPP’s CTO, says it best:
Look at What’s Optimal and What’s Not
This is where the opaque vs. transparent question comes in.
Some industries are built on opacity - value exchanges where it was never entirely clear what you were getting for your money. When AI makes those systems fully transparent, the impact can be brutal. Whole categories can collapse when hidden value, or the lack of it, is suddenly exposed.
Marketing is different. As I argued in Part 1, we’ve already leaned out and optimised our processes over and over again. Strip it back too far and the output simply becomes valueless. Marketing requires chaos - because it’s a second-order system.
In first-order systems (like logistics or supply chains - check out WPP Satalia), AI is superior in every way. In second-order systems like marketing, chaos isn’t a bug; it’s the fuel. Marketing has always leaned into the newest tech - some would argue too much. That pressure to experiment, though, is what keeps it alive.
But here’s the risk: if we replace opaque systems with Black Box AI - tools that look transparent on the surface but hide their logic and processes - we don’t actually solve anything. We just reinvent the same problems of opacity with new-age tech. That doesn’t help clients, and it doesn’t create value.
Strategic Principles to Hold On To
The same principles from Part 1 apply when developing agentic workflows:
Skip ahead, lose the lesson. Clients who jump straight to enterprise AI may gain speed, but they miss the muscle memory of learning where humans add value. Differentiation comes from the balance, not the bypass.
Attention is chaos, not code. Marketing cannot be fully controlled. AI can optimise, but it cannot guarantee what will spark attention. Surprise and irrationality are not bugs - they are features.
Perfectly efficient. Perfectly ignorable. When every brand runs on the same agent systems, efficiency turns into commoditisation. The last frontier is imagination: the human capacity to inject nuance, surprise, and resonance into a world of predictable machines.
Why Alchemy Matters More
The last frontier in strategy isn’t automation. It’s imagination - our ability to combine disciplines, tools, and insights into something that’s more than the sum of its parts.
In a world where every brand will soon have access to the same AI agents, the difference won’t be who automates fastest. It will be who mixes best.
Because strategy has never been about choosing between creativity and rigour, brand and performance, humans and machines, chaos and order. It has always been about making alchemy out of all of them.
🕳️
These are truly inspiring times, and the rabbit hole goes much deeper…
🐰
This article was written in collaboration with AI (GPT-5), as part of an ongoing experiment to co-create with AI and live the improvements in real time.


