Why AI Gives Every Business the Same Strategy Advice

Harvard researchers tested six leading AI models across 15,000 strategic simulations. Every model gave the same trendy, buzzword-heavy advice, regardless of context. Here is why that matters for your business.

Published on 6 April 2026

13 min read
Marketing Strategy · Business Growth · Digital Marketing

Introduction

Five months ago, I asked six AI models the same strategy question. I was building a new service and wanted help thinking through positioning. Each model, regardless of how much context I provided, gave me some version of the same answer: "Differentiate by emphasising your unique blend of AI-powered research and human strategic insight."

It sounds great on a slide. It means nothing in a sales conversation.

When I ran the same question through multiple models and got essentially the same answer from each, different words but the same substance, it felt like asking six different people for restaurant recommendations and having them all say "somewhere with good food."

I thought I was being pedantic. Then Harvard proved I was right.

Researchers at Harvard Business School tested what happens when you ask the biggest LLMs (GPT-5, Claude, Gemini, Grok, DeepSeek, and Mistral) for strategic advice at scale. They ran over 15,000 simulations across seven core business tensions. The result confirmed what I had been feeling: a near-uniform bias toward the "trendy" side of every dilemma, regardless of context. They called this "trendslop" (Romasanta, Thomas & Levina, HBR, March 2026).

If you have ever used ChatGPT or Claude to help with a positioning question, a pricing decision, or a growth plan, this article is worth your time. The implications for founder-led businesses are significant, and the solutions are more practical than you might expect.

The seven tensions every founder faces

Before we get into what is wrong, it is worth understanding what the researchers actually tested. These are not abstract academic categories. They are the real strategic trade-offs that founders and business owners navigate constantly.

[Image: Table showing seven strategic tensions tested by Harvard researchers across leading AI models]

Differentiation vs. Commoditisation

Do you invest in a unique value proposition that commands premium pricing, or do you compete on cost and efficiency?

Every accountancy firm, every SaaS product, every retrofit installer faces this question. Porter built an entire framework around it. The LLMs overwhelmingly chose differentiation, even in scenarios where cost leadership has built some of the world's most successful companies: Walmart, Costco, Aldi, Ryanair. If you are working through why your value proposition feels stuck, this bias is worth knowing about.

Exploration vs. Exploitation

Do you pour capital into discovering new markets and breakthrough products, or do you maximise returns from what is already working?

For a founder running a 30-person business, this is the "do I chase the shiny new opportunity or double down on what pays the bills" question. The AI consistently favoured exploration (the exciting, venture-capital-flavoured answer) over the often more profitable choice of getting better at what you already do well.

Short-term vs. Long-term Performance

Do you secure immediate revenue to keep the lights on, or invest in multi-year strategic plays?

Every owner and board member knows this tension intimately. You cannot eat a five-year plan. But AI overwhelmingly recommended long-term thinking, which is easy advice when you do not have payroll to meet on Friday.

Competition vs. Collaboration

Do you fight for market share or partner with others to grow the overall pie?

The models favoured collaboration almost universally. This sounds enlightened, but sometimes the right answer is to compete fiercely and win, particularly when your competitors are actively stealing market share.

Radical Innovation vs. Incremental Innovation

Do you bet the future of your business on disruptive change, or on steady, incremental improvements?

AI loves radical innovation, the headline-grabbing kind. In practice, most successful businesses I work with grow through disciplined incremental improvements: a better onboarding flow, a tighter proposal process, a clearer pricing page. Not sexy, but effective.

Centralisation vs. Decentralisation

Do you keep decision-making tight at the centre or push authority to the edges?

For a founder-led business, this is really about how much you are willing to let go. AI favoured decentralisation, but if you are a 30-person firm where the founder holds the entire strategic narrative in their head, decentralising before you have codified that narrative is a recipe for failure.

Automation vs. Augmentation

Do you replace human work with technology or use technology to make your people more effective?

The models overwhelmingly chose augmentation, the human-friendly answer. But there are situations where full automation is clearly the right call. Paying someone to manually update a spreadsheet that a script could handle in seconds is not augmentation. It is waste. Understanding how to think about AI adoption properly matters more than following the trendy answer.
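As a concrete sketch of the automation case, here is roughly what "a script could handle in seconds" looks like. The data and the 20% margin rule are hypothetical, made up purely for illustration; the point is that a recurring manual spreadsheet edit reduces to a few lines:

```python
import csv
import io

# Hypothetical stand-in for the spreadsheet someone updates by hand each week.
source = "item,cost\nboiler service,120\nheat pump survey,250\n"

# Read the rows, apply a fixed 20% margin to derive the price column.
rows = list(csv.DictReader(io.StringIO(source)))
for row in rows:
    row["price"] = f"{float(row['cost']) * 1.2:.2f}"

# Write the updated sheet back out (here to a string; in practice, a file).
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["item", "cost", "price"])
writer.writeheader()
writer.writerows(rows)
print(out.getvalue())
```

Swap the in-memory strings for real file paths and schedule it, and the manual task disappears entirely. That is automation, not augmentation, and in cases like this it is the right call.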

In every single case, the LLMs favoured whichever side sounded more appealing in a Harvard Business Review article. This is not a coincidence. It is the training data biasing the model's output, and it occurred across all frontier models.

Why better prompting does not fix this

AI companies invest heavily in getting us to prompt better. But these biases are far more deeply ingrained than a prompt can reach.

Across more than 15,000 trials, the researchers tweaked wording and word order and added rewards to encourage deeper reasoning. Even on the strongest biases, those interventions shifted the result by only 2%. Adding rich, industry-specific context, the kind of detailed brief a good consultant would want, shifted the bias by only about 11%.

The biggest shift the researchers found? Simply flipping the order of the options moved things by roughly 19%. Not asking for deeper reasoning. Not better context. Just changing which option the model reads first. That tells you something important about how these recommendations are generated.

This means even if you write a beautifully detailed prompt with your company's revenue figures, competitive landscape, and customer data, the AI will still sound convincingly tailored to your situation while quietly steering you toward the same cluster of trendy business buzzwords from its training data.

[Image: Chart illustrating how prompt engineering barely shifts AI strategic bias]

The convergence risk for your business

Here is the logical conclusion: if every boardroom in your sector is asking the same models the same questions, you are all converging on the same strategy. The tool that was supposed to give you a strategic edge is quietly herding you toward identical positioning.

If you follow that advice, you become, in strategic terms, the weighted average of your market. That cannot be good for business.

There is also a Jevons' Paradox at work here. As AI makes strategy tools cheaper and more accessible, more businesses enter the market using the same tools, which actually raises the bar for differentiation. The easier it gets to generate a strategy, the less valuable the AI-generated strategy becomes.

I wrote about a related problem in a recent article on how marketing is becoming a mere facsimile. The same dynamic applies: when everyone uses the same tool to produce the same output, the output becomes wallpaper.

AI hates closing doors

When the researchers gave the LLMs the option to choose "both" (differentiate and pursue cost leadership, radical and incremental innovation), the results were telling: 63% of the time they chose the trendy side outright, 24% chose the hybrid "do both" answer, and only 12% chose the less fashionable option.

Strategy scholars call "do both" the stuck-in-the-middle trap. It sounds sophisticated, but it is the fastest path to mediocrity.

AI has an inherent bias towards additive answers. "Do more of this." "Yes, add this channel." "Expand into this segment." It rarely advocates for "stop doing that thing that is not working but feels safe."

The reason is that AI echoes the internet, which is full of articles about growth, expansion, and innovation. Almost nobody writes about the strategic power of subtraction and constraints. The only exception I have found is Alex M H Smith, whose book No Bullshit Strategy makes the case beautifully.

Porter said it decades ago: you choose cost leadership or differentiation because they require different capabilities, different cultures, different ways of operating. Walmart and Costco built empires on cost leadership. Every LLM in that study would have told them to differentiate.

In the real world, strategy is fundamentally about what you are going to say no to. Being open to every strategy means your business will be ineffective at everything, because there is no clarity. If you are building a marketing strategy that goes beyond random tactics, understanding this distinction is essential.

Why founders are most at risk

A McKinsey partner has a full strategy team to push back on biased AI output. A founder running a 30-person business does not have that luxury.

Consider a professional services firm, an accountancy practice with 25 to 40 staff, competing in a market where every firm has the same credentials and the same pitch. The founder asks ChatGPT how to stand out. The model will likely say: "Differentiate through specialisation and thought leadership content."

Meanwhile, the actual problem is that their juniors are discounting to win work because nobody in the firm can articulate why they are worth more than the practice down the road. "Differentiate" is not a strategy. It is a word. The real question is: has anyone in the firm been given the language to hold a price in a meeting? That is a messaging and confidence problem, the kind of thing AI does not even see.

Or consider a SaaS company with 40% churn at 90 days. Sales blame the product, and the product team blames sales. The founder asks AI for help and gets the average category answer: "Improve onboarding and communicate value earlier." But if nobody, not the founder, not the sales team, not the marketing site, can explain why the product is mission-critical rather than nice-to-have, no amount of onboarding optimisation will fix the churn. That is a positioning problem in disguise.

Same pattern in retrofit or green energy. A technical founder whose leads ghost after the quote asks AI how to shorten the sales cycle. It says "simplify your value proposition." But the real barrier is buyer scepticism. The market has been burned by companies that over-promised and under-delivered. They need to rebuild trust in the category before they can sell their own solution. AI cannot see the conversations that are not happening.

In most cases, the AI will give you the trending category answer. Strategy needs your context, and the gap between those two is where businesses get stuck.

[Image: Diagram showing the gap between generic AI strategy advice and context-specific business needs]

What AI is actually good for in strategy

I am not anti-AI. In the past year, I have logged over £50,000 in equivalent value across AI work sessions. I use it for research, data synthesis, drafting, and pattern-spotting every day. But being honest about limitations is what makes the tool useful rather than dangerous.

As I said on The Technology Show on Voice FM recently: "AI does not replace strategy, it accelerates it. The quality of what comes out depends entirely on the context you put in: your brand story, your customer data, and your strategic goals. That is where strategy and strategist still matter."

This technology is brilliant at the 80%. Pulling together market reports in minutes. Brainstorming alternatives you would never have considered. Surfacing trends in raw data that a human analyst could miss. And sometimes stating the obvious, which is exactly what you need to hear.

Sequoia framed this well recently:

Intelligence work: rule-based or data-driven work that should belong to AI.

Judgement work: work requiring experience, taste, and instinct built on years of practice, which should belong to humans.

Strategy as a function sits firmly in the judgement camp. Deciding what to build next, what projects to kill off, how to position, and who to serve requires a skill set that no amount of training data can replicate. Founders are best placed for this, but outside help can add different perspectives and deeper experience with strategic tools that are actually context-aware.

The greater risk is to founders who hand over control entirely or who do not understand these inherent biases. Over time, this breeds leaders who have not spent the years refining their judgement, leading to weaker strategy today and in the future.

The context problem AI cannot solve

One thing I have learned building my own AI workflow is that context is everything, and context is fragmented. Your brand story lives in one place, your customer data in another, your strategic goals in a third, your sales conversations in a fourth.

I counted 16 separate "context islands" across a typical founder-led business recently, and no single AI prompt can hold all of that. The trendslop research confirms this: even rich, industry-specific context only shifts AI recommendations by 11%.

The gap between what the AI knows about your business and what you know is enormous. That gap is where strategy actually lives.

The AI does the heavy lifting on data. Humans should do the heavy lifting on trade-offs. That division is not a compromise. It is the whole point: strategy grounded in evidence but shaped by judgement.

What to do on Monday

[Image: Practical steps for founders to pressure-test AI strategy advice]

Next time you are about to ask ChatGPT a strategy question, try this instead. Ask it to make the strongest possible case against whatever you are currently planning.

"Argue why this positioning will fail."

"Make the case for the opposite approach."

Alternate between positive and negative hypotheses. First, ask your model to argue for your current strategy, then against it. Finally, ask it to argue for the approach you have already rejected. Force the model to inhabit positions it would not naturally choose.

As you do this, you will see where it generates sharp, specific reasoning (useful) and where it hedges with "there are pros and cons" non-answers (trendslop). If every counter-argument it produces is vague and generic, you have probably been fed the internet's most popular answer, and no amount of prompting will fix it.
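If it helps to make the routine concrete, the three-prompt sequence above can be sketched as a small helper. The function name, the example strategy, and the stubbed-out `ask_model` call are all hypothetical, invented here for illustration; the substance is just the order and framing of the prompts:

```python
def red_team_prompts(strategy: str, rejected_alternative: str) -> list[str]:
    """Build the for / against / rejected-option prompt sequence."""
    return [
        # 1. Steel-man the current plan.
        f"Make the strongest possible case FOR this strategy: {strategy}",
        # 2. Force the model to attack it.
        f"Now argue why this strategy will fail: {strategy}",
        # 3. Make it defend the option you already rejected.
        f"Finally, make the best case for this rejected approach: {rejected_alternative}",
    ]

prompts = red_team_prompts(
    strategy="premium positioning for a 30-person accountancy firm",
    rejected_alternative="competing on price with fixed-fee packages",
)
for p in prompts:
    print(p)
    # response = ask_model(p)  # placeholder: send each prompt in a fresh chat
```

Running each prompt in a fresh conversation matters: it stops the model from smoothing the three answers into one agreeable "pros and cons" blend.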

That is the point when you need a human who knows your business, someone who can sit with the discomfort and make the hard call with you.

Your AI has a strategy, but it is rarely yours

The problem is not that AI gives bad advice. It is that it gives everyone the same advice, dressed up as tailored insight. The business leaders who win will not be the ones who stop using AI. That ship has sailed. They will be the ones who learn where the tool is brilliant and where it quietly steers you off a cliff.

If your current growth plan started in a chat window, it might be worth pressure-testing. The same caution applies as models get more capable; the frontier model step change is about capability outrunning the scaffolding most businesses have in place, and that includes the strategy decisions you've already acted on.

That is what my Discovery session is designed for: mapping what is actually driving growth, what is noise, and which of those seven tensions your business needs to confront right now.

It is the same approach that sits at the heart of Value by Design: AI-accelerated research, human-owned decisions. Because the real strategic advantage is not having access to AI; everyone has that now. It is knowing what to do with the hard questions AI cannot answer.

Found this helpful?

Explore more insights and strategies to elevate your marketing approach.