Can AI handle the work you’re paying knowledge workers to do?
Sometimes. When the task is pure pattern work – drafting emails, summarizing documents, spotting data anomalies – AI tools like ChatGPT, Claude, Gemini, and Microsoft Copilot are faster than any human.
But they collapse the moment work requires context they can’t see, judgment about relationships they don’t understand, or decisions where “technically correct” tanks your client relationship.
Here’s what executives are learning: the line between “AI makes this faster” and “AI just created a mess I need to clean up” is thinner than vendors admit.
And 57% of your employees are already using AI without telling you.
That’s a governance crisis, not an adoption problem.
When AI fails (and what this costs)

This is by no means an anti-AI rant. I’m a fan, and I see the value when it’s used properly. However, I also think Mark Zuckerberg’s maxim of “move fast and break things” is great right up until the thing it breaks is your company’s reputation and ability to operate.
Let’s start with some real-world examples of how AI can lead to catastrophe when it’s badly implemented.
In the US:
- A New York lawyer used ChatGPT to research legal precedents and presented the court with six fake cases – invented names, case numbers, and citations. The judge sanctioned him. His reputation? Damaged. His opponents? Laughing.
- Air Canada’s chatbot gave a customer false information about a refund, contradicting the airline’s actual policy. A tribunal ruled the airline was responsible for all the information on its website, including AI mistakes. Air Canada paid in full plus legal costs.
- A Chevrolet dealership chatbot agreed to sell a new Tahoe for one dollar as a “legally binding offer.” Weak safeguards and no human oversight led to a predictable disaster.
- UnitedHealth deployed AI to determine nursing home coverage for elderly patients. When patients or doctors appealed, 90% of the AI’s denials were overturned because the system couldn’t account for individual patient context that medical professionals saw immediately.
- McDonald’s ended its three-year partnership with IBM for AI-powered drive-thru ordering because the system couldn’t handle how people actually speak or cope with background noise and unexpected requests.
- Zillow’s AI misjudged home prices so badly the company recorded a $304 million loss and laid off 25% of staff. The algorithm couldn’t factor in real-world constraints and market dynamics that human analysts would catch immediately. That’s not an AI problem. That’s a “we trusted AI to do work it can’t do” problem.
KPMG research on UK workers shows 54% made mistakes in their work due to AI, 58% relied on AI output without checking accuracy, and 38% admitted using AI inappropriately – uploading copyrighted information, client data, sensitive material.
Those aren’t edge cases. That’s over half your workforce gambling with your data.
UK government AI systems have caused widespread glitches in eligibility programmes, producing costly delays and systemic failures. The consultancies implementing AI-powered systems for government programmes seem to have created more problems than they solved, while charging millions for the privilege.
Globally, MIT research shows 95% of enterprise AI pilots fail to create measurable value.
That’s not “early adopter problems.” That’s executives who didn’t do their homework. Over 80% of AI implementations fail within six months. In the UK specifically, roughly 80% of AI projects fail primarily due to skills gaps, poor infrastructure, or unclear ROI calculations.
Ninety-one percent of UK business leaders report that poor data quality negatively affects their operations and limits AI effectiveness. You can’t build reliable AI on unreliable data. And businesses have lost big contracts because AI-generated client communications were tone-deaf enough to damage trust.
You can’t enter that cost in a spreadsheet, but you’ll feel it in future revenue.
The Skills Your Executives Need to Ensure This Doesn’t Happen to You
You don’t need everyone to become a prompt engineer. You need people who can brief AI like they’d brief a very fast analyst with no common sense and zero ability to ask clarifying questions.
Most executives are terrible at this.
They treat AI like Google – vague question, accept first answer. When the answer is wrong and gets embedded into client work, you end up burning money fixing problems AI created and then burning more apologising to clients. Then even more replacing the clients who left.
Here’s what matters: critical thinking. When AI generates plausible-sounding answers, your job is to ask – is this actually right? Does this answer the real question or the one I accidentally asked? What assumptions is this built on? Where does this break in practice?
Half of UK business leaders cite limited expertise at management and board level as their biggest concern. 50% lack trust in AI outcomes. 40% worry about security risks and employee skills gaps.
You know what that tells me?
Half your leadership doesn’t understand the tool they’re deploying across the organisation. That’s like giving everyone in your company access to the company bank account and hoping they’ll be sensible with it.
Technical understanding matters. Not coding. But understanding what AI does: pattern matching, not reasoning. If you treat AI as if it thinks, you’ll trust it when it’s guessing and hallucinating. That can be an expensive mistake, the kind that shows up in your P&L and makes your CFO uncomfortable.
Client relationships become more valuable. AI drafts emails. It doesn’t build trust that makes clients call you first when problems surface. It doesn’t read subtext. It doesn’t repair relationships after projects go wrong. Businesses can lose long-term client relationships when they let AI handle “routine” communication that wasn’t actually routine.
As an individual, if your entire value-add is producing deliverables faster, AI makes you less valuable. However, if your value derives from judgment, relationships, and owning outcomes – you become more valuable than ever.
People who combine AI speed with human judgment will replace people who have neither the AI skills nor the judgment to compete. That dynamic is already playing out everywhere, whether you like it or not.
The AI Slop Problem (Your Brand Is Already At Risk)
Remember Spotify Wrapped? In 2024 it was panned as “AI-generated slop”, complete with made-up genres like “Pink Pilates Princess Strut Pop.”
Coca-Cola’s AI Christmas ad got branded “soulless.”
McDonald’s pulled its AI Christmas ad on 10 December 2025 after massive backlash. Multimillion-pound campaigns destroyed because someone thought AI could replace human creativity.
The pattern? Brands outsourced creativity to algorithms. Customers noticed. And they hated it.
When your marketing, client communications, and brand voice get filtered through AI without human judgment, people can tell.
“AI slop” is what happens when businesses confuse speed with quality. Over half of long LinkedIn posts are now said to be AI-generated, according to industry analysis. Most are generic, repetitive, and instantly forgettable. Your competitors are flooding inboxes with this stuff right now.
That’s your opportunity – if you can show the difference between AI-assisted work and AI-generated waste.
The AI-to-AI Email Problem Nobody’s Talking About

Here’s the scenario that should terrify you.
Your AI writes an email. My AI reads it and responds. Your AI reads my AI’s response and replies.
At what point are humans involved?
Research shows this creates catastrophic problems in business communications. When emails are 95% AI-generated, recipients sense something’s off. They doubt sincerity. They reinterpret past messages sceptically. AI-optimised “empathy” feels engineered and triggers defensiveness rather than connection.
The sender appears lazy or insincere. Informal influence evaporates. When leaders delegate emotional labour to algorithms, their guidance faces constant questioning.
Worst case? Someone accidentally copy-pastes the AI prompt into the email. It’s happened to lawyers, academics, and executives. Suddenly the staged empathy and tone-tuning are visible – proof you outsourced the relationship work.
Your clients aren’t stupid.
Where AI Actually Works
Now onto the good stuff – what is AI brilliant at? It’s not complicated: AI does patterns.
That’s it.
Drafting from notes. When you’ve got a 20-minute recording and rough client call notes, AI structures them into something readable – you verify and ship. Legitimate time saved. Teams cut meeting follow-up time by 80%.
Repetitive communications next. Standard responses, status updates, the RFP boilerplate your team hates writing. AI handles first drafts faster than junior staff. You edit for tone and accuracy. The blank-page problem disappears. Done.
Data pattern detection is where it gets interesting – customer complaints clustering around the same issue, login anomalies across 500 accounts, budget variances that don’t add up. AI spots these in seconds. You decide what they mean and what to do about them. AI flags the pattern, humans check the numbers, problem solved.
Document summarisation. Contracts, technical specs, industry reports. AI pulls key points without getting bored halfway through page 73. You verify against source material (you have to – never, ever skip this), but the extraction work is done.
Format conversion. Bullet points to prose, tables to narrative, one structure to another. AI handles the mechanical transformation, leaving you free to focus on whether the content actually makes sense.
Everything else? You’re gambling with money you can’t afford to lose.
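To make the shape of those legitimate uses concrete, here’s a minimal sketch in Python. Everything in it is illustrative: `call_model` is a hypothetical stand-in for whichever AI provider you actually use, and the field names are invented. The point is the structure – AI produces the draft, and a named human gate sits between the draft and anything that ships.

```python
from dataclasses import dataclass

@dataclass
class Draft:
    task: str            # e.g. "summarise client call notes"
    ai_output: str       # what the model produced
    approved: bool = False
    reviewer: str = ""   # the named human who owns the outcome

def call_model(prompt: str) -> str:
    """Hypothetical stand-in for whichever AI API you actually use."""
    raise NotImplementedError("wire this up to your chosen tool")

def draft_from_notes(notes: str) -> Draft:
    # AI does the pattern work: turn rough notes into a readable summary.
    prompt = "Summarise these call notes into key points and actions:\n" + notes
    return Draft(task="summarise client call notes", ai_output=call_model(prompt))

def human_review(draft: Draft, reviewer: str, checked_against_source: bool) -> Draft:
    # Nothing ships until a named person has verified against the source.
    if not checked_against_source:
        raise ValueError("Verify against the source material first - never skip this.")
    draft.approved = True
    draft.reviewer = reviewer
    return draft
```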
Which AI Model Does What
The various AI models are good at different things, and they’re not really interchangeable if you want the best outcome in the least amount of time.
- Claude: writing where nuance matters – anything client-facing where tone could kill the deal
- ChatGPT: speed – routine queries, getting unstuck, first drafts
- Gemini: maths – if numbers matter, use this
- DALL-E 3 and Midjourney: professional images
This is nowhere near an exhaustive list, and people disagree, but the point stands: use the right tool and you’ll do a better job. Some 45% of UK SMEs now use AI, up from 25% in 2022, and most are using the wrong tool for the job because they simply defaulted to whatever they had heard of. That costs money every single day.
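One low-tech way to stop the “default to whatever we’d heard of” problem is to write the mapping down where everyone can see it. Here’s a sketch in Python; the task categories and pairings mirror the list above and are opinions for illustration, not benchmarks.

```python
# Illustrative task-to-tool routing based on the list above. Model line-ups
# change constantly - treat this as a table your team maintains and argues
# about, not a recommendation.
MODEL_FOR_TASK = {
    "client_facing_writing": "Claude",
    "routine_queries": "ChatGPT",
    "first_drafts": "ChatGPT",
    "numerical_analysis": "Gemini",
    "images": "Midjourney",
}

def pick_tool(task_type: str) -> str:
    # No silent default: an unmapped task forces a deliberate decision,
    # which is the opposite of "use whatever we'd heard of".
    if task_type not in MODEL_FOR_TASK:
        raise KeyError(f"No tool agreed for '{task_type}' - decide before defaulting.")
    return MODEL_FOR_TASK[task_type]
```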
Executive Decision Framework: Three Categories (But Life’s Messier Than This)

Stop making this complicated. There are three categories.
Where AI should be mandatory
- Routine drafting
- Data summarisation
- Format conversion
- Repetitive analysis
If your team is still doing this manually, they’re wasting their time and your money. Train them. Measure adoption. Make it non-negotiable for pattern work. This should be 30-40% of knowledge work in most businesses.
Where AI should be absolutely forbidden
- Client-sensitive data that can’t leave your systems
- Compliance decisions with regulatory implications
- Anything involving personal data under UK GDPR
- Contract negotiations where AI might leak proprietary information
- Security-critical decisions
These aren’t about AI capability. They’re about risk and the data protection requirements you can’t compromise on.
Where a named human must take final responsibility
- Anything customer-facing
- Anything contractual
- Strategic recommendations
- Budget decisions
- Hiring, firing, performance reviews
It’s OK if AI drafts as long as a specific person reviews, edits, and owns the outcome.
Not “the team.” Not “we used AI.” A named individual who understands the context, knows the relationships, can defend every word if challenged.
If you can’t name who’s responsible, don’t send it.
But here’s where it can get messy – some stuff sits on the boundary. Client emails that are routine but mention sensitive projects, analysis that’s data-driven but has strategic implications, project updates that are factual but can land you in the middle of tense stakeholder politics.
Your team may argue about where things belong and that’s fine. Arguing means you’re thinking about it. Teams that spend 20 minutes debating whether a specific email should be AI-drafted or human-written often avoid six-figure mistakes.
Learn fast. Adjust the categories. The framework isn’t perfect but it’s better than chaos.
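If you want the framework as something a task can actually be run through, here’s a minimal sketch of the triage logic in Python. The flag names are invented for illustration; the interesting part is the ordering – forbidden checks come first, and the grey areas above are precisely the tasks where the flags are ambiguous.

```python
from enum import Enum

class AIPolicy(Enum):
    FORBIDDEN = "no AI: risk or data protection rules apply"
    HUMAN_OWNED = "AI may draft; a named person reviews and owns it"
    MANDATORY = "AI-first: routine pattern work"

def triage(task: dict) -> AIPolicy:
    # Forbidden checks run first: risk always beats productivity.
    if any(task.get(f) for f in ("personal_data", "compliance", "security_critical")):
        return AIPolicy.FORBIDDEN
    # Customer-facing, contractual, or strategic work needs a named owner.
    if any(task.get(f) for f in ("customer_facing", "contractual", "strategic")):
        return AIPolicy.HUMAN_OWNED
    # Whatever's left is pattern work: drafting, summarising, conversion.
    return AIPolicy.MANDATORY

# Example: a routine client email that mentions a sensitive project sits on
# the boundary - deciding which flags apply is exactly the 20-minute argument.
print(triage({"customer_facing": True}))  # AIPolicy.HUMAN_OWNED
```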
What will change over the next 3-5 years
AI will get faster, cheaper, and more integrated into every tool you use. That’s inevitable.
But here’s what nobody’s talking about: we still don’t know where the capability ceiling sits. Current models are brilliant at patterns but weak at reasoning. Next-generation AI could close that gap. Or it may just make the same mistakes faster and more enthusiastically. Betting your business strategy on “AI will definitely get better at X” could be expensive if you’re wrong.
Mid-level knowledge work – applying rules to standard situations – will get automated, as will the vast majority of first drafts. Document review, data extraction, and routine analysis will need less human time. This is certain and you should plan for it.
What will become more valuable?
- Problem framing. If you can’t brief AI clearly, its output is useless. That skill becomes as foundational as typing. Most people are terrible at it right now. The gap between “people who can get good output from AI” and “people who just paste vague questions” will define who stays valuable.
- Exception handling. AI can handle 80% of routine cases while you handle the 20% that doesn’t fit patterns. Your value won’t come from high-volume routine work anymore – it’ll come from solving weird cases, navigating grey areas, and making judgment calls when the data doesn’t fit the model. That 20% is where all the margin lives.
- Accountability. When AI drafts the contract and you approve it, you own the outcome – you can’t blame the tool or say “but the AI said…”. This makes expertise more valuable, not less. The person who catches AI mistakes before they cause problems is the person businesses will pay serious money for.
The gap between people who use AI as a power tool and those who just forward AI outputs without thinking will define your competitive advantage – not access to AI, not budget for fancy tools. Just the ability to use it well and to know when not to use it at all.
Why This Matters Right Now
Businesses treating AI as autopilot are hitting problems, including:
- Professional-sounding outputs that miss the point
- Client relationships damaged by tone-deaf communications
- Security risks from data being uploaded without thinking
- Productivity tools that somehow made work slower because nobody scrutinised the output
Businesses treating AI as a power tool – faster at routine work, useless for judgment – are pulling ahead.
Less time on repetition, more time on expertise. They ship faster without sacrificing quality because they know exactly where AI adds value and where it destroys it. Some businesses have cut proposal writing time by 60% while maintaining client conversion rates because humans still handle the relationship-critical parts.
The competitive advantage isn’t having access to AI. Everyone has that.
It’s having people who know how to use it well, when not to use it at all, and how to catch mistakes before they reach clients. That’s not a technology problem. That’s judgment and training. And most UK businesses are terrible at both.
To get there you need AI governance that works. Not consultants who’ll charge £50k for a 100-page framework nobody reads. Not vendors who want to sell you £200k worth of tools you don’t need. Not “best practice” templates designed for American corporations with 50,000 employees.
We work with businesses to define practical policies that balance productivity with security and compliance. We don’t sell AI tools. We’ll help you use the ones you’ve got without creating security gaps or regulatory problems. Talk to our team if you want help that doesn’t waste time.
FAQs: AI in Business
1. What tasks is AI actually good at in business?
Pattern-based work. Drafting first versions, summarising documents, restructuring notes, spotting data trends, automating repetitive tasks. 45% of UK SMEs have integrated AI solutions, up from 25% in 2022. Use it for the boring stuff your team hates doing anyway.
2. Where does AI reliably fail in business contexts?
Context, judgment, and anything where relationships matter. Air Canada’s chatbot gave incorrect refund information and the airline paid. UnitedHealth’s AI had 90% of claim denials overturned on appeal. KPMG research shows 54% of UK workers made mistakes due to AI, and 58% relied on AI output without checking accuracy. If the work requires reading the room, understanding subtext, or navigating politics – AI will fail and make you look bad in the process.
3. What is “AI slop” and why should executives care?
Low-quality, generic AI-generated content that damages your brand. Research shows 54% negative sentiment toward AI-generated content in October 2025. Spotify, Coca-Cola, and McDonald’s faced backlash for AI-generated marketing that turned loyal fans into vocal critics. Your competitors are flooding inboxes with this garbage right now – that’s your opportunity if you can show the difference.
4. What are the risks of AI-generated business emails?
Recipients sense something’s off, doubt your sincerity, and perceive AI-optimised “empathy” as manipulation. Your informal influence evaporates. Worst case? Lost contracts worth six figures because AI-generated communications damaged trust enough that clients simply stopped returning calls.
5. How should UK businesses govern AI tool usage?
Three categories: mandatory (routine drafting), forbidden (client-sensitive data, compliance decisions), named human responsibility (customer-facing work). Given that 80% of UK AI projects fail and 57% of workers hide AI usage from management, clear governance isn’t optional. You’ll argue about the grey areas. Good – that means you’re thinking about it.


