Cardonet IT Support for Business

Cardonet are a consultative business partner who will work closely with you to provide a transparent, vendor-neutral approach to your IT Services.

+44 203 034 2244
7 Stean Street, London, E8 4ED

+1 323 984 8908
750 N. San Vicente Blvd, Los Angeles, CA 90069

News

Humans v AI: What AI Gets Right, What It Gets Catastrophically Wrong, and Why Humans Still Matter

by Sagi / Friday, 20 February 2026 / Published in Artificial Intelligence

Can AI handle the work you’re paying knowledge workers to do?

Sometimes. When the task is pure pattern work – drafting emails, summarising documents, spotting data anomalies – AI tools like ChatGPT, Claude, Gemini, and Microsoft Copilot are faster than any human.

But they collapse the moment work requires context they can’t see, judgment about relationships they don’t understand, or decisions where “technically correct” tanks your client relationship.

Here’s what executives are learning: the line between “AI makes this faster” and “AI just created a mess I need to clean up” is thinner than vendors admit.

And 57% of your employees are already using AI without telling you.

That’s a governance crisis, not an adoption problem. 

When AI fails (and what this costs)


This is by no means an anti-AI rant. I’m a fan, and I see the value in it when it is used properly. However, I also feel that Mark Zuckerberg’s maxim of “move fast and break things” is great until the thing it breaks is your company’s reputation and ability to operate.

Let’s start with some real-world examples of how badly implemented AI leads to catastrophe.

In the US:

  • A New York lawyer used ChatGPT to research legal precedents and presented the court with six fabricated cases – invented names, case numbers, and citations. The judge sanctioned him. His reputation? Damaged. His opponents? Laughing.
  • Air Canada’s chatbot lied to a customer about their refund and contradicted the airline’s actual policy. A tribunal ruled that the airline was responsible for all the information on its website, including AI mistakes. Air Canada paid in full plus legal costs.
  • A Chevrolet dealership chatbot agreed to sell a new Tahoe for one dollar as a “legally binding offer.” Weak safeguards and no human oversight led to a predictable disaster.
  • UnitedHealth deployed AI to determine nursing home coverage for elderly patients. When patients or doctors appealed, 90% of the AI’s denials were overturned because the system couldn’t account for individual patient context that medical professionals saw immediately.
  • McDonald’s ended its three-year partnership with IBM for AI-powered drive-thru ordering because the system couldn’t handle how people actually speak or cope with background noise and unexpected requests.
  • Zillow’s AI misjudged home prices so badly the company recorded a $304 million loss and laid off 25% of staff. The algorithm couldn’t factor in real-world constraints and market dynamics that human analysts would catch immediately. That’s not an AI problem. That’s a “we trusted AI to do work it can’t do” problem.

KPMG research on UK workers shows 54% made mistakes in their work due to AI, 58% relied on AI output without checking accuracy, and 38% admitted using AI inappropriately – uploading copyrighted information, client data, sensitive material.

That’s not edge cases. That’s over half your workforce gambling with your data.

UK government AI systems have caused widespread glitches in eligibility programmes, with costly delays and systemic failures. The consultancies implementing AI-powered systems for government programmes seem to have created more problems than they solved, while charging millions for the privilege.

Globally, MIT research shows 95% of enterprise AI pilots fail to create measurable value.

That’s not “early adopter problems.” That’s executives who didn’t do their homework. Over 80% of AI implementations fail within six months. In the UK specifically, roughly 80% of AI projects fail primarily due to skills gaps, poor infrastructure, or unclear ROI calculations.

Ninety-one percent of UK business leaders report that poor data quality negatively affects their operations and limits AI effectiveness. You can’t build reliable AI on unreliable data. And businesses have lost big contracts because AI-generated client communications were tone-deaf enough to damage trust. 

You can’t enter that cost in a spreadsheet, but you’ll feel it in future revenue.

The Skills Your Executives Need to Ensure This Doesn’t Happen to You

You don’t need everyone to become a prompt engineer. You need people who can brief AI like they’d brief a very fast analyst with no common sense and zero ability to ask clarifying questions.

Most executives are terrible at this.

They treat AI like Google – vague question, accept first answer. When the answer is wrong and gets embedded into client work, you end up burning money fixing problems AI created, then burning more apologising to clients, then even more replacing the clients who left.

Here’s what matters: critical thinking. When AI generates plausible-sounding answers, your job is to ask – is this actually right? Does this answer the real question or the one I accidentally asked? What assumptions is this built on? Where does this break in practice?

Half of UK business leaders cite limited expertise at management and board level as their biggest concern. 50% lack trust in AI outcomes. 40% worry about security risks and employee skills gaps.

You know what that tells me?

Half your leadership doesn’t understand the tool they’re deploying across the organisation. That’s like giving everyone in your company access to the company bank account and hoping they’ll be sensible with it.

Technical understanding matters. Not coding. But understanding what AI does: pattern matching, not reasoning. If you treat AI as if it thinks, you’ll trust it when it’s guessing and hallucinating. That can be an expensive mistake, the kind that shows up in your P&L and makes your CFO uncomfortable.

Client relationships become more valuable. AI drafts emails. It doesn’t build trust that makes clients call you first when problems surface. It doesn’t read subtext. It doesn’t repair relationships after projects go wrong. Businesses can lose long-term client relationships when they let AI handle “routine” communication that wasn’t actually routine.

As an individual, if your entire value add is from producing deliverables faster, AI makes you less valuable. However, if your value derives from judgment, relationships, and owning outcomes – you become more valuable than ever.

People who combine AI speed with human judgment will replace people who have neither the AI skills nor the judgment to compete. That dynamic is already playing out everywhere, whether you like it or not.

The AI Slop Problem (Your Brand Is Already At Risk)

Remember Spotify Wrapped? Turned into “AI-generated slop” in 2024 with made-up genres like “Pink Pilates Princess Strut Pop.”

Coca-Cola’s AI Christmas ad got branded “soulless.”

McDonald’s pulled its AI Christmas ad on 10 December 2025 after massive backlash. Multimillion-pound campaigns destroyed because someone thought AI could replace human creativity.

The pattern? Brands outsourced creativity to algorithms. Customers noticed. And they hated it.

When your marketing, client communications, and brand voice get filtered through AI without human judgment, people can tell.

“AI slop” is what happens when businesses confuse speed with quality. Over half of long LinkedIn posts are now said to be AI-generated, according to industry analysis. Most are generic, repetitive, and instantly forgettable. Your competitors are flooding inboxes with this stuff right now.

That’s your opportunity – if you can show the difference between AI-assisted work and AI-generated waste.

The AI-to-AI Email Problem Nobody’s Talking About


Here’s the scenario that should terrify you.

Your AI writes an email. My AI reads it and responds. Your AI reads my AI’s response and replies.

At what point are humans involved?

Research shows this creates catastrophic problems in business communications. When emails are 95% AI-generated, recipients sense something’s off. They doubt sincerity. They reinterpret past messages sceptically. AI-optimised “empathy” feels engineered and triggers defensiveness rather than connection.

The sender appears lazy or insincere. Informal influence evaporates. When leaders delegate emotional labour to algorithms, their guidance faces constant questioning.

Worst case? Someone accidentally copy-pastes the AI prompt into the email. It’s happened to lawyers, academics, and executives. You’ve exposed staged empathy, visible tone-tuning, proof you outsourced the relationship work.

Your clients aren’t stupid.

Where AI Actually Works

Now onto the good stuff – what is AI brilliant at? It’s not complicated: AI does patterns.

That’s it.

Drafting from notes. When you’ve got a 20-minute recording and rough client call notes, AI structures them into something readable; you verify and ship. Legitimate time saved. Teams report cutting meeting follow-up time by 80%.

Repetitive communications next. Standard responses, status updates, RFP boilerplate your team hates writing. AI handles first drafts faster than junior staff. You edit for tone and accuracy. The blank-page problem disappears. Done.

Data pattern detection is where it gets interesting – customer complaints clustering around the same issue, login anomalies across 500 accounts, budget variances that don’t add up. AI spots these in seconds. You decide what they mean and what to do about them. AI flags the pattern, humans check the numbers, problem solved.
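The “AI flags the pattern, humans check the numbers” loop can be sketched in a few lines of Python. This is a minimal illustration, not a production detector – the login counts and the two-standard-deviation threshold are invented for the example:

```python
from statistics import mean, stdev

def flag_anomalies(counts, threshold=2.0):
    """Return (index, value) pairs more than `threshold` standard
    deviations from the mean -- candidates for human review,
    not automatic action."""
    mu, sigma = mean(counts), stdev(counts)
    return [
        (i, c) for i, c in enumerate(counts)
        if sigma and abs(c - mu) / sigma > threshold
    ]

# Hypothetical daily login counts; day 5 is the spike a person should review.
logins = [52, 48, 55, 51, 49, 500, 53]
for day, count in flag_anomalies(logins):
    print(f"Day {day}: {count} logins – route to an analyst")
```

The tool finds the outlier in seconds; deciding whether it is a breach, a marketing campaign, or a data error is still a human call.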

Document summarisation. Contracts, technical specs, industry reports. AI pulls key points without getting bored halfway through page 73. You verify against source material (you have to – never, ever skip this), but the extraction work is done. Format conversion – bullet points to prose, tables to narrative, one structure to another. AI handles mechanical transformation leaving you free to focus on whether content actually makes sense.

Everything else? You’re gambling with money you can’t afford to lose.

Which AI Model Does What

The various AI models are good at different things. They are not really substitutable if you want the best outcome in the least amount of time. 

  • Claude: writing where nuance matters – anything client-facing where tone could kill the deal
  • ChatGPT: speed – routine queries, getting unstuck, first drafts
  • Gemini: maths – if numbers matter, use this
  • DALL-E 3 and Midjourney: professional images

This is nowhere near an exhaustive list and people disagree but the point is that if you use the right tool you’ll do a better job. Some 45% of UK SMEs now use AI, up from 25% in 2022, and most are using the wrong tool for the job because they simply defaulted to whatever they had heard of. That costs money every single day.

Executive Decision Framework: Three Categories (But Life’s Messier Than This)


Stop making this complicated. There are three categories.

Where AI should be mandatory

  • Routine drafting
  • Data summarisation
  • Format conversion
  • Repetitive analysis

If your team is still doing this manually, they’re wasting their time and your money. Train them. Measure adoption. Make it non-negotiable for pattern work. This should be 30-40% of knowledge work in most businesses.

Where AI should be absolutely forbidden

  • Client-sensitive data that can’t leave your systems
  • Compliance decisions with regulatory implications
  • Anything involving personal data under UK GDPR
  • Contract negotiations where AI might leak proprietary information
  • Security-critical decisions 

These aren’t about AI capability. They’re about risk and those data protection requirements you can’t compromise.

Where a named human must take final responsibility

  • Anything customer-facing
  • Anything contractual
  • Strategic recommendations
  • Budget decisions
  • Hiring, firing, performance reviews

It’s OK if AI drafts as long as a specific person reviews, edits, and owns the outcome.

Not “the team.” Not “we used AI.” A named individual who understands the context, knows the relationships, can defend every word if challenged. 

If you can’t name who’s responsible, don’t send it.
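One way to make the three categories operational is a simple pre-send check in whatever tooling you already have. The task labels and mappings below are illustrative assumptions, not a recommended policy – your own categories will differ:

```python
# Hedged sketch of the three-category framework as a pre-send check.
POLICY = {
    "routine_drafting":    "ai_mandatory",
    "data_summarisation":  "ai_mandatory",
    "format_conversion":   "ai_mandatory",
    "personal_data":       "ai_forbidden",
    "compliance_decision": "ai_forbidden",
    "client_facing":       "named_human",
    "contractual":         "named_human",
}

def check(task, owner=None):
    """Apply the three-category rule. Unknown tasks default to the
    strictest treatment until a human classifies them."""
    category = POLICY.get(task, "named_human")
    if category == "named_human" and not owner:
        raise ValueError(f"{task}: name who is responsible before sending")
    return category
```

The useful part is the default: anything not yet classified demands a named owner, which is exactly the “if you can’t name who’s responsible, don’t send it” rule.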

But here’s where it can get messy – some stuff sits on the boundary. Client emails that are routine but mention sensitive projects, analysis that’s data-driven but has strategic implications, project updates that are factual but can land you in the middle of tense stakeholder politics.

Your team may argue about where things belong and that’s fine. Arguing means you’re thinking about it. Teams that spend 20 minutes debating whether a specific email should be AI-drafted or human-written often avoid six-figure mistakes.

Learn fast. Adjust the categories. The framework isn’t perfect but it’s better than chaos.

What will change over the next 3-5 years

AI will get faster, cheaper, and more integrated into every tool you use. That’s inevitable.

But here’s what nobody’s talking about: we still don’t know where the capability ceiling sits. Current models are brilliant at patterns but weak at reasoning. Next generation AI could close that gap. Or it may just make the same mistakes faster and more enthusiastically. Betting your business strategy on “AI will definitely get better at X” could be expensive if you’re wrong.

Mid-level knowledge work applying rules to standard situations will get automated, as will the vast majority of first drafts. Document review, data extraction, and routine analysis will need less human time. This is certain and you should plan for it.

What will become more valuable?

  • Problem framing. If you can’t brief AI clearly, its output is useless. That skill becomes as foundational as typing. Most people are terrible at it right now. The gap between “people who can get good output from AI” and “people who just paste vague questions” will define who stays valuable.
  • Exception handling. AI can handle 80% of routine cases while you handle the 20% that doesn’t fit patterns. Your value won’t come from high-volume routine work anymore – it’ll come from solving weird cases, navigating grey areas, and making judgment calls when the data doesn’t fit the model. That 20% is where all the margin lives.
  • Accountability. When AI drafts the contract and you approve it, you own the outcome – you can’t blame the tool or say “but the AI said…”. This makes expertise more valuable, not less. The person who catches AI mistakes before they cause problems is the person businesses will pay serious money for.
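The 80/20 split in the exception-handling point can be expressed as a trivial router: known patterns with high confidence go to automation, everything else to a person. The case types and confidence cut-off here are made up for illustration:

```python
# Illustrative sketch of the 80/20 split: route cases matching a known,
# well-understood pattern to automation; everything else to a human.
ROUTINE_PATTERNS = {"password_reset", "invoice_copy", "status_update"}

def route(case_type, confidence):
    """Send familiar, high-confidence cases to the AI pipeline;
    anything unfamiliar or low-confidence goes to a person."""
    if case_type in ROUTINE_PATTERNS and confidence >= 0.9:
        return "ai_pipeline"
    return "human_review"

print(route("password_reset", 0.97))    # routine and confident
print(route("contract_dispute", 0.95))  # confident but not a known pattern
```

Note the asymmetry: high confidence alone is not enough – an unfamiliar case type always goes to human review, because that 20% is where the judgment (and the margin) lives.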

The gap between people who use AI as a power tool and those who just forward AI outputs without thinking will define your competitive advantage; not access to AI, not budget for fancy tools. Just the ability to use it well and know when not to use it at all.

Why This Matters Right Now

Businesses treating AI as autopilot are hitting problems, including:

  • Professional-sounding outputs that miss the point
  • Client relationships damaged by tone-deaf communications
  • Security risks from data being uploaded without thinking
  • Productivity tools that somehow made work slower because nobody scrutinised the output

Businesses treating AI as a power tool – faster at routine work, useless for judgment – are pulling ahead.

Less time on repetition, more time on expertise. They ship faster without sacrificing quality because they know exactly where AI adds value and where it destroys it. Some businesses have cut proposal writing time by 60% while maintaining client conversion rates because humans still handle the relationship-critical parts.

The competitive advantage isn’t having access to AI. Everyone has that.

It’s having people who know how to use it well, when not to use it at all, and how to catch mistakes before they reach clients. That’s not a technology problem. That’s judgment and training. And most UK businesses are terrible at both.

To get there you need AI governance that works. Not consultants who’ll charge £50k for a 100-page framework nobody reads. Not vendors who want to sell you £200k worth of tools you don’t need. Not “best practice” templates designed for American corporations with 50,000 employees.

We work with businesses to define practical policies that balance productivity with security and compliance. We don’t sell AI tools. We’ll help you use the ones you’ve got without creating security gaps or regulatory problems. Talk to our team if you want help that doesn’t waste time.

FAQs: AI in Business

1. What tasks is AI actually good at in business?

Pattern-based work. Drafting first versions, summarising documents, restructuring notes, spotting data trends, automating repetitive tasks. 45% of UK SMEs have integrated AI solutions, up from 25% in 2022. Use it for the boring stuff your team hates doing anyway.

2. Where does AI reliably fail in business contexts?

Context, judgment, and anything where relationships matter. Air Canada’s chatbot gave incorrect refund information and the airline paid. UnitedHealth’s AI had 90% of claim denials overturned on appeal. KPMG research shows 54% of UK workers made mistakes due to AI, and 58% relied on AI output without checking accuracy. If the work requires reading the room, understanding subtext, or navigating politics – AI will fail and make you look bad in the process.

3. What is “AI slop” and why should executives care?

Low-quality, generic AI-generated content that damages your brand. Research shows 54% negative sentiment toward AI-generated content in October 2025. Spotify, Coca-Cola, and McDonald’s faced backlash for AI-generated marketing that turned loyal fans into vocal critics. Your competitors are flooding inboxes with this garbage right now – that’s your opportunity if you can show the difference.

4. What are the risks of AI-generated business emails?

Recipients sense something’s off, doubt your sincerity, and perceive AI-optimised “empathy” as manipulation. Your informal influence evaporates. Worst case? Lost contracts worth six figures because AI-generated communications damaged trust enough that clients simply stopped returning calls.

5. How should UK businesses govern AI tool usage?

Three categories: mandatory (routine drafting), forbidden (client-sensitive data, compliance decisions), named human responsibility (customer-facing work). Given that 80% of UK AI projects fail and 57% of workers hide AI usage from management, clear governance isn’t optional. You’ll argue about the grey areas. Good – that means you’re thinking about it.

Tagged under: ai, AI decision making, AI governance, artificial intelligence, business AI strategy, humans and ai


