{"id":4714,"date":"2026-02-20T04:56:27","date_gmt":"2026-02-20T12:56:27","guid":{"rendered":"https:\/\/cardonet.com\/news\/?p=4714"},"modified":"2026-02-13T08:00:20","modified_gmt":"2026-02-13T16:00:20","slug":"ai-capabilities-limitations-business-decision-framework","status":"publish","type":"post","link":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/","title":{"rendered":"Humans v AI: What AI Gets Right, What It Gets Catastrophically Wrong, and Why Humans Still Matter"},"content":{"rendered":"\n<p>Can AI handle the work you&#8217;re paying knowledge workers to do?<\/p>\n\n\n\n<p>Sometimes. When the task is pure pattern work &#8211; drafting emails, summarizing documents, spotting data anomalies &#8211; AI tools like ChatGPT, Claude, Gemini, and Microsoft Copilot are faster than any human.<\/p>\n\n\n\n<p>But they collapse the moment work requires context they can&#8217;t see, judgment about relationships they don&#8217;t understand, or decisions where &#8220;technically correct&#8221; tanks your client relationship.<\/p>\n\n\n\n<p>Here&#8217;s what executives are learning: the line between &#8220;AI makes this faster&#8221; and &#8220;AI just created a mess I need to clean up&#8221; is thinner than vendors admit.<\/p>\n\n\n\n<p>And\u00a0<a href=\"https:\/\/www.aicerts.ai\/news\/why-ai-slop-dominates-2025-discourse\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">57% of your employees are already using AI<\/a>\u00a0without telling you.<\/p>\n\n\n\n<p>That&#8217;s a governance crisis, not an adoption problem.&nbsp;<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>When AI fails (and what this costs)<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet-1024x683.png\" alt=\"The High Cost of AI Failure\" 
class=\"wp-image-4716\" title=\"The High Cost of AI Failure\" srcset=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet-1024x683.png 1024w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet-300x200.png 300w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet-768x512.png 768w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet-280x187.png 280w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet-1170x780.png 1170w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/high-cost-artificial-intelligence-failure-cardonet.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>This is by no means an anti-AI rant. I\u2019m a fan, and I see the value in AI when it is used properly. However, I also feel that Mark Zuckerberg\u2019s maxim of&nbsp;\u201cmove fast and break things\u201d&nbsp;is great until the thing it breaks is your company\u2019s reputation and ability to operate.<\/p>\n\n\n\n<p>Let&#8217;s start with some real-world examples of how AI can lead to catastrophe when badly implemented.<\/p>\n\n\n\n<p>In the US:<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>A New York lawyer used ChatGPT to research legal precedents and presented the courts with\u00a0<a href=\"https:\/\/www.nytimes.com\/2023\/05\/27\/nyregion\/avianca-airline-lawsuit-chatgpt.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">six fake court cases<\/a>\u00a0&#8211; wrong names, case numbers, citations. The judge sanctioned him. His reputation? Damaged. His opponents? Laughing.<\/li>\n\n\n\n<li>Air Canada&#8217;s chatbot lied to a customer about their refund and contradicted the airline&#8217;s actual policy. 
A tribunal ruled that the airline was responsible for all the information on its website, including AI mistakes. Air Canada paid in full plus legal costs.<\/li>\n\n\n\n<li>A Chevrolet dealership chatbot agreed to sell a new Tahoe for one dollar as a &#8220;legally binding offer.&#8221; Weak safeguards and no human oversight led to a predictable disaster.<\/li>\n\n\n\n<li>UnitedHealth deployed AI to determine nursing home coverage for elderly patients. When patients or doctors appealed,\u00a090% of the AI&#8217;s denials were overturned\u00a0because the system couldn&#8217;t account for individual patient context that medical professionals saw immediately.<\/li>\n\n\n\n<li>McDonald&#8217;s\u00a0<a href=\"https:\/\/www.cio.com\/article\/190888\/5-famous-analytics-and-ai-disasters.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">ended its three-year partnership with IBM<\/a>\u00a0for AI-powered drive-thru ordering because the system couldn&#8217;t handle how people actually speak or cope with background noise and unexpected requests.<\/li>\n\n\n\n<li>Zillow&#8217;s AI misjudged home prices so badly the company recorded a $304 million loss and laid off 25% of staff. The algorithm couldn&#8217;t factor in real-world constraints and market dynamics that human analysts would catch immediately. That&#8217;s not an AI problem. That&#8217;s a &#8220;we trusted AI to do work it can&#8217;t do&#8221; problem.<\/li>\n<\/ul>\n\n\n\n<p><a href=\"https:\/\/kpmg.com\/uk\/en\/media\/press-releases\/2025\/04\/majority-of-uk-public.html\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">KPMG research on UK workers shows 54% made mistakes<\/a>\u00a0in their work due to AI, 58% relied on AI output without checking accuracy, and 38%\u00a0<em>admitted<\/em>\u00a0using AI inappropriately &#8211; uploading copyrighted information, client data, sensitive material.<\/p>\n\n\n\n<p>That&#8217;s not edge cases. 
That&#8217;s over half your workforce gambling with your data.<\/p>\n\n\n\n<p><a href=\"https:\/\/www.consultancy.uk\/news\/41891\/consultants-must-not-be-caught-sleeping-at-the-wheel-with-ai-generated-input\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">UK government AI systems have caused widespread glitches<\/a>\u00a0in eligibility programmes, leading to costly delays and systemic failures. The consultancies implementing AI-powered systems for government programmes seem to have created more problems than they solved, while charging millions for the privilege.<\/p>\n\n\n\n<p>Globally,\u00a0MIT research shows 95% of enterprise AI pilots fail\u00a0to create measurable value.<\/p>\n\n\n\n<p>That&#8217;s not &#8220;early adopter problems.&#8221; That&#8217;s executives who didn&#8217;t do their homework. Over 80% of AI implementations fail within six months. In the UK specifically,\u00a0<a href=\"https:\/\/www.business-reporter.co.uk\/ai--automation\/a-failure-to-lead-the-uks-ai-sector\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">roughly 80% of AI projects fail<\/a>\u00a0primarily due to skills gaps, poor infrastructure, or unclear ROI calculations.<\/p>\n\n\n\n<p>Ninety-one percent of UK business leaders report that poor data quality negatively affects their operations and limits AI effectiveness. You can&#8217;t build reliable AI on unreliable data. And businesses have lost big contracts because AI-generated client communications were tone-deaf enough to damage trust.&nbsp;<\/p>\n\n\n\n<p>You can&#8217;t enter that cost in a spreadsheet, but you&#8217;ll feel it in future revenue.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The Skills Your Executives Need to Ensure This Doesn&#8217;t Happen to You<\/strong><\/h2>\n\n\n\n<p>You don&#8217;t need everyone to become prompt engineers. 
You need people who can brief AI like they&#8217;d brief a very fast analyst with no common sense and zero ability to ask clarifying questions.<\/p>\n\n\n\n<p>Most executives are terrible at this.<\/p>\n\n\n\n<p>They treat AI like Google &#8211; vague question, accept first answer. When the answer is wrong and gets embedded into client work, you end up burning money fixing problems AI created and then burning more apologising to clients. Then even more replacing the clients who left.<\/p>\n\n\n\n<p>Here&#8217;s what matters:&nbsp;<strong>critical thinking.<\/strong>&nbsp;When AI generates plausible-sounding answers, your job is to ask &#8211; is this actually right? Does this answer the real question or the one I accidentally asked? What assumptions is this built on? Where does this break in practice?<\/p>\n\n\n\n<p><a href=\"https:\/\/www.iod.com\/news\/science-innovation-and-tech\/major-blockers-to-ai-adoption-in-british-business\/\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">Half of UK business leaders cite<\/a>\u00a0limited expertise at management and board level as their biggest concern. 50% lack trust in AI outcomes. 40% worry about security risks and employee skills gaps.<\/p>\n\n\n\n<p>You know what that tells me?<\/p>\n\n\n\n<p>Half your leadership doesn&#8217;t understand the tool they&#8217;re deploying across the organisation. That&#8217;s like giving everyone in your company access to the company bank account and hoping they&#8217;ll be sensible with it.<\/p>\n\n\n\n<p><strong>Technical understanding matters.<\/strong>&nbsp;Not coding. But understanding what AI does: pattern matching, not reasoning. If you treat AI as if it thinks, you&#8217;ll trust it when it&#8217;s guessing and hallucinating. That can be an expensive mistake, the kind that shows up in your P&amp;L and makes your CFO uncomfortable.<\/p>\n\n\n\n<p><strong>Client relationships become more valuable.<\/strong>&nbsp;AI drafts emails. 
It doesn&#8217;t build trust that makes clients call you first when problems surface. It doesn&#8217;t read subtext. It doesn&#8217;t repair relationships after projects go wrong. Businesses can lose long-term client relationships when they let AI handle &#8220;routine&#8221; communication that wasn&#8217;t actually routine.<\/p>\n\n\n\n<p>As an individual, if your entire value-add is producing deliverables faster, AI makes you less valuable. However, if your value derives from judgment, relationships, and owning outcomes \u2013 you become more valuable than ever.<\/p>\n\n\n\n<p>People who combine AI speed with human judgment will replace people who have neither the AI skills nor the judgment to compete. That dynamic is already playing out everywhere, whether you like it or not.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The AI Slop Problem (Your Brand Is Already At Risk)<\/strong><\/h2>\n\n\n\n<p>Remember Spotify Wrapped? Turned into\u00a0<a href=\"https:\/\/www.raconteur.net\/technology\/ai-slop-flops-2025-oped\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">&#8220;AI-generated slop&#8221;<\/a>\u00a0in 2024 with made-up genres like &#8220;Pink Pilates Princess Strut Pop.&#8221;<\/p>\n\n\n\n<p>Coca-Cola&#8217;s AI Christmas ad got branded &#8220;soulless.&#8221;<\/p>\n\n\n\n<p>McDonald&#8217;s\u00a0<a href=\"https:\/\/www.bbc.co.uk\/news\/articles\/czdgrnvp082o\" target=\"_blank\" rel=\"noreferrer noopener nofollow\">just (10 December 2025) pulled their AI Christmas ad<\/a>\u00a0after massive backlash. Multimillion-pound campaigns destroyed because someone thought AI could replace human creativity.<\/p>\n\n\n\n<p>The pattern? Brands outsourced creativity to algorithms. Customers noticed. 
And they hated it.<\/p>\n\n\n\n<p>When your marketing, client communications, and brand voice get filtered through AI without human judgment, people can tell.<\/p>\n\n\n\n<p>&#8220;AI slop&#8221; is what happens when businesses confuse speed with quality. Over half of long LinkedIn posts are now said to be AI-generated, according to industry analysis. Most are generic, repetitive, and instantly forgettable. Your competitors are flooding inboxes with this stuff right now.<\/p>\n\n\n\n<p>That&#8217;s your opportunity &#8211; if you can show the difference between AI-assisted work and AI-generated waste.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>The AI-to-AI Email Problem Nobody&#8217;s Talking About<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"683\" src=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet-1024x683.png\" alt=\"The AI Echo Chamber\" class=\"wp-image-4718\" title=\"The AI Echo Chamber\" srcset=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet-1024x683.png 1024w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet-300x200.png 300w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet-768x512.png 768w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet-280x187.png 280w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet-1170x780.png 1170w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/artificial-intelligence-echo-chamber-cardonet.png 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Here&#8217;s the scenario that should terrify you.<\/p>\n\n\n\n<p>Your AI writes an email. 
My AI reads it and responds. Your AI reads my AI&#8217;s response and replies.<\/p>\n\n\n\n<p>At what point are humans involved?<\/p>\n\n\n\n<p>Research shows this creates catastrophic problems in business communications. When emails are 95% AI-generated, recipients sense something&#8217;s off. They doubt sincerity. They reinterpret past messages sceptically. AI-optimised &#8220;empathy&#8221; feels engineered and triggers defensiveness rather than connection.<\/p>\n\n\n\n<p>The sender appears lazy or insincere. Informal influence evaporates. When leaders delegate emotional labour to algorithms, their guidance faces constant questioning.<\/p>\n\n\n\n<p>Worst case? Someone accidentally copy-pastes the AI prompt into the email. It&#8217;s happened to lawyers, academics, and executives. You&#8217;ve exposed staged empathy, visible tone-tuning, proof you outsourced the relationship work.<\/p>\n\n\n\n<p>Your clients aren&#8217;t stupid.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Where AI Actually Works<\/strong><\/h2>\n\n\n\n<p>Now onto the good stuff \u2013 what is AI brilliant at? It\u2019s not complicated: AI does patterns.&nbsp;<\/p>\n\n\n\n<p>That&#8217;s it.<\/p>\n\n\n\n<p><strong>Drafting from notes<\/strong>&nbsp;when you&#8217;ve got a 20-minute recording and rough client call notes &#8211; AI structures them into something readable, you verify and ship. Legitimate time saved. Some teams have cut meeting follow-up time by 80%.<\/p>\n\n\n\n<p><strong>Repetitive communications<\/strong>&nbsp;next. Standard responses, status updates, RFP boilerplates your team hate writing. AI handles first drafts faster than junior staff. You edit for tone and accuracy. The blank-page problem disappears. Done.<\/p>\n\n\n\n<p><strong>Data pattern detection<\/strong>&nbsp;is where it gets interesting &#8211; customer complaints clustering around the same issue, login anomalies across 500 accounts, budget variances that don&#8217;t add up. AI spots these in seconds. 
You decide what they mean and what to do about them. AI flags the pattern, humans check the numbers, problem solved.<\/p>\n\n\n\n<p><strong>Document summarisation<\/strong>. Contracts, technical specs, industry reports. AI pulls key points without getting bored halfway through page 73. You verify against source material (you have to \u2013 never, ever skip this), but the extraction work is done. Format conversion &#8211; bullet points to prose, tables to narrative, one structure to another. AI handles the mechanical transformation, leaving you free to focus on whether the content actually makes sense.<\/p>\n\n\n\n<p>Everything else? You&#8217;re gambling with money you can&#8217;t afford to lose.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Which AI Model Does What<\/strong><\/h2>\n\n\n\n<p>The various AI models are good at different things. They are not really substitutable if you want the best outcome in the least amount of time.&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Claude: writing where nuance matters &#8211; anything client-facing where tone could kill the deal<\/li>\n\n\n\n<li>ChatGPT: speed &#8211; routine queries, getting unstuck, first drafts<\/li>\n\n\n\n<li>Gemini: maths &#8211; if numbers matter, use this<\/li>\n\n\n\n<li>DALL-E 3 and Midjourney: professional images<\/li>\n<\/ul>\n\n\n\n<p>This is nowhere near an exhaustive list, and people disagree, but the point is that if you use the right tool you\u2019ll do a better job. Some 45% of UK SMEs now use AI, up from 25% in 2022, and most are using the wrong tool for the job because they simply defaulted to whatever they had heard of. 
That costs money every single day.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Executive Decision Framework: Three Categories (But Life&#8217;s Messier Than This)<\/strong><\/h2>\n\n\n\n<figure class=\"wp-block-image size-large\"><img loading=\"lazy\" decoding=\"async\" width=\"1024\" height=\"631\" src=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet-1024x631.jpg\" alt=\"AI Governance Framework\" class=\"wp-image-4717\" title=\"AI Governance Framework\" srcset=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet-1024x631.jpg 1024w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet-300x185.jpg 300w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet-768x474.jpg 768w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet-280x173.jpg 280w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet-1170x722.jpg 1170w, https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-governance-framework-cardonet.jpg 1200w\" sizes=\"auto, (max-width: 1024px) 100vw, 1024px\" \/><\/figure>\n\n\n\n<p>Stop making this complicated. There are three categories.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Where AI should be mandatory<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Routine drafting<\/li>\n\n\n\n<li>Data summarization<\/li>\n\n\n\n<li>Format conversion<\/li>\n\n\n\n<li>Repetitive analysis.<\/li>\n<\/ul>\n\n\n\n<p>If your team is still doing this manually, they&#8217;re wasting their time and your money. Train them. Measure adoption. Make it non-negotiable for pattern work. 
This should be 30-40% of knowledge work in most businesses.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Where AI should be absolutely forbidden<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Client-sensitive data that can&#8217;t leave your systems<\/li>\n\n\n\n<li>Compliance decisions with regulatory implications<\/li>\n\n\n\n<li>Anything involving personal data under UK GDPR<\/li>\n\n\n\n<li>Contract negotiations where AI might leak proprietary information<\/li>\n\n\n\n<li>Security-critical decisions\u00a0<\/li>\n<\/ul>\n\n\n\n<p>These aren&#8217;t about AI capability. They&#8217;re about risk and those\u00a0<a href=\"https:\/\/www.cardonet.co.uk\/managed-cyber-security-services.php\">data protection requirements<\/a>\u00a0you can&#8217;t compromise.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\"><strong>Where a named human must take final responsibility<\/strong><\/h3>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Anything customer-facing<\/li>\n\n\n\n<li>Anything contractual<\/li>\n\n\n\n<li>Strategic recommendations<\/li>\n\n\n\n<li>Budget decisions<\/li>\n\n\n\n<li>Hiring, firing, performance reviews.<\/li>\n<\/ul>\n\n\n\n<p>It\u2019s OK if AI drafts as long as a specific person reviews, edits, and owns the outcome.<\/p>\n\n\n\n<p>Not &#8220;the team.&#8221; Not &#8220;we used AI.&#8221; A named individual who understands the context, knows the relationships, can defend every word if challenged.&nbsp;<\/p>\n\n\n\n<p>If you can&#8217;t name who&#8217;s responsible, don&#8217;t send it.<\/p>\n\n\n\n<p>But here&#8217;s where it can get messy &#8211; some stuff sits on the boundary. Client emails that are routine but mention sensitive projects, analysis that&#8217;s data-driven but has strategic implications, project updates that are factual but can land you in the middle of tense stakeholder politics.<\/p>\n\n\n\n<p>Your team may argue about where things belong and that&#8217;s fine. Arguing means you&#8217;re thinking about it. 
Teams that spend 20 minutes debating whether a specific email should be AI-drafted or human-written often avoid six-figure mistakes.<\/p>\n\n\n\n<p>Learn fast. Adjust the categories. The framework isn&#8217;t perfect but it&#8217;s better than chaos.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>What will change over the next 3-5 years<\/strong><\/h2>\n\n\n\n<p>AI will get faster, cheaper, and more integrated into every tool you use. That\u2019s inevitable.<\/p>\n\n\n\n<p>But here&#8217;s what nobody&#8217;s talking about: we still don&#8217;t know where the capability ceiling sits. Current models are brilliant at patterns but weak at reasoning. Next generation AI could close that gap. Or it may just make the same mistakes faster and more enthusiastically. Betting your business strategy on &#8220;AI will definitely get better at X&#8221; could be expensive if you&#8217;re wrong.<\/p>\n\n\n\n<p>Mid-level knowledge work applying rules to standard situations will get automated, as will the vast majority of first drafts. Document review, data extraction, and routine analysis will need less human time. This is certain and you should plan for it.<\/p>\n\n\n\n<p>What will become more valuable?<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li><strong>Problem framing.<\/strong>\u00a0If you can&#8217;t brief AI clearly, its output is useless. That skill becomes as foundational as typing. Most people are terrible at it right now. The gap between &#8220;people who can get good output from AI&#8221; and &#8220;people who just paste vague questions&#8221; will define who stays valuable.<\/li>\n\n\n\n<li><strong>Exception handling.<\/strong>\u00a0AI can handle 80% of routine cases while you handle the 20% that doesn&#8217;t fit patterns. Your value won\u2019t come from high-volume routine work anymore &#8211; it&#8217;ll come from solving weird cases, navigating grey areas, and making judgment calls when the data doesn&#8217;t fit the model. 
That 20% is where all the margin lives.<\/li>\n\n\n\n<li><strong>Accountability.<\/strong>\u00a0When AI drafts the contract and you approve it, you own the outcome and you can&#8217;t blame the tool or say &#8220;but the AI said&#8230;&#8221;. This makes expertise more valuable, not less. The person who catches AI mistakes before they cause problems is the person businesses will pay serious money for.<\/li>\n<\/ul>\n\n\n\n<p>The gap between people who use AI as a power tool and those who just forward AI outputs without thinking will define your competitive advantage &#8211; not access to AI, not budget for fancy tools, just the ability to use it well and know when not to use it at all.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>Why This Matters Right Now<\/strong><\/h2>\n\n\n\n<p>Businesses treating AI as autopilot are hitting problems, including:&nbsp;<\/p>\n\n\n\n<ul class=\"wp-block-list\">\n<li>Professional-sounding outputs that miss the point<\/li>\n\n\n\n<li>Client relationships damaged by tone-deaf communications<\/li>\n\n\n\n<li>Security risks from data being uploaded without thinking<\/li>\n\n\n\n<li>Productivity tools that somehow made work slower because nobody scrutinized the output\u00a0<\/li>\n<\/ul>\n\n\n\n<p>Businesses treating AI as a power tool &#8211; faster at routine work, useless for judgment &#8211; are pulling ahead.<\/p>\n\n\n\n<p>Less time on repetition, more time on expertise. They ship faster without sacrificing quality because they know exactly where AI adds value and where it destroys it. Some businesses have cut proposal writing time by 60% while maintaining client conversion rates because humans still handle the relationship-critical parts.<\/p>\n\n\n\n<p>The competitive advantage isn&#8217;t having access to AI.&nbsp;Everyone has that.<\/p>\n\n\n\n<p>It&#8217;s having people who know how to use it well, when not to use it at all, and how to catch mistakes before they reach clients. That&#8217;s not a technology problem. 
That&#8217;s judgment and training. And most UK businesses are terrible at both.<\/p>\n\n\n\n<p>To get there you need AI governance that works. Not consultants who&#8217;ll charge \u00a350k for a 100-page framework nobody reads. Not vendors who want to sell you \u00a3200k worth of tools you don&#8217;t need. Not &#8220;best practice&#8221; templates designed for American corporations with 50,000 employees.<\/p>\n\n\n\n<p>We work with businesses to define practical policies that balance productivity with security and compliance. We don&#8217;t sell AI tools. We\u2019ll help you use the ones you&#8217;ve got without creating\u00a0<a href=\"https:\/\/www.cardonet.co.uk\/services\/cybersecurity\">security gaps<\/a>\u00a0or regulatory problems.\u00a0<a href=\"https:\/\/www.cardonet.co.uk\/contact-it-services.php\">Talk to our team<\/a>\u00a0if you want help that doesn&#8217;t waste time.<\/p>\n\n\n\n<h2 class=\"wp-block-heading\"><strong>FAQs: AI in Business<\/strong><\/h2>\n\n\n\n<h3 class=\"wp-block-heading\">1. <strong>What tasks is AI actually good at in business?<\/strong><\/h3>\n\n\n\n<p>Pattern-based work. Drafting first versions, summarising documents, restructuring notes, spotting data trends, automating repetitive tasks. 45% of UK SMEs have integrated AI solutions, up from 25% in 2022. Use it for the boring stuff your team hates doing anyway.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">2. Where does AI reliably fail in business contexts?<\/h3>\n\n\n\n<p>Context, judgment, and anything where relationships matter. Air Canada&#8217;s chatbot gave incorrect refund information and the airline paid. UnitedHealth&#8217;s AI had 90% of claim denials overturned on appeal. KPMG research shows 54% of UK workers made mistakes due to AI, and 58% relied on AI output without checking accuracy. 
If the work requires reading the room, understanding subtext, or navigating politics &#8211; AI will fail and make you look bad in the process.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">3. What is &#8220;AI slop&#8221; and why should executives care?<\/h3>\n\n\n\n<p>Low-quality, generic AI-generated content that damages your brand. Research shows 54% negative sentiment toward AI-generated content in October 2025. Spotify, Coca-Cola, and McDonald&#8217;s faced backlash for AI-generated marketing that turned loyal fans into vocal critics. Your competitors are flooding inboxes with this garbage right now &#8211; that&#8217;s your opportunity if you can show the difference.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">4. <strong>What are the risks of AI-generated business emails?<\/strong><\/h3>\n\n\n\n<p>Recipients sense something&#8217;s off, doubt your sincerity, and perceive AI-optimised &#8220;empathy&#8221; as manipulation. Your informal influence evaporates. Worst case? Lost contracts worth six figures because AI-generated communications damaged trust enough that clients simply stopped returning calls.<\/p>\n\n\n\n<h3 class=\"wp-block-heading\">5. <strong>How should UK businesses govern AI tool usage?<\/strong><\/h3>\n\n\n\n<p>Three categories: mandatory (routine drafting), forbidden (client-sensitive data, compliance decisions), named human responsibility (customer-facing work). Given that 80% of UK AI projects fail and 57% of workers hide AI usage from management, clear governance isn&#8217;t optional. You&#8217;ll argue about the grey areas. Good &#8211; that means you&#8217;re thinking about it.<\/p>\n","protected":false},"excerpt":{"rendered":"<p>Can AI handle the work you&#8217;re paying knowledge workers to do? Sometimes. When the task is pure pattern work &#8211; drafting emails, summarizing documents, spotting data anomalies &#8211; AI tools like ChatGPT, Claude, Gemini, and Microsoft Copilot are faster than any human. 
But they collapse the moment work requires context they can&#8217;t see, judgment about<\/p>\n","protected":false},"author":2,"featured_media":4715,"comment_status":"open","ping_status":"open","sticky":false,"template":"","format":"standard","meta":{"footnotes":""},"categories":[976],"tags":[978,981,980,977,982,979],"class_list":["post-4714","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-artificial-intelligence","tag-ai","tag-ai-decision-making","tag-ai-governance","tag-artificial-intelligence","tag-business-ai-strategy","tag-humans-and-ai"],
"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v27.4 - https:\/\/yoast.com\/product\/yoast-seo-wordpress\/ -->\n<title>Humans and AI: What AI Gets Right, What It Gets Wrong, and Why Humans Still Matter<\/title>\n<meta name=\"description\" content=\"AI excels at patterns but fails at judgment. Real UK examples show 80% failure rate and \u00a32.7M breach costs. Learn where to use AI and where it&#039;ll cost you.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/\" \/>\n<meta property=\"og:locale\" content=\"en_US\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Humans and AI: What AI Gets Right, What It Gets Wrong, and Why Humans Still Matter\" \/>\n<meta property=\"og:description\" content=\"AI excels at patterns but fails at judgment. Real UK examples show 80% failure rate and \u00a32.7M breach costs. Learn where to use AI and where it&#039;ll cost you.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/\" \/>\n<meta property=\"og:site_name\" content=\"News\" \/>\n<meta property=\"article:published_time\" content=\"2026-02-20T12:56:27+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-capabilities-limitations-business-decision-framework-cardonet.png\" \/>\n\t<meta property=\"og:image:width\" content=\"600\" \/>\n\t<meta property=\"og:image:height\" content=\"334\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/png\" \/>\n<meta name=\"author\" content=\"Sagi\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"Written by\" \/>\n\t<meta name=\"twitter:data1\" content=\"Sagi\" \/>\n\t<meta name=\"twitter:label2\" content=\"Est. reading time\" \/>\n\t<meta name=\"twitter:data2\" content=\"1 minute\" \/>\n<!-- \/ Yoast SEO plugin. -->",
"yoast_head_json":{"title":"Humans and AI: What AI Gets Right, What It Gets Wrong, and Why Humans Still Matter","description":"AI excels at patterns but fails at judgment. Real UK examples show 80% failure rate and \u00a32.7M breach costs. Learn where to use AI and where it'll cost you.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/","og_locale":"en_US","og_type":"article","og_title":"Humans and AI: What AI Gets Right, What It Gets Wrong, and Why Humans Still Matter","og_description":"AI excels at patterns but fails at judgment. Real UK examples show 80% failure rate and \u00a32.7M breach costs. Learn where to use AI and where it'll cost you.","og_url":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/","og_site_name":"News","article_published_time":"2026-02-20T12:56:27+00:00","og_image":[{"width":600,"height":334,"url":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-capabilities-limitations-business-decision-framework-cardonet.png","type":"image\/png"}],"author":"Sagi","twitter_card":"summary_large_image","twitter_misc":{"Written by":"Sagi","Est. reading time":"1 minute"},
"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#article","isPartOf":{"@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/"},"author":{"name":"Sagi","@id":"https:\/\/cardonet.com\/news\/#\/schema\/person\/402defdb075c0a6c1317a1b8fdf85481"},"headline":"Humans v AI: What AI Gets Right, What It Gets Catastrophically Wrong, and Why Humans Still Matter","datePublished":"2026-02-20T12:56:27+00:00","mainEntityOfPage":{"@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/"},"wordCount":2913,"commentCount":0,"publisher":{"@id":"https:\/\/cardonet.com\/news\/#organization"},"image":{"@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#primaryimage"},"thumbnailUrl":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-capabilities-limitations-business-decision-framework-cardonet.png","keywords":["ai","AI decision making","AI governance","artificial intelligence","business AI strategy","humans and ai"],"articleSection":["Artificial Intelligence"],"inLanguage":"en-US","potentialAction":[{"@type":"CommentAction","name":"Comment","target":["https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#respond"]}]},
{"@type":"WebPage","@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/","url":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/","name":"Humans and AI: What AI Gets Right, What It Gets Wrong, and Why Humans Still Matter","isPartOf":{"@id":"https:\/\/cardonet.com\/news\/#website"},"primaryImageOfPage":{"@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#primaryimage"},"image":{"@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#primaryimage"},"thumbnailUrl":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-capabilities-limitations-business-decision-framework-cardonet.png","datePublished":"2026-02-20T12:56:27+00:00","description":"AI excels at patterns but fails at judgment. Real UK examples show 80% failure rate and \u00a32.7M breach costs. Learn where to use AI and where it'll cost you.","breadcrumb":{"@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#breadcrumb"},"inLanguage":"en-US","potentialAction":[{"@type":"ReadAction","target":["https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/"]}]},
{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#primaryimage","url":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-capabilities-limitations-business-decision-framework-cardonet.png","contentUrl":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2026\/02\/ai-capabilities-limitations-business-decision-framework-cardonet.png","width":600,"height":334,"caption":"Business Decision Framework AI Capabilities Limitations"},
{"@type":"BreadcrumbList","@id":"https:\/\/cardonet.com\/news\/ai-capabilities-limitations-business-decision-framework\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"News Home","item":"https:\/\/cardonet.com\/news\/"},{"@type":"ListItem","position":2,"name":"Humans v AI: What AI Gets Right, What It Gets Catastrophically Wrong, and Why Humans Still Matter"}]},
{"@type":"WebSite","@id":"https:\/\/cardonet.com\/news\/#website","url":"https:\/\/cardonet.com\/news\/","name":"News","description":"IT Services from Cardonet","publisher":{"@id":"https:\/\/cardonet.com\/news\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/cardonet.com\/news\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"en-US"},
{"@type":"Organization","@id":"https:\/\/cardonet.com\/news\/#organization","name":"Cardonet","url":"https:\/\/cardonet.com\/news\/","logo":{"@type":"ImageObject","inLanguage":"en-US","@id":"https:\/\/cardonet.com\/news\/#\/schema\/logo\/image\/","url":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2018\/06\/it-support-london-cardonet.png","contentUrl":"https:\/\/cardonet.com\/news\/wp-content\/uploads\/2018\/06\/it-support-london-cardonet.png","width":1920,"height":1080,"caption":"Cardonet"},"image":{"@id":"https:\/\/cardonet.com\/news\/#\/schema\/logo\/image\/"}},
{"@type":"Person","@id":"https:\/\/cardonet.com\/news\/#\/schema\/person\/402defdb075c0a6c1317a1b8fdf85481","name":"Sagi","sameAs":["http:\/\/www.cardonet.co.uk"]}]}},
"_links":{"self":[{"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/posts\/4714","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/users\/2"}],"replies":[{"embeddable":true,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/comments?post=4714"}],"version-history":[{"count":1,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/posts\/4714\/revisions"}],"predecessor-version":[{"id":4719,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/posts\/4714\/revisions\/4719"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/media\/4715"}],"wp:attachment":[{"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/media?parent=4714"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/categories?post=4714"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/cardonet.com\/news\/wp-json\/wp\/v2\/tags?post=4714"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}