
Cardonet IT Support for Business

Cardonet are a consultative business partner who will work closely with you to provide a transparent, vendor-neutral approach to your IT Services.

+44 203 034 2244
7 Stean Street, London, E8 4ED

+1 323 984 8908
750 N. San Vicente Blvd, Los Angeles, CA 90069


AI vs AI: When Criminals and Defenders Both Weaponize Machine Learning

by Sagi / Tuesday, 02 December 2025 / Published in Cyber Security

Can you defend against an attack that rewrites itself faster than your security team can respond?

Cardonet has been looking after clients’ IT for more than 25 years. In that time the security threats have changed constantly, but this feels different. 

AI has moved from being a “nice to have” innovation to the engine behind a growing share of cybercrime, and it is now also the main way defenders keep up.

Recent analyses suggest roughly one in six cyber incidents now involve AI on the attacker’s side, especially in phishing, fraud and social engineering. UK fraud and cyber loss data shows AI‑assisted scams are costing businesses tens of millions of pounds through deepfake‑driven investment fraud and sophisticated social engineering.

If you own or run a business, this isn’t abstract. It’s about whether you can keep trading when a criminal uses your own data, your brand, even your voice against you, and whether your insurance will still pay out when that happens.


How criminals are using AI today

Let’s start with how attackers are using AI right now, because this is what you’re up against.

1. Smarter, faster phishing

A few years ago, most phishing emails were full of spelling mistakes and odd phrasing. Today that’s the exception. Attackers feed public information about your business, your sector and your people into large language models and generate thousands of emails that read like a real supplier, a real colleague, a real customer.

  • Messages are tailored to specific roles and current projects.
  • Tone of voice can be tuned to match the organisation they’re impersonating.
  • Replies are handled automatically, so the conversation feels natural.

You don’t need a big criminal gang to do this anymore. One person with the right tools can run a campaign that looks and feels like it’s coming from a well‑resourced fraud operation.

2. Deepfakes and voice cloning

The same thing is happening with audio and video. Recent UK data puts losses to investment scams in the first half of 2025 at close to £100 million, with AI deepfakes playing a significant part in making those scams so convincing. Global analysis shows deepfake volumes and associated fraud attempts rising sharply over the last two years.

Patterns seen in real incidents include:

  • A “CEO” or “FD” on a video or Teams call approving an urgent payment.
  • A “supplier” on a video call asking to change bank details.
  • Voice messages that sound exactly like your senior people, asking for exceptions to normal controls.

Under pressure, people act. If your approval process assumes “I saw them / I heard them” equals “it’s safe”, you are now exposed.

3. Malware that learns as it goes

The third big shift is on the malware side. There are now malware families that use AI to change how they behave mid‑attack. Threat intelligence teams have documented samples that call out to AI services for new ways to obfuscate code, avoid detection or move sideways once they are inside a network.

In simple terms:

  • Traditional antivirus looks for known bad patterns.
  • AI‑driven malware keeps changing those patterns.
  • Once inside, it can adjust what it does in response to what your tools are blocking.

That’s why “we’ve got antivirus and a firewall” is no longer a serious answer. It’s like turning up to a Formula 1 race with a 1990s family saloon and hoping for the best.
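The mismatch between the two approaches can be shown in a few lines of Python. This is a deliberately toy illustration (no real malware involved): a hash-based signature list catches a known sample, but misses a behaviour-preserving mutation of it.

```python
import hashlib

# A toy "malicious" payload: stand-in bytes, not actual malware.
payload_v1 = b"echo pwned  # original sample"
# The same behaviour with junk bytes appended -- a trivial mutation.
payload_v2 = payload_v1 + b"\x00" * 4

# Signature-based detection: block anything whose hash is on a known-bad list.
known_bad = {hashlib.sha256(payload_v1).hexdigest()}

def signature_match(sample: bytes) -> bool:
    return hashlib.sha256(sample).hexdigest() in known_bad

print(signature_match(payload_v1))  # True  -- the original sample is caught
print(signature_match(payload_v2))  # False -- one mutation and it walks past
```

AI-driven malware automates exactly this kind of mutation, continuously, which is why behavioural detection matters more than static signatures.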

When AI is inside your own tools

There’s a second layer to this: attackers are starting to go after the AI models inside your own tools.

Adversarial machine learning is a set of techniques where attackers deliberately craft inputs that cause AI systems to make the wrong decision – classifying something malicious as safe, or flagging something harmless as a threat. That can mean:

  • Emails tweaked just enough to slip past AI spam filters.
  • Network behaviours shaped to look “normal” to an anomaly detector.
  • Poisoned data slipped into training sets so future models are blind to certain patterns.

Most mid‑market businesses don’t have data science teams to reverse‑engineer why a model made a bad call. They only see the outcome: “we had the tech, and it still got through.”

That doesn’t mean AI in security is a mistake. It means you have to treat your own AI stack as something that needs securing in its own right, not just a magic box that “does security”.
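To make the evasion idea concrete, here is a deliberately simplified sketch. The “filter” below is a toy keyword scorer, not a real ML model, but the principle is the one adversarial ML exploits: tiny, near-invisible input changes flip the classifier’s decision while the meaning to a human is unchanged.

```python
# Toy keyword-based "spam score" -- far simpler than a real ML filter,
# but it shows the adversarial principle: small input tweaks, wrong decision.
SPAM_TERMS = {"urgent", "payment", "verify", "account"}

def spam_score(text: str) -> int:
    # Count how many trigger words appear (punctuation stripped).
    return sum(1 for word in text.lower().split() if word.strip(".,!") in SPAM_TERMS)

original = "Urgent payment needed, please verify your account today"
# Adversarial tweak: zero-width spaces hidden inside two trigger words.
evasive = original.replace("payment", "pay\u200bment").replace("verify", "ver\u200bify")

print(spam_score(original))  # 4 -- scores high enough to be flagged
print(spam_score(evasive))   # 2 -- same meaning to a human, lower score to the filter
```

Real attacks use the same logic against far more sophisticated models, perturbing emails, network flows or training data rather than individual words.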

Your insurance costs are already feeling this

Now to the part that directly hits your P&L: cyber insurance.

The UK numbers are stark. Trade body data shows cyber insurance payouts for UK businesses reached around £197 million in 2024 – roughly three times the level of the previous year – with malware and ransomware accounting for just over half of all claims. Put simply: more attacks, more damage, bigger cheques.

Premiums and deductibles

In some quarters you may see average cyber premiums dip slightly where competition heats up, but the long‑term direction is obvious: up. As expected losses increase, insurers price that in. Expect:

  • Higher base premiums at renewal over the medium term.
  • Larger deductibles before the policy responds.
  • Tighter sub‑limits for ransomware, funds transfer fraud and business interruption.
  • Stricter minimum security requirements.

Underwriters now routinely insist on a baseline of controls before they will quote.

Common requirements include properly deployed endpoint detection and response, multi‑factor authentication on email and critical systems, and privileged access management for admin accounts. Questions around fraud controls, call‑back procedures and how you verify high‑risk changes are also becoming normal, particularly given the rise of deepfake‑enabled scams.

If you can’t show that you’ve done the basics, you may find:

  • You can’t get cover at all at the limit you want.
  • You get cover, but claims are challenged if something goes wrong.
  • You are pushed into a more expensive, “high‑risk” bracket.

Exclusions and grey areas around AI

There’s also a quieter shift happening in policy wording.

Some policies now include language around “AI‑driven” or “synthetic” attacks, and some legal commentators have flagged the risk of exclusions or narrow interpretations in those areas. Another subtle problem is where a policy defines a “hacker” as an individual person, when in reality the actor is a semi‑autonomous tool being steered by a person in the background.

In a major loss, that wording will get tested line by line. You don’t want to be having that argument for the first time after the event.

So the real question is this:

Given how prevalent and expensive AI‑driven cyber fraud has become – and with insurance payouts and scrutiny rising – how will you cope when your premiums go up and your cover gets tighter?

What good defensive AI looks like in practice


The good news is you can use AI defensively to close a lot of these gaps. But it has to be done in a structured way. In our work with clients, a realistic AI‑driven security stack usually has four layers.

1. Strong identity and access as the foundation

Before you even talk about AI, get identity right:

  • Multi‑factor authentication on email, VPN and all remote access.
  • Role‑based access so people only see what they need.
  • Extra checks for finance and admin accounts, including out‑of‑band verification.

This is what insurers expect now, and it’s the cheapest way to blunt a lot of AI‑powered phishing and deepfake attempts, because even if someone is fooled, the attacker still struggles to log in.

2. AI‑assisted detection and response

The second layer is where AI really starts earning its keep:

  • Analysing endpoint, identity, email and network telemetry in close to real time.
  • Spotting unusual patterns – unfamiliar logins, odd data movement, strange process activity.
  • Correlating events across systems so your team sees the whole story, not isolated alerts.

Used well, AI here gives your team a realistic chance of spotting and containing an attack before it becomes a full‑blown incident. Think of it as giving your analysts a power tool, not replacing them.
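The statistical core of “spotting unusual patterns” can be sketched very simply. Real platforms correlate many signals with far richer models; this toy example flags a single hypothetical telemetry stream (failed-login counts per hour) that deviates sharply from its own baseline.

```python
from statistics import mean, stdev

# Hypothetical baseline: failed-login counts per hour for one account over recent days.
baseline = [2, 1, 3, 2, 2, 1, 3, 2, 2, 3, 1, 2]

def is_anomalous(observed: int, history: list[int], threshold: float = 3.0) -> bool:
    """Flag an observation more than `threshold` standard deviations from the mean.
    Real detection correlates many such signals across email, identity, endpoint
    and network telemetry; this shows the idea for a single signal."""
    mu, sigma = mean(history), stdev(history)
    return abs(observed - mu) / sigma > threshold

print(is_anomalous(2, baseline))   # False -- a normal hour
print(is_anomalous(40, baseline))  # True  -- a burst of failures worth an alert
```

The value of the AI layer is doing this across thousands of signals at once and joining the anomalies into one story for your analysts.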

3. AI Security Posture Management (AI‑SPM)

If you’re using AI or machine learning internally – for customer scoring, operations, or internal automation – you now have to think about the security posture of those systems:

  • Where do these models live – on‑prem, in specific cloud services, or embedded in vendor products?
  • Who can change their configuration or training data?
  • How do you monitor them for misuse, drift or unexpected behaviour?

Specialist frameworks and tools have emerged to sit across this lifecycle and flag misconfigurations, exposed interfaces and unusual activity. Insurers and regulators are starting to ask about this, particularly in sectors where AI models touch sensitive data.
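One concrete technique for monitoring model drift is the Population Stability Index, sketched below. The binned distributions are invented for illustration, and the thresholds are a common rule of thumb rather than a standard.

```python
import math

def psi(expected: list[float], actual: list[float]) -> float:
    """Population Stability Index between two binned distributions (as fractions).
    Rule of thumb: < 0.1 stable, 0.1-0.25 moderate drift, > 0.25 investigate."""
    return sum((a - e) * math.log(a / e) for e, a in zip(expected, actual))

# Hypothetical feature distribution at training time vs. in production.
training_bins = [0.25, 0.50, 0.25]
live_stable   = [0.24, 0.51, 0.25]
live_drifted  = [0.05, 0.35, 0.60]

print(round(psi(training_bins, live_stable), 4))   # near zero: no action needed
print(round(psi(training_bins, live_drifted), 4))  # well above 0.25: investigate
```

Sudden drift can indicate the world changing under the model, but it can also be the first visible sign of data poisoning or deliberate evasion.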

4. People, process and testing

None of this works without people and process:

  • Regular briefings on AI‑enhanced phishing, deepfake fraud and new scam patterns.
  • Simple, enforced rules for verifying high‑risk requests (bank changes, large transfers, credential resets).
  • Periodic testing: simulated phishing, table‑top exercises and, where appropriate, red‑team scenarios that include AI‑based attack methods.

If your people know that “I saw them on a call” is not enough to bypass controls, you drastically reduce the odds of a single deepfake ruining your month.

Where the UK government and regulators are heading

It’s not just insurers and criminals paying attention.

The UK government’s own assessment of generative AI is clear: across the current period to 2025, AI is expected to amplify existing cyber risks rather than create entirely new categories, but it will significantly increase the speed and scale of some attacks. That’s already visible in the volume of phishing, the pace of fraud campaigns, and the way exploits are packaged and industrialised.

In response, the proposed Cyber Security and Resilience Bill is designed to tighten expectations on operators of essential services and important digital providers, and to put more onus on boards to understand and manage cyber risk, including AI‑related risk. At the same time, the NCSC’s latest reviews talk about record numbers of nationally significant incidents with ransomware and state‑linked activity heavily involved.

For a mid‑market business, the message is straightforward: the environment you operate in is being treated as structurally higher‑risk. Your customers, insurers, regulators and suppliers will all gradually increase what they expect from you.

What this means for you, practically

So what should you actually do?

If we were sitting down together, these are the questions I’d be asking.

  1. Where would an AI‑enhanced attack hurt you most?
     • Payments and finance?
     • Customer data and trust?
     • Operational technology and uptime?

You don’t have to fix everything at once. You do need a clear view of your critical points of failure so investment goes where it matters most.

  2. What are you already doing that just needs tightening?

Most organisations already have some building blocks: MFA on email, a modern endpoint solution, some level of awareness training. The quickest wins usually come from:

  • Turning “optional” security features on and enforcing them.
  • Closing the gap between policy and what really happens under pressure.
  • Improving monitoring so you see issues earlier and in context.

  3. How can AI support your existing team, not replace it?

If you’re like most of our clients, the question isn’t “how do we add more tools?” but “how do we give this team better signal and less noise?” That’s where AI‑backed detection, correlation and sensible automation add real value.

  4. How can you minimise insurance costs and maximise coverage down the line?

Before renewal, it’s worth sitting down with your broker and asking three direct questions:

  • Which controls will underwriters expect to see this year?
  • Where are the likely exclusions and grey areas around AI and fraud?
  • What evidence will we need to present if we make a claim?

Then work backwards. That conversation should drive a lot of your 2025–26 cyber and AI security roadmap.


How Cardonet can help

This is where our team comes in.

We work with businesses across sectors – professional services, hospitality, creative industries, education and more – who are all asking the same thing: “How do we keep people productive and the business moving, without leaving the door wide open to AI‑driven attacks?”

In practical terms, that usually means:

  • Assessing your current security posture with AI in mind: where you’re strong, where you’re exposed, and what your insurers and regulators will see.
  • Designing a realistic roadmap: which controls to implement in which order, aligned to your budget, risk and sector.
  • Implementing the right mix of identity, endpoint, network, cloud and AI‑focused controls – making sure they genuinely work together.
  • Training your people so they know how to respond to AI‑enabled threats, not just old‑fashioned phishing.
  • Helping you evidence all of this to boards, auditors, insurers and customers.

You don’t need a Hollywood‑style “AI cyber lab”. You need a clear view of your risks, the right foundations in place, and a partner who will tell you plainly where you stand and what to do about it.

If you’d like that level of clarity, talk to us. We’ll walk you through where AI changes your threat profile, what it means for your insurance, and how to get from where you are now to a position where you can say: “We’re not bulletproof, but we’re not a soft target either.”


FAQs

How worried should I really be about AI‑driven attacks if I’m an SME, not a big bank?
You should be concerned, but not paralysed. The data shows AI isn’t just being used against big enterprises – smaller firms are being hit with AI‑assisted phishing, fraud and account takeover because criminals know mid‑market controls are often weaker. The good news is that a handful of well‑chosen controls – strong identity, basic AI‑assisted detection, and tighter processes around payments – removes a lot of easy opportunities and makes you a harder target than the business down the road.

What are the first practical controls I should put in place for AI‑enhanced phishing and deepfakes?
Start with three basics: multi‑factor authentication on email and remote access, clear verification rules for money movements and bank detail changes, and regular awareness sessions that specifically cover deepfakes, voice cloning and realistic AI‑generated emails. Then add monitoring that can spot unusual logins and data movement, so you’re not relying on someone “having a bad feeling” as your main line of defence.

How is AI changing what insurers expect from me at renewal – and what happens if I fall short?
Insurers are seeing claims rise sharply and are reacting by asking harder questions about your controls, including how you manage phishing, ransomware and fraud risks that are now AI‑enabled. If you can’t show basics like MFA, endpoint protection, privileged access controls and sensible fraud procedures, you’ll either pay more, get lower limits, or find claims scrutinised much more aggressively when something goes wrong.

Can AI‑powered security tools actually help a small or mid‑sized team, or will they just create more noise?
Used badly, they absolutely can create more noise. Used properly, they reduce noise by correlating events across email, identity, endpoints and cloud services so your team sees a smaller number of better‑quality alerts. The aim is not to replace your people but to give them better signal so they can focus on real incidents instead of chasing hundreds of low‑value warnings.

If I suspect an AI‑driven attack – a deepfake call, odd payment request or strange system behaviour – what should my team do in the first hour?
First, don’t panic and don’t act alone. Pause any requested payments or changes, verify the request using a separate, trusted channel, and capture evidence (emails, call logs, screenshots) before anything is deleted. In parallel, isolate any affected accounts or devices, reset credentials where necessary, and escalate to your IT/security partner so they can review logs and contain any intrusion before it spreads.
