
It’s a familiar scene playing out in businesses every day: an employee gets an email that looks like it’s from a trusted colleague and follows the instructions without a second thought.
Maybe they click a link, download a file or send a quick reply — all routine tasks, until they unknowingly open the door to a cyberattack.
With 95% of cybersecurity breaches caused by human error and the average cost of a breach now tipping US$4 million, it’s a simple, and common, mistake — with big consequences.
The average cost of a data breach in 2024 – a 10% increase over 2023 and the highest average on record. (IBM)
And it’s not just small businesses feeling the heat.
Just this month, cyberattacks brought down the systems of three major UK retailers within days of each other – a stark reminder that organisations of all sizes need to be prepared.
For retail giant M&S, the attack put a stop to all online orders via its app and website and wiped £1bn off its value. Forecasts suggest it could take months for things to return to normal.
Days after the M&S attack, a developing cyber incident at Co-op forced the UK retailer to pull the plug on some of its IT systems to contain the attack. A day later, luxury department store Harrods was forced to restrict internet access at its sites following an attempt to gain access to its systems.
The number of major ransomware attacks every day in 2024, up from just five a year in 2011. (New York Times)
But for many, the threat isn’t a one-off — it’s relentless.
Amazon.com fends off over 1 billion cyberattacks every day, which breaks down to more than 11,500 every second.
“The speed and simplicity of accessing large language models and AI is unleashing an unprecedented era of cyber threats – and cybercriminals are only getting smarter,” says Vladimir Vasilev, digital lead at Baker Tilly (Dominican Republic).
But the good news? Defences are getting smarter too.
“From intelligent threat detection to adaptive email filters, the most effective cybersecurity tools today are powered by AI that learns, evolves and responds in real time,” says Mr Vasilev.
“The future of security isn’t human vs. AI — it’s AI vs. AI.”
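The “adaptive email filters” Mr Vasilev mentions can be illustrated with a toy example. The sketch below is a minimal naive Bayes text classifier: it learns word frequencies from labelled messages and scores new mail by log-odds, so suspicious wording pushes the score positive. The training messages, labels and tokeniser are invented for illustration; real filters use far richer features and vastly more data.

```python
import math
import re
from collections import Counter

def tokenize(text):
    # Lowercase word tokens only; a production filter would use far richer features.
    return re.findall(r"[a-z']+", text.lower())

class NaiveBayesFilter:
    """Minimal multinomial naive Bayes over two classes: 'phish' and 'ham'."""

    def __init__(self):
        self.word_counts = {"phish": Counter(), "ham": Counter()}
        self.doc_counts = {"phish": 0, "ham": 0}

    def train(self, text, label):
        self.doc_counts[label] += 1
        self.word_counts[label].update(tokenize(text))

    def score(self, text):
        """Return the log-odds that a message is phishing (positive = suspicious)."""
        vocab = set(self.word_counts["phish"]) | set(self.word_counts["ham"])
        total = {c: sum(self.word_counts[c].values()) for c in ("phish", "ham")}
        # Start from the (smoothed) class prior, then add per-word evidence.
        log_odds = math.log((self.doc_counts["phish"] + 1) /
                            (self.doc_counts["ham"] + 1))
        for word in tokenize(text):
            p_phish = (self.word_counts["phish"][word] + 1) / (total["phish"] + len(vocab))
            p_ham = (self.word_counts["ham"][word] + 1) / (total["ham"] + len(vocab))
            log_odds += math.log(p_phish / p_ham)
        return log_odds

# Tiny hand-made training set, purely for illustration.
f = NaiveBayesFilter()
f.train("urgent verify your account password now", "phish")
f.train("click this link to confirm your payment details", "phish")
f.train("meeting notes attached for tomorrow", "ham")
f.train("lunch on friday works for me", "ham")

print(f.score("please verify your password via this link") > 0)  # True: flagged as suspicious
```

Because the model simply re-counts words each time it is trained, it “adapts” as new labelled examples arrive, which is the property that matters here.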
Bad guy
Online scams used to be clumsy and easy to spot — but generative AI has changed that, warns Mr Vasilev.
“AI has supercharged social engineering. We’re talking about emails that perfectly mimic someone you trust, systems that scan for vulnerabilities and deepfakes that can convincingly manipulate both sight and sound. These aren’t just theoretical threats — they’re making it dramatically easier to trick people into giving up sensitive data or access.”
According to Mr Vasilev, platforms like ChatGPT have transformed the landscape with their ability to generate fluent, human-like text at speed.
“In the wrong hands, that means more believable scams, more viral fake news and even voice clones that can fool the sharpest eye or ear.”
And looking ahead, he warns, the real tipping point could be autonomous AI.
“Once machines start making decisions independently, the risks escalate significantly. These include acting without human oversight, being vulnerable to malicious hacking, making biased or flawed decisions, creating uncertainty around accountability when errors occur and exposing personal data to greater risk.”
AI is a double-edged sword — the same tools enabling cybercriminals are also being used to defend against them.
“It’s not simply about the technology itself — it’s how, and by whom, it’s used.”
The average number of days to identify and contain breaches involving stolen credentials. (IBM)
Good guy
Organisations that extensively use AI and automation in cybersecurity save an average of US$2.22 million compared to those that don’t. So it’s no surprise that over two-thirds of companies now rely on the technology to spot and stop cyberattacks.
“AI simply outperforms humans when it comes to handling and analysing massive volumes of data,” explains Mr Vasilev.
“And that’s exactly what’s needed to keep up with today’s cyber threats.”
These tools don’t sleep.
They scan for anomalies, analyse behaviour patterns and respond in real time — often with minimal human intervention.
By learning how users typically behave, AI can quickly detect the unusual, locking down accounts or alerting administrators before real damage is done. Machine learning is powering smarter anti-virus software, sharper threat intelligence and faster, more adaptive defences.
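The behaviour-baselining idea described above can be sketched in a few lines: learn a user’s typical login hour from history, then flag logins that deviate sharply from it. The login history, single-feature model and three-standard-deviation threshold are illustrative assumptions; real systems score many signals at once (location, device, access patterns).

```python
import statistics

def build_baseline(login_hours):
    """Learn a user's typical login hour (mean and spread) from history."""
    return statistics.mean(login_hours), statistics.stdev(login_hours)

def is_anomalous(hour, baseline, threshold=3.0):
    """Flag logins more than `threshold` standard deviations from the norm."""
    mean, stdev = baseline
    return abs(hour - mean) / stdev > threshold

# Historical logins cluster around office hours (9am to 11am).
history = [9, 9, 10, 10, 10, 11, 9, 10, 11, 10]
baseline = build_baseline(history)

print(is_anomalous(10, baseline))  # False: routine morning login
print(is_anomalous(3, baseline))   # True: a 3am login triggers an alert
```

The point of the baseline is exactly what the article describes: the system never needs a list of known attacks, only a model of what “normal” looks like for that user.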
“AI is fast becoming cybersecurity’s most powerful ally,” says Mr Vasilev.
“In the future, it may be able to predict threats, automatically patch vulnerabilities and even develop new forms of defence we haven’t imagined yet.”
But as the same technology that strengthens defences also empowers attackers, Mr Vasilev warns that organisations must evolve alongside it — training teams to spot AI-powered threats, securing their own AI systems and using AI to defend against AI-driven attacks.
But there’s a major challenge: talent.
“Only about 12% of cybersecurity professionals have formal training in AI or machine learning,” notes Mr Vasilev.
“That’s a serious gap, because to succeed in cybersecurity today, you can’t just be a security expert — you need to be a junior AI expert, too. You need to understand different types of machine learning, know how AI tools work (and where they fall short) and stay up to date on the growing risks and rules around generative AI. It’s not optional anymore — it’s the baseline.”
And the pressure is mounting. Amid the global shortage of cybersecurity talent, many small businesses are also struggling to afford advanced AI-driven security tools. At the same time, an ever-evolving regulatory landscape demands continual upskilling, placing additional pressure on already stretched resources.
“It’s like running on a treadmill that never stops speeding up,” says Mr Vasilev. “Exhausting, but essential.”
A balancing act
For most businesses, data is their crown jewel — and when using AI, protecting it must be a top priority, explains Mr Vasilev.
“That means setting clear boundaries: confidential information shouldn’t be fed into AI tools and employees need guidance on what’s safe to share.
“It is crucial to raise awareness among your employees about the various methods cybercriminals use to gain access to sensitive information. It’s equally important to understand how models like ChatGPT function and how to interact with them responsibly to avoid issues such as AI poisoning.”
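One practical way to enforce “what’s safe to share” is to scrub obvious identifiers before text leaves the organisation, for example before an employee pastes it into a chatbot. The sketch below is a minimal regex-based redactor; the patterns and placeholder labels are illustrative only, and real data-loss-prevention tooling covers far more categories.

```python
import re

# Illustrative patterns only; production redaction needs far broader coverage.
PATTERNS = {
    "EMAIL": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "CARD": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
    "PHONE": re.compile(r"\b\+?\d[\d -]{8,}\d\b"),
}

def redact(text):
    """Replace likely-sensitive values with placeholders before the text
    leaves the organisation (e.g. before it is pasted into an AI tool)."""
    for label, pattern in PATTERNS.items():
        text = pattern.sub(f"[{label}]", text)
    return text

prompt = "Summarise this: contact jane.doe@example.com, card 4111 1111 1111 1111."
print(redact(prompt))  # Summarise this: contact [EMAIL], card [CARD].
```

A scrubbing step like this is deliberately crude: it errs on the side of redacting too much, which is the safer failure mode when the destination is an external AI service.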
AI is powerful — it can write, code, analyse and automate — and is helping innovate in industries from education to healthcare to finance. But it’s not infallible, says Mr Vasilev.
“Think of it less as a genius and more as a brilliant assistant: incredibly useful but requiring oversight.
“Used wisely, it can unlock huge potential – from helping us work faster and smarter to coming up with new ideas. Used carelessly, it can create serious risks.
“Ultimately, the more we use AI thoughtfully, the better we’ll get at leveraging its strengths while avoiding its weaknesses.”
Share of breaches that involved shadow data, showing the proliferation of data is making it harder to track and safeguard. (IBM)
The age of accountability
With over 170 data protection regulations proposed or enacted over the past two years, it’s clear that the age of AI accountability has begun — and the rules are tightening.
“Regulatory bodies worldwide are rolling out new frameworks and standards to rein in misuse and promote responsible AI,” says Mr Vasilev.
The EU’s AI Act — the first of its kind — took effect in August 2024 and will be fully enforced by 2026, targeting high-risk systems with stricter requirements.
In the US, the National Institute of Standards and Technology has developed a risk management framework to better manage AI-associated risks to individuals and organisations.
“Currently, there is no overarching regulation governing how AI should be developed, but specific applications — particularly those that pose potential risks — are coming under increasing scrutiny,” notes Mr Vasilev.
“Platforms like TikTok and Meta are already adapting by deploying tools to detect and label AI-generated content, helping users identify when something isn’t real.”
Calculated intelligence
As AI reshapes both the tools of defence and the tactics of attack, awareness, responsibility and smart implementation are critical.
The organisations that invest in understanding and guiding AI use — across cybersecurity and beyond — will be best placed to harness its potential while staying protected against its risks.
“The future of AI in business isn’t just about innovation; it’s about trust, vigilance and getting the balance right,” says Mr Vasilev.
The growth in voice phishing attacks between the first and second half of 2024. (CrowdStrike)