A “domain renewal” notice, complete with a logo, invoice number, and urgent call to action: “Your domain is about to expire — pay R99 to renew.”
At first glance, it seemed routine. But a closer look revealed the cracks.
It was a scam, but not a clumsy one. It was deliberate, structured, and carefully designed to look legitimate. And that’s exactly what makes today’s cyberattacks so dangerous.
The world of digital fraud has evolved beyond spammy subject lines and suspicious attachments. Welcome to the era of AI-powered scams, where algorithms craft messages more “human” than humans, and trust is the new currency that’s under attack.
The shift: From obvious scams to AI-enhanced precision deception
Just a few years ago, scam emails were easy to spot: bad grammar, broken logos, and unrealistic promises. But in 2025, artificial intelligence has rewritten the playbook.
Cybersecurity analysts at Securelist report that AI can now generate contextually relevant, grammatically perfect phishing emails in seconds, making them far harder to detect and far more likely to succeed.
The scam that reached our inbox wasn’t random. It was algorithmically designed to hit the right nerves: urgency, familiarity, and fear of disruption.
The global rise of AI-assisted attacks has created an entirely new category of cyber deception.
Here are some of the most sophisticated (and fastest-growing) forms of digital fraud shaping 2025’s threat landscape:
Voice cloning and deepfake calls
Using AI-generated voices, scammers can now replicate the tone and speech patterns of real people – even known colleagues or executives. Combined with spoofed caller IDs, these attacks sound authentic enough to convince employees to share passwords or authorise transfers. Recent reports from LinkedIn and SABI show an increase in voice-cloned calls specifically targeting financial departments and high-value accounts.
Clone phishing
As UpGuard explains, attackers now “clone” legitimate emails, duplicating layout, sender name, and tone, but replacing attachments or links with malicious versions. When it lands in your inbox as a “follow-up,” it feels authentic because it is based on a real message or conversation.
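A simple heuristic that catches many cloned emails is checking whether a link’s visible text and its actual destination agree. Below is a minimal, stdlib-only Python sketch of that idea; the LinkAuditor class, sample HTML, and domains are all invented for illustration:

```python
# Hypothetical sketch: flag links in an HTML email body whose visible text
# shows one domain while the underlying href points somewhere else -- a
# common tell in clone phishing. Stdlib only; not a complete mail parser.
from html.parser import HTMLParser
from urllib.parse import urlparse


class LinkAuditor(HTMLParser):
    def __init__(self):
        super().__init__()
        self._href = None      # href of the <a> tag we are currently inside
        self._text = []        # visible text collected inside that tag
        self.suspicious = []   # (visible_text, actual_href) pairs

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            self._href = dict(attrs).get("href", "")
            self._text = []

    def handle_data(self, data):
        if self._href is not None:
            self._text.append(data)

    def handle_endtag(self, tag):
        if tag == "a" and self._href is not None:
            text = "".join(self._text).strip()
            shown = urlparse(text).netloc or text  # text may itself be a URL
            actual = urlparse(self._href).netloc
            # Flag when the visible text looks like a domain that does not
            # match where the link actually goes.
            if "." in shown and actual and not actual.endswith(shown):
                self.suspicious.append((text, self._href))
            self._href = None


# Invented example: the link text shows one domain, the href another.
html_body = '<p>Invoice ready: <a href="https://evil.example/pay">portal.yourhost.com</a></p>'
auditor = LinkAuditor()
auditor.feed(html_body)
print(auditor.suspicious)  # [('portal.yourhost.com', 'https://evil.example/pay')]
```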
Fake AI tools and counterfeit platforms
As the Google Blog warns, scammers are leveraging the popularity of AI tools by creating fake versions of platforms like ChatGPT, Midjourney, and Gemini. Victims are tricked into downloading malicious apps or handing over their credentials on counterfeit login pages that look pixel-perfect.
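One low-tech defence is an allowlist check: before trusting a link that claims to be an official AI tool, compare its hostname against the genuine domains. A minimal Python sketch follows; the allowlist entries are illustrative examples, not an authoritative list:

```python
# Hypothetical sketch: accept a URL only if its host is (a subdomain of)
# a known-good domain. The entries below are examples, not a vetted list.
from urllib.parse import urlparse

OFFICIAL = {"openai.com", "chatgpt.com", "midjourney.com", "gemini.google.com"}


def looks_official(url: str) -> bool:
    host = (urlparse(url).hostname or "").lower()
    # Accept the domain itself or any subdomain of an allowlisted entry.
    return any(host == d or host.endswith("." + d) for d in OFFICIAL)


print(looks_official("https://chatgpt.com/"))                # True
print(looks_official("https://chatgpt-free-download.app/"))  # False
```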
MFA fatigue and prompt bombing
AI-driven “prompt bombing” and social engineering are enabling scammers to manipulate users into approving fraudulent MFA requests. Securelist highlights a surge in multi-stage attacks where criminals exploit user fatigue or confusion to bypass MFA altogether.
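On the defensive side, identity teams can blunt prompt bombing by rate-limiting push notifications per account. The sketch below shows the idea in Python; the thresholds (three pushes per five minutes) are illustrative assumptions, not recommendations:

```python
# Hypothetical sketch: throttle MFA push approvals to blunt "prompt bombing".
# If more than MAX_PROMPTS pushes are requested for one account inside
# WINDOW seconds, stop sending and alert instead of prompting again.
import time
from collections import defaultdict, deque

MAX_PROMPTS = 3
WINDOW = 300  # seconds (illustrative threshold)

_recent = defaultdict(deque)  # account -> timestamps of recent pushes


def allow_push(account: str, now: float | None = None) -> bool:
    """Return True if another MFA push may be sent for this account."""
    now = time.time() if now is None else now
    q = _recent[account]
    while q and now - q[0] > WINDOW:   # drop events outside the window
        q.popleft()
    if len(q) >= MAX_PROMPTS:
        return False                    # likely prompt bombing: alert, don't push
    q.append(now)
    return True


# Four rapid requests: the fourth is blocked.
print([allow_push("alice", now=t) for t in (0, 10, 20, 30)])
# [True, True, True, False]
```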
Whaling: attacks on the C-suite
Attacks are no longer limited to rank-and-file employees: cybercriminals are increasingly targeting C-level executives, directors, and founders. These “whaling” attempts mimic high-value internal communications (think transfer requests or vendor approvals) using insider language and corporate branding.
Biometric data theft
In 2025, personal data isn’t the only target; voiceprints, signatures, and facial scans are now being stolen and sold on the dark web. Unlike passwords, biometric data can’t be reset, making these breaches permanent and far more dangerous.
Abuse of trusted domains
Scammers are increasingly hiding behind trusted domains like Google Translate, Telegraph, and Pastebin to make malicious links appear legitimate. On Google Translate, for example, fake sites are “wrapped” in a Google URL, while Telegraph hosts cloned login pages that look official. Pastebin, often used by developers, is now repurposed to store stolen data or host malware links. By exploiting the credibility of these platforms, attackers bypass spam filters and user suspicion with alarming ease.
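Filters can counter this by judging the wrapped destination rather than the trusted wrapper. Google Translate proxy hosts, for instance, follow a predictable rewriting scheme (dots in the real hostname become hyphens, literal hyphens become double hyphens) that can be reversed. A Python sketch assuming that scheme, which should be verified against current Google behaviour:

```python
# Hypothetical sketch of unwrapping a Google Translate proxy URL back to the
# real destination, so a filter can judge the underlying domain instead of
# the trusted wrapper. Assumes the *.translate.goog rewriting scheme
# ("." -> "-", "-" -> "--"); verify against current behaviour before use.
from urllib.parse import urlparse


def unwrap_translate_proxy(url: str) -> str | None:
    """Return the original hostname hidden behind a translate.goog URL."""
    host = urlparse(url).hostname or ""
    if not host.endswith(".translate.goog"):
        return None
    wrapped = host[: -len(".translate.goog")]
    # Undo the rewrite: "--" stood for a literal "-", single "-" for ".".
    placeholder = "\x00"
    original = (wrapped.replace("--", placeholder)
                       .replace("-", ".")
                       .replace(placeholder, "-"))
    return original


print(unwrap_translate_proxy(
    "https://secure--login-example-com.translate.goog/verify?_x_tr_sl=auto"
))
# secure-login.example.com  <- the domain a filter should actually evaluate
```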
Every successful scam relies on the same three emotional triggers: urgency, familiarity, and fear of disruption.
AI has made it possible to personalise these triggers to each recipient. By analysing your public LinkedIn posts, email tone, or even writing style, attackers can now craft bespoke communication that mirrors your daily interactions.
In our own case, the scammer used all three: a convincing logo and invoice number for familiarity, a looming expiry date for urgency, and the threat of a lapsed domain for fear of disruption.
Scams succeed not because we are careless, but because they’re engineered to look like what we trust most: normality.
Defending against AI-enhanced scams requires moving beyond awareness to structured vigilance. Here’s how businesses can strengthen their digital resilience:
Verify before you pay or approve
Every invoice, transfer request, or vendor update should go through an independent verification process via a known phone number or secure portal.
Strengthen email authentication
Enable DMARC, SPF, and DKIM to prevent domain spoofing. Use AI-based email filters that detect context anomalies, not just known threats.
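Presence of these records is easy to audit. The sketch below uses the third-party dnspython package (pip install dnspython) to check whether a domain publishes SPF and DMARC records; a real audit would also inspect DKIM selectors and the policy contents, and "example.com" is a placeholder:

```python
# Minimal sketch using dnspython to check whether a domain publishes SPF and
# DMARC TXT records. Presence alone is not a full audit: DKIM needs a
# per-sender selector, and policy contents matter as much as existence.
import dns.resolver


def txt_records(name: str) -> list[str]:
    try:
        answers = dns.resolver.resolve(name, "TXT")
    except (dns.resolver.NXDOMAIN, dns.resolver.NoAnswer):
        return []
    return [b"".join(r.strings).decode() for r in answers]


def audit(domain: str) -> None:
    spf = [r for r in txt_records(domain) if r.startswith("v=spf1")]
    dmarc = [r for r in txt_records(f"_dmarc.{domain}") if r.startswith("v=DMARC1")]
    print(f"{domain}: SPF {'OK' if spf else 'MISSING'}, DMARC {'OK' if dmarc else 'MISSING'}")
    # A permissive policy like p=none monitors but does not block spoofing.
    if dmarc and "p=none" in dmarc[0]:
        print("  note: DMARC present but policy is p=none (monitor only)")


audit("example.com")  # placeholder domain
```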
Make awareness continuous
Cybersecurity isn’t a one-time training session. Create monthly awareness reminders, simulated phishing tests, and quick guides for new scam trends.
Adopt a zero-trust mindset
Trust nothing by default – even internal communication. Every request should be verified, authenticated, and traceable.
Protect your executives
Implement “executive protection” protocols for senior staff, who are prime targets for whaling and voice-cloning.
Limit your public data footprint
Regularly audit what personal or corporate data is publicly available online. AI can only exploit what it can access. Controlling exposure limits risk.
The line between legitimate and fraudulent communication is blurring. The tell-tale signs that once gave scams away – dodgy email addresses, spelling errors, strange URLs – are no longer enough to judge credibility.
The scammers are evolving, and so must we.
At We Do Digital, our work depends on digital trust – from protecting client data to identifying malicious attempts before they reach your inbox. The best defence isn’t paranoia; it’s awareness, process, and proactive adaptation.
The scam we received wasn’t unique. It was simply a sign of the times. But it reminded us that vigilance, not fear, is what keeps digital ecosystems safe.
At We Do Digital, we don’t just optimise brands for visibility; we help protect their digital integrity too. Let’s make your online presence both powerful and safe. Get in touch.