How Cyber Criminals are Using AI (and How We can All Stay Safer)

Artificial intelligence technology can do a lot of really fascinating and useful things. But unfortunately, criminals are discovering that they can use AI to quickly and easily create targeted attacks on both individuals and businesses. And these aren’t always full-scale assaults, either. More often, they’re steady “micro-attacks” that slip under the radar. This means that the relationship between AI and cybersecurity is more important than ever.
See AI Doomsday vs. A Very Bad Day with Dr. Robert Blumofe for a complete transcript of the Easy Prey podcast episode.
Dr. Robert “Bobby” Blumofe has a PhD in computer science from MIT and is the Chief Technology Officer (CTO) at Akamai Technologies. Akamai is the company that invented the content delivery network, which is a fundamental part of how the internet operates today. They still work on that, as well as cybersecurity and cloud computing. As the CTO, Bobby works in all three areas.
He has been at Akamai for twenty-five years. Tom Leighton, the founder and CEO, had been one of his professors when he was a graduate student at MIT. Several of his friends from MIT also joined the company. Bobby knew it was a smart, motivated, exciting group of people there at what would become Akamai. He didn’t know exactly what the company did, but he figured if he stayed with that group, something good would happen. And he was right. If you surround yourself with great people you enjoy working with, something good will come out of it.
Akamai, Cybersecurity, and AI
Akamai actually started in cybersecurity based on customer requests. Customers recognized that because Akamai delivered their content, it could see the traffic going to and from their websites. Attacks were getting more frequent, and customers realized Akamai could see those attacks and potentially block them. So the company started doing bespoke security work for a handful of customers. About ten years ago, they launched cybersecurity as part of their business. They started with DDoS protection and firewalls and expanded from there. More recently, they’ve started doing zero trust security and API security.
The dawn of deep learning was in the early 2010s. Since then, advancements in AI have been remarkable. Akamai is a major user of AI because it’s the perfect tool for classifying things like traffic. They use it in their security tools to determine what’s normal and what’s malicious, what’s human and what’s a bot, and more. Now they’re also using generative AI and large language models to make it easier to set up security products, get insights into threats, and label assets in an environment.
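To make the classification idea concrete, here is a minimal sketch of a bot-versus-human traffic classifier. The features, data, and model are invented for illustration only; Akamai’s real systems use far richer signals at vastly larger scale.

```python
# Illustrative only: a toy bot-vs-human traffic classifier.
# Features and data are made up for this sketch.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Hypothetical per-session features: [requests per minute, distinct URLs
# visited, average seconds between clicks]. Bots tend to be fast and repetitive.
humans = np.column_stack([rng.normal(8, 3, 500),
                          rng.normal(12, 4, 500),
                          rng.normal(6.0, 2.0, 500)])
bots = np.column_stack([rng.normal(120, 30, 500),
                        rng.normal(3, 1, 500),
                        rng.normal(0.3, 0.1, 500)])

X = np.vstack([humans, bots])
y = np.array([0] * 500 + [1] * 500)   # 0 = human, 1 = bot

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Classify a new session: 90 requests/minute, 2 URLs, 0.4 s between clicks.
print(clf.predict([[90, 2, 0.4]]))    # -> [1], flagged as bot-like
```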
Bobby is surprised how quickly AI has gone from something that didn’t affect most enterprises to a must-have technology. And the real value in AI, for cybersecurity or anything else, is the quality and quantity of the data. There’s very little differentiation between the different models available. But if you have a lot of high-quality data, you can train a fantastic AI. That’s one advantage Akamai has – they see so much internet traffic on a daily basis that they can train AI models really well.
Criminals are Using AI
We’re seeing the very early stages of criminals using AI. And that’s because we’ve really seen an evolution in cyber crime. It’s arguably only within the last five years that we’ve seen the emergence of the cybercriminal who’s in it for the money. Not too long ago, except for nation-states, the biggest threat actors were hacktivists – people and organizations hacking to make a point or a political statement. It’s only recently that the dominant cybercriminals have been in it just for the money. But that shift has brought a new level of competence to the criminals and a new viciousness to their attacks.
We’re moving into a new era of cyber-criminality with AI.
Dr. Robert Blumofe
Ransomware is an easy way to turn cyber activity into money, which is why we’ve seen the rise of ransomware and related attacks. Some cybercriminals have already discovered the potential of AI to defeat cybersecurity measures and help them quickly and easily create a huge volume of attacks. But in the next five years, we’re going to see even more of them adopting it. Bobby jokes that if you’re a cybercriminal, the greatest day of your professional life was November 30, 2022 – the day ChatGPT was announced. ChatGPT may not be their weapon of choice, but it opened up a whole new class of tools that can do amazing things for cybercriminals.
Why AI is Such a Good Tool for Crime
AI is essentially a super-weapon if you’re a criminal.
Dr. Robert Blumofe
AI is a great tool to help cybercriminals create fake content, mimic people, create misinformation, and launch spear phishing and social engineering attacks. And worse, it can help them do it all at scale. Spear phishing has always been effective. But historically it required a lot of work and research to determine the targets and craft the perfect custom lures.

With AI, we’re now in a world where spear phishing attacks and other attacks can be created en masse. Criminals no longer have to put in hard work to create one detailed attack. Now they can create millions of micro-attacks. And they don’t need any single attack to work. If a few, or even most, of their micro-attacks fail, that’s fine – they have millions of other ones, and some are bound to succeed.
It’s a Numbers Game
Everyone’s seen the headlines where someone falls for a deepfake asking them to do something. The voice on the phone sounds like their boss or the face on the video call looks like their boss, and they end up doing something they shouldn’t have done. But these attacks don’t have to be high-profile. For the criminals, it can be millions of small attacks, and all it takes is a handful of them to succeed to make it worth the criminal’s time.
It doesn’t have to be high profile attacks. It can be millions and millions of very small attacks. All it takes is for a handful of them to be successful.
Dr. Robert Blumofe
We’re only seeing the beginning. And a lot of what we will be seeing is small activity. It’s also moving from the enterprise to the social level. Cybercriminals can use the same techniques for misinformation. Any one campaign might have no impact. But if you do enough of them, each with a small impact, they can cause a dramatic shift. And these millions of micro-attacks aren’t created by a group of criminals behind keyboards. ChatGPT can generate a thousand posts on a misinformation topic in mere seconds.
Cybersecurity Against AI Misinformation
There’s no easy social-level cybersecurity solution to protect against AI misinformation attacks. Bobby believes most technology and social media companies want to do the right thing. But with AI making it so easy to scale, they have too many moles to whack. As much as they want to do the right thing, it’s hard to defend against the sheer quantity that cybercriminals can produce with AI.
It’s on us as people who read, watch, and listen to know that we can’t always be sure where something is coming from. Take care and be on guard. Can you determine if this message is coming from a reputable source? Does this phone call or video have any authentication on the other end? How do you know that voice that sounds like your son is really your son?
Education really makes a difference here. AI is everywhere, and our personal cybersecurity depends on some level of AI literacy. Arthur C. Clarke said that any sufficiently advanced technology is indistinguishable from magic, and that’s a problem when it’s a technology that’s affecting all of our lives. Banning it isn’t the solution. We need to go under the magic and reveal the secrets of how it actually works.
People need to have some idea of what AI is, what it’s capable of, what it’s not capable of, and therefore be informed consumers.
Dr. Robert Blumofe
The Problems with AI
Any AI is only as good as its training data. And an AI trained on the entire internet may not be as good as you think. If someone writes a political fanfiction on their website and the trainer doesn’t know it’s fanfiction, the AI could think it’s real and state it as fact. It becomes a case of “It must be true, I saw it on the internet.”
AI is a great technology, but it’s also been misused and overhyped. Bobby struggles with this because he loves AI, but people keep applying it blindly, thinking it can solve every problem. That’s not the way to think about it. He hopes that over the next few years, the hype will calm down to somewhere reasonable and we will put guardrails and protections in place to use it the way it should be used. And there should always be a human in the middle. Most of us are familiar with the term AI hallucinations now – an AI often gives misinformation with great confidence, and will back it up with supporting information that’s also wrong. You can use the tech, but make sure to check the output.
How People Misuse AI
The biggest misuse of AI is in situations where AI may be the right solution, but a deep learning model would be better than a large language model (LLM). LLMs are trained on a huge amount of data. Some of it helps the model form sentences and paragraphs that make sense. Some of it is massive amounts of “knowledge.” It’s not knowledge like a human would have – it’s more like a huge database of connections between words.
Because of their training, LLMs know every movie ever made, everyone who starred in them, every president, actor, and general ever recorded, every car ever manufactured, and more. That’s part of why they’re so big – they encode all the “knowledge” on the internet. This makes them very useful if you want a general answer, similar to how we use Google. But if you’re trying to solve a particular problem, such as an insurance company trying to figure out a problem in actuarial data, do you really need an LLM that knows the full cast of Mork & Mindy to solve that?
It’s using megawatts to solve a problem that could be solved with milliwatts. Not only does it cost a lot, you’re throwing a lot of energy into solving a problem that could be solved with a much smaller model. A pile driver can hammer anything, but most often you’re hammering in small nails that could be driven with a regular hammer. Using a pile driver for all those tiny nails is a huge waste of energy and money.
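As a toy illustration of the “regular hammer,” here is a sketch of a small model handling a narrow, structured problem like the insurance example. The features, data, and thresholds are invented; the point is simply that nothing about this kind of problem calls for an internet-scale language model.

```python
# Toy illustration: a tiny model for a narrow, structured problem.
# Data and features are synthetic, made up for this example.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)

# Hypothetical policy records: [driver age, annual mileage (thousands),
# prior claims]. Label: whether a claim was filed this year.
X = np.column_stack([rng.integers(18, 80, 2000),
                     rng.normal(12, 5, 2000),
                     rng.poisson(0.3, 2000)])
# Synthetic ground truth: younger drivers with prior claims are riskier.
risk = 0.04 * (45 - X[:, 0]) + 1.2 * X[:, 2] + rng.normal(0, 1, 2000)
y = (risk > 0.5).astype(int)

model = LogisticRegression(max_iter=1000).fit(X, y)

# Estimated claim probability for a 22-year-old, 15k miles, 2 prior claims.
print(model.predict_proba([[22, 15.0, 2]])[0, 1])
```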
How AI can Help in Cybersecurity
If AI is a super-weapon for criminals, it seems reasonable that AI can also be a super-weapon for cybersecurity. And it is an important weapon for cybersecurity defense. But there’s also an asymmetry in AI. Right now, it’s helping the criminals more than it’s helping the defenders.
At least as things stand today, the technology favors the attacker.
Dr. Robert Blumofe
AI is an important part of cybersecurity defense. But it’s not a silver bullet. Bobby wouldn’t be surprised to see AI snake oil at some point. To most people, AI is akin to magic. So unscrupulous salespeople might make claims about new AI products solving all your cybersecurity problems, or even other types of problems. Be wary of any claims that AI will solve all of any problem.

Ultimately, when it comes to defending against attacks, it’s important to go back to the basics and do them really well: strong identity, strong authentication, multi-factor authentication. Zero trust is important because in many ways it’s about visibility. Cybersecurity is often a game of visibility, with the goal of increasing your own visibility while denying it to attackers. They can’t attack what they can’t see.
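As one concrete example of those basics, here is a minimal sketch of verifying a time-based one-time password (TOTP, RFC 6238) as a second factor. It’s illustrative only; a real deployment should use a vetted library and handle clock drift, rate limiting, and secret storage.

```python
# Minimal TOTP (RFC 6238) sketch for a second authentication factor.
# Demo secret only; never hard-code real secrets.
import base64, hashlib, hmac, struct, time

def totp(secret_b32: str, interval: int = 30, digits: int = 6) -> str:
    key = base64.b32decode(secret_b32)
    counter = struct.pack(">Q", int(time.time()) // interval)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

def verify_second_factor(secret_b32: str, submitted_code: str) -> bool:
    # Constant-time comparison of the expected and submitted codes.
    return hmac.compare_digest(totp(secret_b32), submitted_code)

SECRET = "JBSWY3DPEHPK3PXP"  # demo value
print(totp(SECRET), verify_second_factor(SECRET, totp(SECRET)))
```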
AI Doomsday Probably Isn’t Coming
Some well-known people are claiming that AI could bring about a doomsday scenario. But Bobby thinks that’s way too sci-fi. For one thing, it’s giving too much credit to LLMs and other forms of AI. If you think about what an AI is doing in a doomsday scenario, or even something like the Terminator movies, it requires a lot of planning. And LLMs are notoriously bad at planning. They can often give answers that look like planning, the same way they can often give the right answers to math problems – they can’t actually plan or do math, but they can make it look like they can.
Bobby doesn’t see how an AI is going to be able to create a Terminator-like situation without a lot of planning. And we’re pretty far from an AI that can do that. But he does worry that we’re paying too much attention to doomsday scenarios and not enough attention to very bad day scenarios. We’re living in a world with micro-attacks happening all the time. We’re under attack almost constantly, and we’re going to see those attacks become even more common and more successful with AI. With so much attention on doomsday, we don’t have the attention available to focus on these other attacks.
Protect Yourself from A Very Bad Day
You don’t have to be a cybersecurity genius to protect yourself from many of the AI-based threats out there. The most important thing is to be aware. Phone calls and video calls are pretty easy to spoof. If someone calls you, question what’s being asked. If it’s just a benign conversation, you don’t need to jump through hoops to verify anything. But if it’s your son claiming he’s in jail and you need to wire $10,000 to this account for bail, it’s time to do some verification.
There are a number of things you can do. You can hang up the call and call them back on a number you know to make sure it’s really them. You could ask secret questions. A lot of our information is publicly available, but there are also things you could ask your son in this scenario that a hacker using AI wouldn’t be able to find on the internet. This can give you an indicator that you’re dealing with the right person. Some people even set up code words with their family members to easily determine if they’re talking to the real person.
In businesses, make sure there are appropriate processes in place so a single phone call can’t trigger a funds transfer, configuration change, or anything else damaging. You want to make sure the only way these changes can happen is through a strongly authenticated mechanism. Many enterprises have these processes in place already around their finances. But they need to be in place around everything critical, not just accounting and payments.
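Here is a minimal sketch of that kind of process control: a funds transfer only executes after two distinct approvers, neither of them the requester, sign off through the official system. The names, amounts, and threshold are invented for the example; in practice this logic lives in your payment or ticketing platform.

```python
# Sketch of the "no single phone call" rule: dual approval before a transfer.
from dataclasses import dataclass, field

@dataclass
class TransferRequest:
    amount: float
    destination: str
    requested_by: str
    approvals: set[str] = field(default_factory=set)

def approve(req: TransferRequest, approver: str) -> None:
    # The requester can never approve their own transfer.
    if approver == req.requested_by:
        raise ValueError("Requester cannot approve their own transfer")
    req.approvals.add(approver)

def can_execute(req: TransferRequest, required_approvals: int = 2) -> bool:
    # The request itself never counts as approval, no matter how urgent
    # or convincing the phone call behind it sounded.
    return len(req.approvals) >= required_approvals

req = TransferRequest(10_000.00, "ACME-VENDOR-0042", requested_by="pat")
approve(req, "casey")
print(can_execute(req))   # False: still needs a second approver
approve(req, "jordan")
print(can_execute(req))   # True: two independent approvals recorded
```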
The Importance of Cybersecurity Education in a World of AI Threats
Most enterprises do phishing training for their employees. They provide tips and red flags to spot phishing lures and fakes. That training is worth doing. But inside the enterprise and outside, it’s important to expand that cybersecurity education. With AI, those red flags might not be there. You might not be able to tell a phishing message from a real one because there may not be any warning signs.
Even if those telltale signs aren’t there, even if it’s perfect, it may still be fake.
Dr. Robert Blumofe
A lot of sales and marketing is about reducing friction. But friction isn’t always bad. In fact, it can be a good thing – you couldn’t walk without it, and you need it in certain areas. We don’t want to put friction where it isn’t needed. There’s no need to go through a five-step verification process when a friend calls you for a casual chat. But when it involves money, business changes, or sensitive info, it’s time for friction. It’s necessary.
Everyone needs education on cybersecurity in an AI world. We all need to know how it works and what it can and can’t do so we don’t get tricked by false claims. And we need to know the kind of threats it produces and take steps to defend ourselves even if we can’t spot those threats.
Connect with Dr. Robert Blumofe on Twitter @robertblumofe or on LinkedIn. He tries to post regularly in both places.