
AI & Cybersecurity: Should We Be Worried?


When you think of artificial intelligence (AI), your mind might rush to the dark futures of The Terminator franchise or HAL 9000 in 2001: A Space Odyssey. When will super-smart machines destroy us all? Countless movies have explored the perils of machines developing self-awareness. How safe is AI in reality, and how does it play into cybersecurity?

We don’t need to be afraid of AI

First, there’s no need for paranoia about AI. Yes, two Facebook negotiation bots once started communicating in a language only they could understand, but that project was scrapped because it was unsuccessful, not because the bots were conspiring to take over the world. AI is nowhere near the level where we should be afraid of it.

Author and researcher Janelle Shane argues that the danger of AI lies more in human error. In her TED talk, she explained that present-day AI is roughly at the level of an earthworm or a honeybee. The real dangers are tied to what the AI cannot understand. The threat isn’t humanity being overtaken by machine overlords, but AI doing exactly what humans ask it to do… literally.

AI needs to be directed. It can only operate based on the data sets it’s given and the parameters of the request. Ask an AI to drive a car and it will drive a car. That doesn’t mean it will keep the passengers alive or follow the rules of the road, not unless it’s been properly “educated.”

The danger of present-day AI is that it functions like a genie. As in the stories, the wish you make must be very specific; if it isn’t, it can blow up in your face. It’s far more likely that an AI will bungle something because it wasn’t given adequate information than that it will turn on us. Returning to the driving example: if you don’t tell the AI to follow traffic laws, to keep the passengers alive, and how fragile the human body is, it will simply take your request and run with it. “Drive the car” could mean driving a hundred miles per hour, cutting through buildings to reach the destination faster, or mowing down people in an open-air market. The key to AI is giving it the correct requests and data sets.
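The genie problem above is really a problem of misspecified objectives. Here is a minimal, hypothetical sketch: a toy route planner that minimizes only travel time will happily pick the route through the market, until rule violations are made part of the cost it optimizes. All route names and numbers are invented for illustration.

```python
# Invented routes: minutes of travel time and count of traffic-rule violations.
routes = {
    "highway":            {"minutes": 12, "violations": 0},
    "through_the_market": {"minutes": 4,  "violations": 9},
    "side_streets":       {"minutes": 18, "violations": 0},
}

def pick_route(routes, violation_penalty=0):
    # Score = travel time plus a penalty per rule broken.
    # With violation_penalty=0, the rules are invisible to the optimizer.
    def cost(name):
        r = routes[name]
        return r["minutes"] + violation_penalty * r["violations"]
    return min(routes, key=cost)

print(pick_route(routes))                         # "drive the car", no rules stated
print(pick_route(routes, violation_penalty=100))  # rules are now part of the wish
```

The AI never misbehaves here; it optimizes exactly the objective it was handed. The fix is not a smarter optimizer but a better-specified wish.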

How can AI help with cybersecurity?

AI can be highly beneficial to cybersecurity. It works in a way humans simply cannot because it has literal insider knowledge of your systems: it can assess threats and vulnerabilities more readily and respond to them automatically.

By its nature, AI will keep working at a problem until it solves it, and that approach requires less manpower. If a designated person has to assess every threat or deal with every issue, the resulting delays can increase the damage of an attack. AI can deal with attacks in real time, before they escalate into issues that require human intervention.

AI’s ability to notice anomalies can be vital. It can be used for penetration testing, identifying network vulnerabilities, spotting anomalies in code, keyword matching, and scanning for suspicious behavior. It can log strange user activity or scan emails for potential malware and phishing links.
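The keyword matching and email scanning mentioned above can be sketched in a few lines. This is a deliberately crude, hypothetical filter, not a production tool: the phrases and the URL pattern are invented for illustration, and real systems layer statistical models on top of rules like these.

```python
import re

# Illustrative phishing cues; a real filter would use far richer signals.
SUSPICIOUS_PHRASES = [
    "verify your account",
    "urgent action required",
    "password expired",
]
URL_PATTERN = re.compile(r"https?://\S+")

def score_email(body: str) -> int:
    """Return a crude suspicion score: 1 point per phrase hit, 1 per raw link."""
    text = body.lower()
    score = sum(phrase in text for phrase in SUSPICIOUS_PHRASES)
    score += len(URL_PATTERN.findall(body))
    return score

email = "URGENT action required: verify your account at http://example.test/login"
print(score_email(email))
```

Anything scoring above a threshold would be flagged for quarantine or review; the value of adding machine learning is that the "phrases" are learned from millions of real phishing emails rather than hand-written.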

AI can also be used in antivirus software, where it can battle bots in real time and function as an immune system against viruses. It can track the evolution of threats, suspicious traffic, and system abnormalities tied to nefarious activity, and it can watch your network in a way a human may not be able to. By noticing the subtlest irregularities, it keeps you more aware of any problematic or nefarious activity.

What are the issues with AI in cybersecurity? 

There are a few challenges to using AI in cybersecurity. For one, very few qualified individuals are able to offer support; there’s a gap between the excitement to adopt AI and the number of people qualified to support it. Programming the AI and properly setting it to its task is crucial, and if that’s done poorly you could end up with more problems than solutions. You also need someone to monitor that the AI is actually behaving as it’s been asked to.

While there aren’t tons of qualified AI specialists, the pool is growing, and this is a promising area for those interested in careers in cybersecurity.

Can AI be hacked? 

One of the dangers of AI is that you can’t just set it and forget it. You need qualified people monitoring its progress and maintaining its data. If you don’t, hackers can capitalize on the AI by corrupting its training procedures or data sets.

That training data can be hacked, which can drastically change what the AI does. In the genie metaphor, a hacker can tamper with the “wish” itself, causing the AI to start doing something else entirely, like sending out credit card information while avoiding detection.
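Data poisoning can be shown with a toy model. The sketch below, with entirely invented numbers, trains a nearest-centroid classifier to label network events "normal" or "malicious". An attacker who can edit the training set relabels the malicious samples and plants a decoy, and the same attack traffic now slips through as "normal". Real poisoning attacks are subtler, but the mechanism is the same.

```python
def centroid(points):
    return sum(points) / len(points)

def train(data):
    # data: list of (feature_value, label); the model is one centroid per label.
    labels = {label for _, label in data}
    return {lbl: centroid([x for x, l in data if l == lbl]) for lbl in labels}

def predict(model, x):
    # Classify by whichever label's centroid is closest.
    return min(model, key=lambda lbl: abs(model[lbl] - x))

clean = [(1.0, "normal"), (2.0, "normal"), (8.0, "malicious"), (9.0, "malicious")]
print(predict(train(clean), 8.5))  # the attack traffic is correctly flagged

# The attacker relabels the malicious examples as "normal"...
poisoned = [(x, "normal") if l == "malicious" else (x, l) for x, l in clean]
poisoned += [(1.5, "malicious")]   # ...and plants a harmless-looking decoy
print(predict(train(poisoned), 8.5))  # the real attack now looks "normal"
```

This is why the article stresses monitoring: the model's code never changed, only its data, so only someone auditing the training set would catch the corruption.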

How can hackers use AI?

The biggest issue with AI in cybersecurity is that hackers can use AI, too. This creates a bit of a digital arms race as businesses are working to advance their own AI efforts while also shoring up their defenses. 

It’s easy enough for hackers to set their own AI to review user behavior, assess vulnerabilities, and learn the schedule for updates and patches. They can set AI to look for entry points into networks, open data ports, or weak spots in code. We’ve all seen movie hackers use gadgets and programs to crack passwords; AI can be used to do just that.

AI can also be used to craft more personal social engineering efforts. It can personalize spear-phishing emails, coded links, or individual responses. It can even be used to make deepfake phone calls or mimic voices. It’s a lot easier to get you to download malware if you’re having a conversation with someone, or if “they” have spent months reviewing your conversations and habits.

AI can also be used to give malware that little something extra. This could be malware that relentlessly attacks software patches, or malware that works secretly to collect information and then launch an intelligent attack. AI can even help make malware that mimics trusted software components. That means your software would function as normal until this malware component reveals itself. AI can also be set to develop more advanced ransomware. After all, AI can spend time behind the scenes collecting data and preparing for its inevitable attack.

Concerns over AI are natural. What’s more dangerous than AI “coming to life” like in some 1980s movies is the general lack of understanding about how it functions. Human error or underestimating the potential of AI are the bigger issues. This can put a business at a security disadvantage as hackers are quickly familiarizing themselves with the potential applications of AI. There’s no need for paranoia, simply more education and preparation.
