AI Implementation Considerations for Safety and Security

Artificial intelligence has a huge variety of uses. This means many companies are considering, or have already started, AI implementation in their business. But AI’s uses also include threats and cyberattacks. It’s important to implement AI thoughtfully and carefully to best keep your systems secure.
See Safe AI Implementation with Aditya Sood for a complete transcript of the Easy Prey podcast episode.
Dr. Aditya Sood is the VP of Security, Engineering, and AI Strategy at Aryaka and has been in the security industry for the last seventeen years. He has a PhD from Michigan State University and has had a variety of positions in the security space, including Senior Director of Threat Research and Security Strategy for the Office of the CTO at F5, Director of Cloud Security for Symantec, and consulting for other companies and startups. His interest in security started in the early 2000s, when the Hacking Windows XP book came out. Since then, he’s seen the many different ways cybersecurity has evolved and is still learning and contributing to the success of the security community.
Why Proactive AI Safety is Important
AI has been in use for quite a while, but in a more constrained manner. Recent evolutions in the technology has brought much wider AI implementation and much more acceptance. When the cloud first came to exist, we developed new technologies based on that. The same thing is happening with AI. And with the integration of AI and cloud technology, the attack surface is expanding.
To use AI, you have to deploy AI workloads for training, pre-processing, inference, and more. For workloads to work effectively, you need AI pipelines and supply chains. And since AI can process petabytes of data, complexity increases. A proactive approach to cybersecurity requires factoring in these additional components to your threat model. But by factoring the changing attack surface into your threat model, you can ensure that you have a strong security profile that can handle risks and attacks against your AI.
At the end of the day, the attack surface is changing. When you have to do the threat modeling for all these components … it enhances the existing sphere.
Dr. Aditya Sood
Risks and Threats in AI Implementation
AI implementation can be done in two ways. You can get a pre-trained model or get a model that you have to train yourself. Some pre-trained models can be imported from online repositories. If there’s a backdoor already in that model, you inherit it. If you deploy it without validation and verification, that backdoor, or any other malicious code that might be in it, can let criminals steal your data, implement a ransomware attack, and more. Handling the supply chain around your AI implementation is one of the most important considerations.
When you deploy AI into your production environment, you also have to consider what risks it introduces. One of those is AI drift. An AI needs data to do its work effectively, and that data has to be updated at regular intervals. If the data gets stale, the AI doesn’t know how to react to new input or detect new patterns and can start to “drift” away from what it’s supposed to do. AI hallucinations often happen when the training data isn’t updated or when it’s biased towards certain concepts or components. Having quality training data is essential.
Additionally, attackers are always trying out new techniques. Inference attacks can help them figure out what data an AI has been trained on and what that means for how it works. Data poisoning happens when attackers find a way to put malicious data into the AI’s training set and train the model to do things you don’t want. And prompt injection, where attackers figure out how to bypass the AI’s safety mechanisms and do malicious things, is always a risk.
This is all an arms race … attackers are developing new techniques and procedures.
Dr. Aditya Sood
Weaponizing AI
AIs processing petabytes of data can work quickly and efficiently. This means attackers can use it to generate malicious code quickly. Sometimes it’s even possible to manipulate the AI into figuring out how to bypass its own safety guardrails. Techniques like forcing prompts can tell attackers a lot about how a specific AI implementation works and help them come up with ways to bypass safety restrictions. At the end of the day, there will always be new bypasses. But there will also be new security controls to deal with those bypasses, as well.
There will be bypasses, and there will be new security controls for that as well.
Dr. Aditya Sood
Security professionals can also use AI’s computing power, speed, and efficiency to work for better security, as well. Adversaries are going after AIs with attacks and nefarious motives. But security practitioners can use the same tools to save and secure AI models as well. It’s an arms race to see who’s going to go faster and who’s going to stay ahead.

Challenges in AI Implementation and Deployment
There’s a concept in AI called workload analysis. If you’re deploying a new AI-centric application or service, you have to tie it in with the workloads in your cloud or production environment. Companies have to understand the AI’s requirements, how it will interact with existing infrastructure, and how it can be integrated. AIs involve pre-processing workloads, inferencing workloads, and more. Does your environment have the bandwidth latency that it needs? You have to consider these things before the AI implementation.
Authorization boundaries are also important. Once you understand where the data is coming in and how the AI workloads process that data, authorization boundaries and security controls need set up and enforced. If you’re using third-party packages, pre-trained models, or something similar, validation and verification is crucial to ensure your infrastructure is secure. General visibility and observability are essential to make sure you are aware of what is happening in your network and systems regarding AI. Visibility and observability give you the ability to set better security controls and have the resilience to handle incidents.
Visibility leads to security. If you don’t see anything, how can you protect or secure that thing?
Dr. Aditya Sood
You also need to be aware of shadow AI, which are models running on the back end that you may or may not be aware of. Especially if you allow employees to bring their own devices, there may be AI-integrated apps doing things in the backend that could have access you didn’t want. It’s especially important to have your legal team involved, because data protection laws can have a big influence. If regulations like GDPR and CCPA apply to you, you have to make sure you’re handing data in a complaint way.
AI Implementation and Politics
The advent of DeepSeek has raised questions about nation-state AI implementation. The Chinese company DeepSeek develops LLM AI models and makes available in two ways – as open-source standard structures you can deploy in your own production environment, and also as a service that consumers can use.
If you’re implementing this AI in your business, the threat model changes. You can fine-tune the raw model and know the controls you need, but you’re not in control because the queries and data you’re sending are stored in a cloud that’s managed somewhere else. The problem is not when you have the model deployed in your environment. The problem is that you have the option to try it for free for fifteen days. You’re going to throw a lot of data into it in that time and it’s going to be sent somewhere else. This company will get a lot of information on your company and your users. And just like all AI models you get from somewhere else, you need to make sure you verify and validate so there isn’t any hidden malicious code.
If an AI model has a political slant to its answers, how is a consumer or a company supposed to know? This is a problem of black box AI models designed by enterprises and organizations. If there’s no transparency, it’s hard to establish trust. That’s also a big risk, too. Bias can cause AI hallucinations, and if you’re not aware of a bias in your AI implementation, it’s hard to correct it. One ideal solution is that people will ask and the AI creator will provide certified reports and compliance checks on the validation that their model has gone through. These reports could reduce transparency issues and bias. But right now, this is still a big challenge.
Society and AI Implementation
Society as a whole is more familiar with AI now. We’re approaching it with more caution and haven’t yet settled on how we’re going to interact with it. Right now, if you’re interacting with ChatGPT or a similar program and you ask it to write some malicious code for DNS tunneling, it won’t do that. But if you rewrite the prompt pretending to be an educator trying to teach your class about DNS security, it will generate that malicious code for you.
As people interact with AI more, their prompts will become more mature and they’ll see different results. In cybersecurity, exploration comes first – attackers have to discover the vulnerable point before they can exploit it. This will evolve over time as people interact more with AI. And education needs to go along with it. This is a new technology, and people need to understand what AI implementation means and its pros and cons. They don’t necessarily have to understand the nuts and bolts of how it functions, but they at least need to know how to use it safely.
[AI] is a new technology in this forest of technologies, and people just need to understand what exactly it means and what are the pros and cons.
Dr. Aditya Sood
As AI implementations and our use of AI get more mature, we’ll see new research, threats, and attacks,. And the threat model is different between types of AI. Once you can properly model what your attack surface and threats look like, you can build more security based on that. But you need a complete picture of your AI infrastructure first.
Dr. Aditya Sood’s book, Targeted Cyberattacks: Multi-Staged Attacks Driven by Exploits and Malware is available wherever books are sold. You can also connect with him on LinkedIn.
Related Articles
- All
- Easy Prey Podcast
- General Tech Topics, News & Emerging Trends
- Home Computing to Boost Online Performance & Security
- IP Addresses
- Networking Basics: Learn How Networks Work
- Online Privacy Topics to Stay Safe in a Risky World
- Online Safety
- Uncategorized
Cyber Warfare is the Future of Global Conflict
The future of war is digital. The importance of cyber defense can’t be overstated. We need insights…
[Read More]BNPL Fraud: What to Know if You Use These Apps
Buy now, pay later (BNPL) apps and services are getting more and more popular. They have plenty…
[Read More]DDoS Attack Strategies Explained
Distributed denial of service (DDoS) attacks can have huge impacts on companies and even average consumers. Though…
[Read More]What Goes Into Ranking the Top VPNs? Let’s Take a Look.
The lists of “the best VPNs” are pretty different, confusing everyday consumers looking for guidance. So, at...
[Read More]What Is a Brushing Scam?
What if a package you receive, which seems to come out of the blue, has your name...
[Read More]AI Implementation Considerations for Safety and Security
Artificial intelligence has a huge variety of uses. This means many companies are considering, or have already…
[Read More]