AI Regulation and Why We Need Laws About Tech

Bruce Schneier talks about tech and AI regulation and why it's so important.

Very few people are ready and willing to say that what we need is more regulation. But in some areas, more regulation is really needed. Especially in areas like technology and AI, regulation is way behind the technology. We need updated legislation and policy, and we need more floor debates on these topics to move policy forward.


See Technology Regulation is Outdated with Bruce Schneier for a complete transcript of the Easy Prey podcast episode.

Bruce Schneier is an internationally renowned security technologist whom The Economist called “the Security Guru.” He is the author of over a dozen books, including his latest, A Hacker’s Mind. He also teaches postgraduate public policy at the Harvard Kennedy School, which he describes as “teaching cryptography to students who deliberately didn’t take math as undergrads.” Bruce and his students are all interested in technology and its policy implications. Some of his students look at it through an arms race or national security lens, and some look at it through the lens of human rights. It’s clear that technology and AI are going to change a lot of things, but we don’t fully know when or how.

Regulation is Always Behind the Technology

Policy always seems to run ten to fifteen years behind technology. But this isn’t just in the fields of computer advancements or AI – regulation has similar lags in fields like pharmaceuticals, automotive, and aviation. People in technology have a story that legislators can’t regulate tech because they don’t understand it. But legislators regulate things all the time. Do they understand aircraft design or pharmacology? Probably not. But they still pass laws about it.

We in society have to figure out how to make laws in areas where the lawmakers don’t have expertise. Tech is one of them.

Bruce Schneier

Technology is a fast-moving field, more so than aviation or pharmaceuticals tend to be. But the notion that we can’t regulate tech is harmful. The lack of regulation is actually why things are bad. We just need more agile tools and more powers that work against each other. If there are no checks on corporate power, you end up with monopolies and corporate dystopias, and society is worse for it. The idea that tech moves too fast to be regulated is a convenient excuse. But it allows potential harms to continue unchecked.

Companies Need Regulation

In the United States, we don’t pass laws that money doesn’t like, period. Money runs politics, which is why we’re in such bad shape. Europe has its own issues – they often overshoot the mark and over-regulate. But at least they’re trying. They pass comprehensive laws like the GDPR, the EU AI Act, and the Digital Markets Act. Their regulators issue fines that companies actually notice.

If you want to change corporate behavior, you have two options. One is to jail the executives. Bruce is in favor of that option. The other is to issue fines that are big enough to affect a company’s share price and big enough that they can’t ignore it or brush it off.

We’ve built a world where corporations have the same rights as people, but they’re immortal, sociopathic, single-purpose hive organizations. They’re not constrained by the psychological limitations of individuals, but we’ve given them most of the rights and responsibilities. It’s not working. We need to look at society from a security perspective. What are the incentives and motivations? What are the security controls, and how well do they work? We could talk about computer software the same way. But in this case we’re talking about the rules that run our economy instead of the rules that run our laptops.

We need tech and AI regulation for the good of society.

Technology, AI, and the Abuse of Power

We are already seeing abuses of power with AI. That’s because AI is a tool that makes the user more powerful. Will it be used to give the already-powerful more power, or will it empower less-powerful people and democratize power? That is the great question AI regulation has to grapple with.

AI is a power magnification tool. It makes the user more powerful. Will the uses further empower the already powerful, or will it somehow democratize power?

Bruce Schneier

Bruce’s latest book, A Hacker’s Mind, is about finding loopholes. This doesn’t just apply to computer code. You can think about finding loopholes in society, such as with the regulatory code or tax code. Regulations are just a set of algorithms, and they can have “bugs” and vulnerabilities. When the tax code has vulnerabilities, we call them tax loopholes. Exploits for those bugs we call tax-avoidance strategies, and we call the hackers who use those exploits accountants and tax attorneys.

If you find a new loophole or tax code vulnerability, permitted by the letter of the rules but not the intent, it’s a hack. It’s not cheating, just a way to bend the rules. If you or Bruce discovered one of these hacks, you might save a few thousand dollars on your taxes. If Goldman Sachs finds it, they’ll make millions. They have more raw power. AI can just ramp that up.

How AI Increases Power

Bruce wrote an essay in 2021 called “The Coming AI Hackers” where he imagines AI finding loopholes. There’s already a lot of research on AI finding source code vulnerabilities. It’s not good at it yet, but it’s going to get better. In cybersecurity, this is interesting because it benefits both attackers and defenders – attackers can find vulnerabilities to exploit and defenders can find vulnerabilities to fix. A future in which this AI vulnerability finder was built into the development process would be a big win for security.

But imagine that you could train AI to find vulnerabilities in the tax code. AI will do these human cognitive tasks differently. In some cases better, in some cases worse, but definitely different. A lot of systems are set up for humans doing things the way humans do. If a thousand vulnerabilities in the US tax code were all discovered at once, things would fall apart.

Tax law is harder than computer code because there’s more context. A lot of the best AI work in these systems happens with humans and AI working together. The AI combs the world’s tax codes and pops up something that seems interesting. The human looks at it and explains why it’s not actually interesting. Eventually, the AI gets better and comes up with things the humans can work with. It’s AI plus a skilled tax attorney, the same way AI assists good programmers. It’s a collaboration.

AI will be best used in collaboration with humans.

The Future of AI in Regulation and Policy

Bruce is currently writing a book on AI and democracy, so he’s thinking about the questions of AI, regulation, and policy. We’ve already had examples of AI writing legislation. AI is already pretty good at writing text, and a law is just a piece of text that we’ve voted to adopt. There’s a story about a Brazilian city legislator who wanted some legislation on water meters. He asked ChatGPT to write it, submitted it for a vote, and it was debated, voted on, and passed. If you think about it, that’s not that interesting of a story. AI didn’t pass the law. Humans passed it, AI just came up with the language.

We’re probably going to see more AI-assisted legislative writing. AI can write more complex laws than humans can, so we might see AI-assisted laws being more complex and detailed because AI can do all of that faster. It’s going to be human-reviewed, but it will also be AI-reviewed. Legislators can go through that loophole-finding process and ask the AI to find loopholes and unintended consequences in a regulation. AI can also insert those loopholes, which is good for lobbyists. Again, it’s a question of power – AI magnifies it, so who is using it matters.

The different applications for AI in policy are coming from the bottom up. Legislators are saying that they need help drafting a particular bill, opening a new window with an AI tool, and getting that help. We don’t have to change regulations for that to happen.

Biased Doesn’t Mean Unusable

Courts are already using AI to screen potential jurors. It can look at who they are or what they’re likely to decide. Anytime there’s a human judgment involved, we will see AI assisting. Some of it will be good, some bad, and most in the middle. But humans can use it as just another input in their decision-making.

Anytime we’re going to have a human judgement, there will be AI assistance.

Bruce Schneier

Bruce promotes a public AI model, not a corporate one. In a corporate AI model, those decisions will be skewed towards corporations. And if the company doesn’t care to eliminate bias, the AI can be biased by race, gender, ethnicity, or all sorts of things. An AI advising Republicans is going to have a different bias than one advising Democrats. And that’s not a bad thing. The other side of bias is values. Legislators will want AIs that reflect their values. A bigoted legislator will want an AI that reflects their values. We might not want to give it to them, but there’s a company out there that will.

We can’t remove bias from humans, and we’re not going to remove it from AI, either. AI serves corporate monopolies right now, and that’s not great. The solution for that is tech and AI regulation. People don’t like that, but the government needs to regulate the space. There’s a weird belief in Silicon Valley that the government can’t help, and that’s just plain false. Government regulation prevents monopolies and unfettered corporate power in other areas – regulation can help in tech and AI, too.

AI and Cybersecurity

People often ask Bruce if AI will help attackers or defenders more. In terms of vulnerability finding, it helps both, but it helps defenders more. People have put a lot of work into both AI attacking and defending. In the near term, Bruce feels that AI will benefit defenders because we’re already being attacked at near-computer speeds, so defending that fast will be powerful.

In the long term, though, cybersecurity is an arms race. Nobody can predict how it’s going to go. There are too many things that could happen, and tiny things could cause huge feedback loops or cascading effects. In terms of legislation and regulation, AI could change the way laws are written, the nature of lobbying, and even executive branch rule-making. We just don’t know right now.

In the end, will AI benefit the attacker or defender more? I don’t think we have any clue.

Bruce Schneier

How to Move Forward with Tech and AI Regulation

Another thing people regularly ask Bruce is what they can do to help move tech and AI regulation along. That’s a hard question, because it has to become a political issue. AI and tech regulation needs to be something that policymakers care about. We need to see floor debates.

President Biden’s executive order on AI is good, but it has the limits of an executive order. We need something with legislative power. The Cyber Safety Review Board (CSRB) reviews cyber incidents and puts out decent reports, but doesn’t have subpoena power – that should be changed. We need all legislators to get involved.

In a lot of ways, we can look to Europe for an approach. They’re a regulatory superpower, and a good regulation in a large enough market can have huge effects. Bruce was working for IBM when the GDPR came out, and IBM decided to implement it across the board because that was easier than figuring out which customers were European.

A good regulation in a large enough market moves the planet.

Bruce Schneier

The strictest laws win. We don’t want the weakest laws against murder, we want the strongest. The craziness comes from lobbyists trying to tweak everything in their favor. Legislators need to care about tech and AI enough to regulate it. It has a lot of benefits, but we need strong laws to protect society from the potential abuses.

Learn more about Bruce Schneier and read his writing at schneier.com. He doesn’t do social media, but there is a Facebook page that mirrors his blog and a Twitter account that mirrors his book. You can join his email newsletter on his website or find his books wherever books are sold.
