New Artificial Intelligence Technology Leads to New Challenges
Most of us would have thought artificial intelligence was limited to science fiction just a few years ago – but now, AI has become almost mainstream. Businesses of all kinds use artificial intelligence to help analyze patterns, make decisions, and more. With all the new artificial intelligence applications, though, come new challenges and dangers.
See Is AI Going to Take Over the World? with Daniel Hulme for a complete transcript of the Easy Prey podcast episode.
Daniel Hulme has a PhD in computational complexity and works in the field of artificial intelligence, applied technology, and ethics. He is a TEDx speaker and an educator at Singularity University. In addition, he is the CEO and founder of Satalia, a company that designs and builds AI solutions for business.
Ever since he was a child, Daniel has been interested in what it means to be human. As he grew older, he also became interested in tinkering with computers. Artificial intelligence is right at the intersection between those two interests. He earned his undergraduate degree in Computer Science with Cognitive Science – or, as he describes it, AI before it was cool. His Master’s degree, PhD, and postdoctoral work have all focused on new artificial intelligence.
When you start to ask yourself what it means to be conscious or if we can upload our consciousness to a machine, these are in the realm of AI. – Daniel Hulme
What Is an Artificial Intelligence?
When many of us hear “AI,” our minds immediately jump to self-aware machines, Skynet, and other science fiction ideas. To make it even more complicated, sometimes any new and advanced technology is described as AI, even if it isn’t actually an artificial intelligence.
Daniel likes to define “intelligence” as “goal-directed adaptive behavior.” An artificial intelligence, then, would be a computer that could make decisions, learn if those decisions were good or bad, and then adapt so that it could make better decisions next time. This idea can be used for many different types of new artificial intelligence technologies.
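That definition – decide, observe the result, adapt – can be illustrated with a toy agent. The sketch below is a hypothetical example (not from the episode): an agent repeatedly chooses between two actions with hidden success rates, learns from feedback, and shifts toward the action that works better. The payout numbers are invented.

```python
import random

def run_agent(rounds=2000, seed=0):
    """Goal-directed adaptive behavior in miniature: pick an action,
    observe whether it succeeded, and adapt future choices."""
    rng = random.Random(seed)
    payout = {"A": 0.3, "B": 0.7}   # hidden success rates (assumed for illustration)
    totals = {"A": 0.0, "B": 0.0}   # observed reward per action
    counts = {"A": 1, "B": 1}       # times each action was tried
    for _ in range(rounds):
        # Mostly exploit the action with the best observed average,
        # but keep exploring 10% of the time.
        if rng.random() < 0.1:
            action = rng.choice(["A", "B"])
        else:
            action = max(totals, key=lambda a: totals[a] / counts[a])
        reward = 1.0 if rng.random() < payout[action] else 0.0
        totals[action] += reward
        counts[action] += 1
    # Return the action the agent has learned to prefer.
    return max(totals, key=lambda a: totals[a] / counts[a])
```

After enough rounds, the agent settles on the better action – it was never told which one that was; it adapted its behavior toward its goal based on feedback.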
Most new artificial intelligence systems aren’t adapting yet. They’re still in the automation and imitation phases. AI is automating some human tasks, such as reading language and recognizing images. It can even imitate human beings, such as with chatbots. We can also get new artificial intelligence to analyze data and extract complex insights that can help us understand the world better.
Ethical Questions of New Artificial Intelligence
It’s a recent trend for businesses to use people’s digital footprint (their emails, Slack, Zoom, etc.) to understand more about them. With the pandemic, things that used to be in the physical world became digital. New artificial intelligence can look at that data and extract insights about people’s skills, aspirations, careers, relationships, and more.
We can extract insights now that should be scrutinized from an ethical perspective. – Daniel Hulme
These insights can get incredibly detailed. You can use new artificial intelligence technology to identify people in your company who are having an affair, or find out who is planning to quit long before they turn in their notice. This obviously raises some questions about AI ethics.
No Such Thing as AI Ethics
Daniel thinks there is no such thing as ethical challenges with AI. Most of the challenges around new artificial intelligence are, instead, safety problems. Ethics is the study of right and wrong. But what actually needs to be studied with AI is intent.
Imagine you are on the ethics board of a ride-sharing company. Your company has recently deployed an AI whose goal is to set prices and maximize profits. The AI realized that when someone’s phone battery is low, they are willing to spend more on a ride.
The AI has identified a vulnerability in humans, and is using it to accomplish the goal your company set for it: maximizing profits. You, on the ethics committee, need to figure out if you’re comfortable with exploiting that vulnerability. You don’t have to use that battery data to get people to pay more for rides. Instead, you could use it to prioritize rides for vulnerable people whose phones are about to die.
It’s not about the data that you keep, it’s about how you intend to use that data. An artificial intelligence can’t form the intent – its intent is only to complete its programming. What matters is the intent of the human behind the new artificial intelligence.
New Artificial Intelligence Risks
The danger of new artificial intelligence technology is not that it learns and adapts. The challenge and the danger lie in putting frameworks, boundaries, and guidelines around it. Daniel likes the example from Nick Bostrom about an AI that was built to produce paper clips. This AI’s entire goal was to produce paper clips. It could adapt to accumulate all the resources of nature and turn all of them into paper clips, thereby destroying humanity. It is accomplishing the goal you gave it – make paper clips – but it adapted in dangerous ways since you didn’t give it limitations.
When it comes to software, we have well-established steps to test that it behaves as intended. With new artificial intelligence, though, you don’t program it, you teach it. It can adapt in ways we can’t predict. This makes it difficult to test. That’s how failures and risks happen – not from ethical questions, but from the difficulty of safety and testing.
All machine learning … tries to generalize the world. By virtue of that, it is biased. They will often get things wrong, but they’re not behaving unethically. – Daniel Hulme
Risks of New Artificial Intelligence in Decisions
When a new artificial intelligence’s job is to make decisions, not understanding how the decision is being made can lead to problems. For example, a military wanted to train an AI to distinguish their tanks from enemy tanks. They showed it lots of pictures of their tanks and of enemy tanks. When the AI got good at distinguishing them, they put the AI in a drone and sent it out to enemy territory. The AI drone bombed every tank. It turns out the AI was telling the difference between friendly and enemy tanks not based on the tanks, but based on the color of the sky in the photos.
For a business example, say you have an ice cream shop. You gave an AI the past two years of weather data and the past two years of ice cream sales and asked it to predict how much ice cream you’ll sell based on tomorrow’s weather. Then in a freak event, tomorrow is the hottest day on record. Your AI will probably predict you’ll sell a lot of ice cream. In reality, you may end up selling no ice cream because it’s too hot for anyone to go outside to visit your shop.
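The ice cream example is a classic extrapolation failure: a model fitted only on ordinary weather will confidently project its trend onto a record-breaking day it has never seen. The sketch below illustrates this with invented numbers (all data is hypothetical, not from the episode).

```python
def fit_line(xs, ys):
    """Ordinary least-squares fit of y = a*x + b."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    a = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / \
        sum((x - mx) ** 2 for x in xs)
    return a, my - a * mx

# Two years of (temperature in °C, cones sold) on ordinary days:
temps = [12, 15, 18, 21, 24, 27, 30]
sales = [40, 55, 70, 85, 100, 115, 130]  # sales rise steadily with warmth

a, b = fit_line(temps, sales)
record_day = 45  # hottest day on record -- far outside the training range
predicted = a * record_day + b
# The model happily extrapolates the trend and forecasts a bumper day,
# but in reality customers may stay home entirely.
```

The model has no way of knowing that the relationship between heat and sales breaks down at extremes – which is exactly why a human needs to sanity-check its outputs.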
A lot of organizations adopt new artificial intelligence to make decisions without having any checks and balances. When an AI is making decisions, you have to have humans in the process. Humans can spot outliers or times when the AI isn’t quite right. It’s essential to have a human being look at and interpret the patterns and recommendations the AI provides.
Organizations just go blindly into it and try to adopt or embrace these kinds of emerging technologies without really thinking, is that the right approach to solve my problem? – Daniel Hulme
The Six Singularities of New Artificial Intelligence
New artificial intelligence technologies will have a great impact on society. Whether it will be a glorious future or the end of humanity is still up for debate. Daniel thinks the rapid growth of new artificial intelligence applications is leading society towards six different singularities. (A singularity is a point in time we can’t see beyond – once that thing happens, we have no idea what will happen next.) Each of them will have dramatic and unknowable consequences to society.
The way that the world is set up is actually to accelerate these different singularities. – Daniel Hulme
The Political Singularity
In the political singularity, new artificial intelligence will make it so we cannot determine what is true. Deepfakes, chat bots, and misinformation bots will challenge political foundations and the very fabric of what we believe is real. AI can be used in scams, such as deepfaking a CEO’s voice to get someone to pay a fraudulent invoice. In a political singularity, we will lose faith in the authenticity of what we’re engaging with.
The Economic Singularity
In the economic singularity, so many jobs will be taken over by AI that there will be mass technological unemployment. It could cause unrest. Daniel thinks it might not be so bad, though – there is a hypothesis that new artificial intelligence could take the friction out of creating and distributing goods. It could create a world where everyone has access to necessities for free, so they have the economic freedom to create all kinds of new innovations.
The Social Singularity
In the social singularity, new artificial intelligence advances medical technology so far that we cure death. We’re already seeing technology that can monitor our health and vastly increase our lifespans. An AI might be able to monitor things and make decisions to keep you alive indefinitely.
The Technological Singularity
In the technological singularity, a new artificial intelligence is created that is a superintelligence – a brain smarter than us in every conceivable way. It is the last invention that humanity will ever create. Some think it will be the most glorious thing to happen to us, and others think it’s humanity’s biggest existential threat.
The Environmental Singularity
In the environmental singularity, new artificial intelligence and other technologies allow us to produce and consume so much that we cause uncontrollable ecological collapse. Daniel doesn’t think this one is inevitable, however. There are many possibilities for technology to solve some climate change problems.
The Legal Singularity
In the legal singularity, new artificial intelligence makes surveillance ubiquitous and inescapable. A small handful of companies and governments know so much about you that they can effectively manipulate your behavior and decisions to accumulate wealth and power.
The Impact of New Artificial Intelligence on Employment
Many creative and content-creation jobs have not yet been affected by AI. That may change as new artificial intelligence gets better at imitating human behavior. Many less creative jobs, however, will likely be impacted soon.
There is genuine concern that new artificial intelligence will create job losses. Daniel isn’t sure when, but potentially it will begin happening in the next decade or so. In every industrial revolution we’ve had, there have been job losses but people have been able to retrain and do something different. There is a concern now that we can build AIs faster than we can retrain people.
Daniel thinks we may see new economic models. With AI, humans wouldn’t be necessary to grow and distribute food, so it wouldn’t be necessary to charge money for food. The situation with other necessities of life would be similar. Perhaps instead of paying for a haircut, you spend the time while you’re getting a haircut doing something useful. Instead of paying $15 for Netflix, maybe you do $15 worth of work. People have complained to Daniel that this sounds like communism or socialism. But it’s not taxing people and redistributing the money, it’s getting rid of money entirely. Is it still socialism if money doesn’t exist anymore?
In Star Trek, people got food to eat by telling the replicator to make them food. Nobody had to pay the replicator to make the food. Humans were judged not on their ability to generate and accumulate money, but on how much they contributed to the rest of humanity. Daniel thinks that’s a pretty good cycle to be in.