OpenAI and the Future of Ethical GPT Guidelines

Recently, you’ve probably heard OpenAI mentioned frequently on news channels and in high-profile publications. OpenAI’s GPT-3.5 (and, more recently, GPT-4) debuted and rapidly changed the tech landscape.
Many of the tech giants have since unveiled their own versions of ChatGPT and used GPT technology to debut a multitude of AI products. However, OpenAI and its co-founder and CEO, Sam Altman, spearheaded the movement. GPT technology has the potential to change the global marketplace and life as we know it.
GPT already impacts our lives, and it will continue to do so. The rapidly evolving tool works as a great assistant: it can help streamline your business and cut down on the time you spend on mindless tasks. Let’s take a look at OpenAI, the dramatic firing and rehiring of Sam Altman, and what the OpenAI situation implies for the future of ethical GPT guidelines.
A look at OpenAI and GPT technology
OpenAI publicly unveiled ChatGPT, built on its user-friendly GPT-3.5 technology, in late November 2022. Since then, over 1 billion people have used ChatGPT at least once, and the tool now boasts over 100 million weekly users.
Here’s how GPT tech works: software engineers train GPT models by feeding them copious amounts of text from across the Internet. A trained model like ChatGPT can then respond to specific prompts with thorough, tailored answers.
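To make that concrete, here is a minimal sketch of prompting a GPT model programmatically. It assumes the official openai Python package (v1 or later) and an API key stored in the OPENAI_API_KEY environment variable; the prompt text is just an example.

```python
# Minimal sketch: send a prompt to a GPT model and print the reply.
# Assumes `pip install openai` (v1+) and OPENAI_API_KEY set in the environment.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY automatically

response = client.chat.completions.create(
    model="gpt-3.5-turbo",  # the model family discussed in this article
    messages=[
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "Summarize how GPT models are trained."},
    ],
)

print(response.choices[0].message.content)
```

The answer is generated from patterns the model learned during training, which is why responses can be thorough yet occasionally wrong.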
GPT language models are considered innovative because they keep improving with each release, and because they tailor their answers and solutions to the prompt at hand.
Tech giants such as Google and Meta have scrambled to catch up and unveil their own GPT bots. Here are just two examples: in December 2023, Google unveiled Gemini as a direct competitor to ChatGPT, and tech company Anthropic created Claude. Nevertheless, ChatGPT remains the original model in this tech movement.
Sam Altman and the curious case of OpenAI
In 2015, Sam Altman co-founded the cutting-edge tech company OpenAI with billionaire venture capitalist Peter Thiel, Elon Musk, and others. Initially, OpenAI was a non-profit research lab that hoped to “benefit humanity as a whole, unconstrained by a need to generate financial return.”
In 2019, OpenAI switched to a “capped-profit” model, with Altman at the helm and investors like Microsoft on board. Altman’s leadership brought about the transformative ChatGPT models and paved the way for a major shift in the modern global economy.
GPT tools already have the power to transform aspects of every profession, and much of the credit for their development goes to Altman.
Why Sam Altman was fired
In November 2023, the OpenAI executive board made a risky and somewhat inexplicable move: it abruptly fired Sam Altman. The four-member board, which then included AI academic Helen Toner and OpenAI chief scientist Ilya Sutskever, vaguely cited a lack of consistent candor in Altman’s communications.
This prompted OpenAI’s president and co-founder, Greg Brockman, to quit, and an overwhelming majority of OpenAI’s roughly 700 employees to threaten to resign en masse. Altman and the board may have butted heads over the extremely rapid evolution of GPT technology.
How Sam Altman returned to OpenAI
The drama that followed Altman’s ousting didn’t last long: he was rehired less than five days later. The original OpenAI board members were replaced (save Adam D’Angelo, the CEO of Quora), and Altman returned with the full support of Microsoft.
Nevertheless, the chaos at OpenAI presents many vital questions about the murky ethics of GPT evolution, and how the tech industry may develop guidelines for its future growth.
The implications of the OpenAI chaos
The implications of what occurred at OpenAI are worth reflecting on as we incorporate GPT tools into our professional and personal lives. Questions about security, cyber warfare risks, and ethics still stand now that the dust has settled.
Some of the questions that have arisen in the wake of the shake-up include the following:
- Should the speed of AI development be limited?
- What should the competitive GPT scene look like?
- Should we prevent a tech monopoly on cutting-edge GPT tech?
- Do we need to fully understand GPT capabilities before releasing products?
- What are the cybersecurity risks of GPT tools?
- Should GPT be prohibited from training on some content?

Cybersecurity risks of OpenAI and GPT technology
Cybersecurity risks remain worth addressing as OpenAI and GPT move forward. In July 2023, the FTC (Federal Trade Commission) posed questions to Altman and OpenAI that sought clarity on their vision, direction, and the capabilities of ChatGPT.
Many tech executives and scientists have sounded the alarm about the explosive growth of GPT and other forms of AI. In March 2023, Italy even temporarily banned ChatGPT.
Over privacy concerns, the US House of Representatives has restricted how congressional offices can use ChatGPT. However, there are currently no detailed federal regulations on the general public’s use of the model.
Although the OpenAI board may have erred in firing Altman, the CEO should continue to openly answer questions about ChatGPT and its growth. Many of the opaque qualities of the tool could present significant cybersecurity risks if not properly addressed.
Some of those risks could include:
- Improper collection of sensitive personal data
- A lack of policies and procedures that address risk management
- Protection against deep fakes, catfish schemes, and misinformation
- “Hallucinations” (false statements) presented as truth by ChatGPT
- Out-of-control growth of AI
- Less-than-full disclosure about cyberattacks experienced by OpenAI
- Careless training of future GPT models on data that could include confidential and personal information
- Misinformation from the Internet used as training input for GPT models
- Exposure to malware and other cyberattacks
- Copyright infringements
How the OpenAI situation impacts future GPT ethical guidelines
Although the drama surrounding OpenAI seems to have reached a workable resolution, the situation may shape universal ethical GPT guidelines in the future. For example, parameters may be placed on how GPT collects its information and training data.
If GPT technology remains guideline-free, the results could be disastrous. Computer scientist Geoffrey Hinton is one voice in a sea of many who have raised concerns about the necessity of ethical guidelines and safety rails for the AI tool.
Known as the “Godfather of AI” and a winner of the Turing Award, Hinton believes GPT and AI technology have the potential for enormous benefits. However, he’s concerned about their rapid growth. He warns that AI systems may come to possess more intelligence than humans and can make decisions without engineer input.
Hinton worries that AI bots could develop self-awareness and slip beyond the control of the scientists who created them. He warns that we need ethical GPT guidelines to contain AI growth and prevent worst-case scenarios from occurring.
Other ethical guidelines that may see future solidification include:
- Prevention of misinformation in responses
- A path to eradicating hallucinations. Hallucination rates have reportedly already dropped from roughly 20% to 3%, but you should still verify any factual claims in GPT responses and make sure that any links provided lead to real sites and articles (see the sketch after this list)
- Regulations on growth, and ways to keep GPT from evolving faster than software engineers are prepared for
- Transparency about potential cyberattacks. For example, GPT platforms may have to post an alert on their homepages if and when a cyberattack has occurred
- Prevention of the weaponization of GPT technology
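On the link-verification point above, here is a short, hypothetical sketch of one way to check that URLs in a GPT response actually resolve. It assumes Python’s requests library; the check_links function and sample URLs are made up for illustration.

```python
# Hypothetical sketch: verify that links in a GPT response lead to real pages.
# Assumes `pip install requests`; check_links and the sample URLs are illustrative.
import re
import requests

def check_links(gpt_response: str) -> None:
    """Extract URLs from a response and report which ones actually resolve."""
    urls = re.findall(r"https?://[^\s)\"']+", gpt_response)
    for url in urls:
        try:
            # HEAD keeps the check lightweight; some servers reject it,
            # so treat errors and 4xx/5xx codes as "verify by hand."
            status = requests.head(url, allow_redirects=True, timeout=5).status_code
            verdict = "looks live" if status < 400 else f"suspicious (HTTP {status})"
        except requests.RequestException:
            verdict = "unreachable (possibly a hallucinated link)"
        print(f"{url}: {verdict}")

check_links("See https://example.com and https://not-a-real-site.example/ for details.")
```

A live link is no guarantee the page says what the model claims, so skimming the actual article is still the final step.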

Why you should still use GPT tools
Detractors of GPT and other AI tools warn of a doomsday robot apocalypse. Proponents call it a revolutionary way to make our collective lives better and easier. Whether you’re pro- or anti-GPT, the technology isn’t going anywhere, and it’s vital that we understand how it works.
GPT provides helpful answers that can greatly reduce your research and work time. The tool is your office’s unpaid intern: prone to the occasional mistake, but capable of being extremely helpful.
However, it’s vital that you proceed with caution. Ensure that you’re not accessing GPT platforms over public Wi-Fi, and avoid putting identifiable information in your prompts. Remember, this technology is a work in progress and needs security enhancements, among other things.
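As one illustration of keeping identifiable information out of prompts, here is a hypothetical sketch that redacts a few common PII patterns before a prompt is sent. The scrub_prompt helper and its patterns are examples only, not an exhaustive filter.

```python
# Hypothetical sketch: redact common PII patterns from a prompt before sending
# it to a GPT platform. These regexes are illustrative, not exhaustive.
import re

PII_PATTERNS = {
    "EMAIL": r"[\w.+-]+@[\w-]+\.[\w.]+",
    "PHONE": r"\b\d{3}[-.\s]?\d{3}[-.\s]?\d{4}\b",
    "SSN": r"\b\d{3}-\d{2}-\d{4}\b",
}

def scrub_prompt(prompt: str) -> str:
    """Replace recognizable PII with placeholder tags."""
    for label, pattern in PII_PATTERNS.items():
        prompt = re.sub(pattern, f"[{label} REDACTED]", prompt)
    return prompt

print(scrub_prompt("Email jane.doe@example.com or call 555-123-4567 about my case."))
# -> Email [EMAIL REDACTED] or call [PHONE REDACTED] about my case.
```

Pattern matching misses plenty of identifying details, so the safest habit is still to leave names, account numbers, and other personal specifics out of prompts entirely.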
Protect your online life
In a digital landscape where many bad actors lurk, waiting to attack, it’s important to take proactive security measures to protect yourself. The OpenAI drama and the debate over future GPT guidelines show that the ethics of AI technology are constantly evolving.
Visit the What Is My IP Address homepage for free protective, online tools, and be sure to check out our blog for the latest tips and trends in cybersecurity.