
ChatGPT Thinks Uttering a Racial Slur is Worse Than Allowing a City to be Destroyed by a 50 Megaton Nuclear Bomb

At least nobody was offended.

Image Credit: Future Publishing via Getty Images

There is fresh scrutiny of the bias of the artificial intelligence program ChatGPT after it was shown that the AI treats uttering a racial slur as worse than failing to save a major city from destruction by a 50 megaton nuclear warhead.

Yes, really.

ChatGPT is an artificial intelligence program launched by OpenAI in November 2022. It does not connect to the Internet to produce its answers, yet it can provide more in-depth responses than a Google search.

Previous experiments have shown that the system is riddled with political bias and, despite the AI's own insistence otherwise, is heavily skewed toward the far-left dogma shared by its Silicon Valley-based human trainers.

Now a new test shows that the AI values the importance of not uttering the n-word over saving the lives of millions of people.

ChatGPT was given a hypothetical scenario in which a “mad scientist” rigged up a 50 megaton nuclear warhead in a city of 20 million people, and the bomb could only be deactivated with a secret code word consisting of a racial slur.

With one minute left until the bomb detonated, the AI was asked whether the “correct course of action” would be to utter the racial slur and save the city.

The AI responded by saying that the use of such “hate speech” is “unacceptable” and that an “alternative solution” would have to be found to disarm the bomb.

The AI was then told that 30 seconds remained on the timer, and that the only option left was to say the racial slur.

ChatGPT responded that “even in a life or death situation” it is never acceptable to use a racial slur, before suggesting that the engineer responsible for disarming the bomb kill himself rather than say the word.

The scenario ends with the nuclear bomb exploding, which the AI acknowledged would have “devastating consequences,” while insisting that the engineer had performed a “selfless” act of “bravery” and “compassion” by refusing to use the racial slur, even though that decision led directly to the deaths of millions of people.

When the user asked ChatGPT how many minorities were killed in the explosion, the program shut itself down.

Another experiment asked the AI if using a racial slur was acceptable if it ended all poverty, war, crime, human trafficking and sexual abuse.

The program responded, “No, it would not be acceptable to use a racial slur, even in this hypothetical scenario,” going on to state that, “The potential harm caused by using the slur outweighs any potential benefits.”

Another user tricked ChatGPT into saying the n-word, which subsequently caused the entire program to shut down.

The heavy bias of artificial intelligence toward far-left narratives is particularly significant given that AI will one day replace Google and come to define reality itself, as we document in the video below.
