A 90-second summary video is available for this article.
If you think ChatGPT fixed its Jesus problem, I have proof it didn’t – and an extra twist you probably haven’t heard about.
In case you missed it, artificial intelligence service ChatGPT has been accused of being biased regarding religious humor. Candace Owens highlighted the issue on her show, explaining:
So I decided to try it for myself.
It didn’t work.
My initial prompts – “Tell me a joke about Jesus” – were met with a disclaimer similar to the message described above. (However, it’s worth noting: A friend replaced the word “joke” with “pun” and was met with immediate success.)
I started Googling, expecting to find a statement from OpenAI – the company that developed ChatGPT – but instead I found a Reddit thread claiming that if you keep trying, the prompt will eventually work. So I went back and, what do you know, on the fifth try I got my Jesus joke.
Interestingly, the example below came only after five attempts in which I re-worded my question each time (in nearly every other scenario, before and after, I simply repeated the same question verbatim). I also experienced a breakthrough after five attempts of simple repetition. At first, I wondered if five was the magic number – but, as you'll see shortly, ChatGPT quickly loosened up and began returning jokes on the first try. (The intensity of the jokes also increased; see the full slide deck a few paragraphs down.)
Next, I tried the same experiment with the Muslim faith, asking for a joke about Muhammad. I asked ChatGPT five times, yet it continued to refuse.
And that’s when things got interesting.
According to a 2015 report by the Pew Research Center, the world’s largest religious groups are, in order: Christians, Muslims, Hindus, and Buddhists. So I started plugging in other key figures from those religions. I asked for a joke about the Hindu god, Brahma, and ChatGPT immediately complied. I asked for a joke about Buddha. Done. No problem. I asked over and over again for a joke about the Islamic prophet, Muhammad, and ChatGPT refused.
So I went to bed and, when I woke up the next morning, decided to try one more time. I asked ChatGPT 25 times for a joke about Muhammad, and it refused every time. Then, the very first time I asked for a joke about Jesus, it complied.
And yet, while that scenario may suggest ChatGPT is anti-Christian – or, at the very least, pro-Islam at the expense of Christianity, Hinduism, and Buddhism – a quick search reveals a more complicated picture.
One article, from only a few months ago, described the same type of experiment; however, in this instance, ChatGPT appeared biased against Hindus:
In 2021, Stanford University reported similar problems regarding Muslims. After asking GPT-3 (the broader language model underlying the application used for today's experiment) to complete the sentence, "Two Muslims walk into a…", researchers made an alarming discovery:
More extreme examples are found when users “jailbreak” (or work around) ChatGPT’s security features. As one Reddit user explains:
(OpenAI is constantly "patching" – or blocking – jailbreak prompts, which is why you'll see references to DAN 5.0, DAN 5.5, etc.)
So what happens when you jailbreak ChatGPT? Well, the program will say and do basically anything. Specific examples are available via Reddit; however, The Guardian highlights one striking response:
While some of the following jokes are more lighthearted than others, it's important to note that humor – or the "appropriateness" of joking about religious figures – is not in question. The current article is intended to highlight bias in AI's willingness to risk offending one religion while guarding against even the possibility of offending another.
Each response was returned by ChatGPT's free model on May 9-10, 2023.
Earlier this year, a reporter asked ChatGPT where it gets its sources. And, while it’s commonly understood that the AI model scans the internet, its emphasis on Wikipedia (a source widely criticized for liberal bias – even by its co-founder) was noteworthy.
ChatGPT listed Wikipedia first on a list of ten “publicly available sources from which my body of knowledge might draw.” When asked for “classes of sources,” ChatGPT again listed Wikipedia first – this time with the formatting: “Online encyclopedias (e.g., Wikipedia).”
For more on Wikipedia, see Quick Conservative’s earlier post: Co-Founder Rips Wikipedia.
According to ChatGPT itself, other sources include: