Study finds ChatGPT unreliable in increasingly AI-dependent world

It may be easy to ask ChatGPT or another chatbot a question, but the answer might not be right. The wrong answers are called hallucinations, and they're more common than one might think.

According to Study Finds, scientists at Deakin University in Australia asked ChatGPT to write literature reviews on six different mental health topics, and more than half of the references it came up with were wrong or entirely made up.

The study examined GPT-4o and found that accuracy varied from topic to topic. Some of the fabricated citations even included digital object identifiers (DOIs) that resolved to real papers but were unrelated to the topic being researched.
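Citations of that kind are detectable, because DOIs can be looked up in public registries. The sketch below is a hypothetical illustration, not part of the study: it queries the public Crossref REST API for a DOI's registered title and compares it against the title a chatbot claimed, using a crude word-overlap heuristic. The example DOI and threshold are illustrative assumptions.

```python
# Hypothetical sketch: check whether a cited DOI actually points to the
# claimed paper, using the public Crossref REST API (api.crossref.org).
import json
import urllib.request

def doi_title(doi: str) -> str | None:
    """Fetch the registered title for a DOI from Crossref, or None on failure."""
    url = f"https://api.crossref.org/works/{doi}"
    try:
        with urllib.request.urlopen(url, timeout=10) as resp:
            data = json.load(resp)
        titles = data["message"].get("title", [])
        return titles[0] if titles else None
    except Exception:
        return None  # DOI does not resolve, or the request failed

def citation_looks_real(doi: str, claimed_title: str) -> bool:
    """Crude check: the DOI must resolve and its title must resemble the claim."""
    actual = doi_title(doi)
    if actual is None:
        return False
    # Word-overlap heuristic; a real pipeline would use fuzzy string matching.
    claimed = set(claimed_title.lower().split())
    registered = set(actual.lower().split())
    overlap = len(claimed & registered) / max(len(claimed), 1)
    return overlap > 0.5

# Example with a real DOI (the 2015 Nature "Deep learning" review).
print(citation_looks_real("10.1038/nature14539", "Deep learning"))
```

A check like this catches both of the failure modes the study describes: DOIs that resolve to nothing and DOIs that resolve to a real but unrelated paper.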

Dr. Jason Baker, professor and senior technology strategist at Regent University, says chatbots are trained to provide answers, not to admit uncertainty. They are rewarded for generating plausible text, even if it is incorrect.

“It's asked a question and has to produce an answer, and instead of responding truthfully, it makes up something that, as far as it's concerned, seems to be correct but is not,” says Baker.

Those hallucinations, Baker says, are what happens when computers get creative.


“This concept of hallucinations is actually a feature, not a bug. The engineers, the technologists, the programmers, and the mathematicians figured out a way to essentially introduce creativity into these systems,” explains Baker.

The errors can be annoying when one is doing homework or other research. They can be catastrophic when AI is helping diagnose an illness or prescribe a drug, for example. Baker says it is essential to check the answers that are given.

“We're likely to see much more reliable answers in the future, but right now, if you simply ask the system to double-check or triple-check itself or ask one system to double-check or triple-check another system, you get better answers,” advises Baker.
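Baker's advice reduces to a simple pattern: generate an answer, then send it back through a model as a verification pass rather than accepting it at face value. Below is a minimal sketch of that pattern using the OpenAI Python client; the model names, prompts, and question are illustrative assumptions, and an API key is assumed to be configured in the environment.

```python
# Minimal sketch of the "ask one system to double-check another" pattern.
# Model names and prompts are illustrative; assumes OPENAI_API_KEY is set.
from openai import OpenAI

client = OpenAI()

def ask(model: str, prompt: str) -> str:
    """Send a single-turn prompt to the given model and return its reply."""
    resp = client.chat.completions.create(
        model=model,
        messages=[{"role": "user", "content": prompt}],
    )
    return resp.choices[0].message.content

question = "List three peer-reviewed papers on CBT for adolescent anxiety."
draft = ask("gpt-4o", question)

# Second pass: a verifier prompt asks a model to flag anything it cannot
# confirm, instead of silently trusting the first answer.
review = ask(
    "gpt-4o-mini",
    "Check the following answer for fabricated or unverifiable citations. "
    "Flag each item you cannot confirm exists:\n\n" + draft,
)
print(review)
```

A verification pass like this is no guarantee, since the checker can hallucinate too, but as Baker notes, it tends to produce better answers than a single unchecked response.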

However, it is important to remember that there are reputations on the line, and blaming a chatbot for an error is the equivalent of saying the “dog ate my homework.”

“Anything that a professional puts out under their name — it is their name that provides some degree of reputation,” states Baker.