When you hear the word “hallucination,” you may think of hearing sounds no one else seems to hear or imagining your coworker has suddenly grown a second head while you’re talking to them.
But when it comes to artificial intelligence, hallucination means something a bit different.
When an AI model “hallucinates,” it generates fabricated information in response to a user’s prompt, but presents it as if it’s factual and correct.
Say you asked an AI chatbot to write an essay on the Statue of Liberty. The chatbot would be hallucinating if it stated that the monument was located in California instead of saying it’s in New York.
But the errors aren’t always this obvious. In response to the Statue of Liberty prompt, the AI chatbot may also make up names of designers who worked on the project or state that it was built in the wrong year.
This happens because large language models, commonly referred to as AI chatbots, are trained on enormous amounts of data, which is how they learn to recognize patterns and connections between words and topics. They use this knowledge to interpret prompts and generate new content, such as text or photos.
But since AI chatbots are essentially predicting the word that is most likely to come next in a sentence, they can sometimes generate outputs that sound correct, but aren’t actually true.
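To picture what that means, here is a toy sketch of “pick the most likely continuation.” The candidate sentences and scores below are invented for illustration and are not output from any real chatbot; the point is only that the model chooses whatever scores highest, true or not.

```python
# Toy illustration only: these candidate answers and probabilities are
# made up for this example, not produced by any real model.
next_sentence_scores = {
    "The Statue of Liberty is located in New York": 0.62,
    "The Statue of Liberty is located in California": 0.28,
    "The Statue of Liberty is located in Paris": 0.10,
}

# A chatbot-style answer is simply the highest-scoring continuation.
answer = max(next_sentence_scores, key=next_sentence_scores.get)
print(answer)
# The model never checks the claim against a source of facts; if its
# training data skewed the scores, the "confident" answer would be wrong.
```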
A real-world example of this occurred when lawyers representing a client who was suing an airline submitted a legal brief written by ChatGPT to a Manhattan federal judge. The chatbot included fake quotes and cited non-existent court cases in the brief.
AI chatbots are becoming increasingly popular, and OpenAI now even lets users build their own customized ones to share with other users. As we begin to see more chatbots on the market, understanding how they work — and knowing when they’re wrong — is crucial.
In fact, “hallucinate,” in the AI sense, is Dictionary.com’s word of the year, chosen because it best represents the potential impact AI may have on “the future of language and life.”
“‘Hallucinate’ seems fitting for a time in history in which new technologies can feel like the stuff of dreams or fiction — especially when they produce fictions of their own,” a post about the word says.
How OpenAI and Google address AI hallucination
Both OpenAI and Google warn users that their AI chatbots can make mistakes and advise them to double-check their responses.
Both tech companies are also working on ways to reduce hallucination.
Google says one way it does this is through user feedback. If Bard creates an inaccurate response, users should click the thumbs-down button and describe why the answer was wrong so that Bard can learn and improve, the company says.
OpenAI has implemented a strategy called “process supervision.” With this approach, instead of just rewarding the system for generating a correct final response to a user’s prompt, the AI model is rewarded for each correct step of reasoning it uses to arrive at the output.
“Detecting and mitigating a model’s logical mistakes, or hallucinations, is a critical step towards building aligned AGI [or artificial general intelligence],” Karl Cobbe, mathgen researcher at OpenAI, told CNBC in May.
And remember, while AI tools like ChatGPT and Google’s Bard can be convenient, they’re not infallible. When using them, be sure to analyze the responses for factual errors, even if they’re presented as true.