RAG Techniques Won’t Stop Generative AI Models from Hallucinating

IBL News | New York

The technical approach of RAG (Retrieval Augmented Generation) reduces AI models’ hallucinations, but it doesn’t fully eliminate the problem with today’s transformer-based architectures, according to a TechCrunch article.

However, a number of generative AI vendors suggest that their techniques result in zero hallucinations.

Because generative AI models have no real intelligence and are simply predicting words, images, speech, music, and other data, they sometimes get things wrong and state falsehoods.

To date, hallucinations remain a major problem for businesses looking to integrate the technology into their operations.

Pioneered by data scientist Patrick Lewis, a researcher at Meta and University College London and lead author of the 2020 paper that coined the term, RAG retrieves documents relevant to a question using what’s essentially a keyword search and then asks the model to generate an answer given this additional context.
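
In rough outline, the flow looks like the sketch below: a keyword-style search scores a small corpus against the question, and the model is then prompted with the top documents as context. The corpus, the scoring, and the `generate()` stub are illustrative assumptions for this sketch, not part of Lewis’s implementation; a real system would use a search index or vector store and a hosted LLM API.

```python
# Minimal RAG sketch: keyword retrieval over a toy corpus, then generation
# grounded in the retrieved text. All names here are illustrative.

CORPUS = {
    "policy.txt": "Employees accrue 1.5 vacation days per month of service.",
    "handbook.txt": "Remote work requests must be approved by a manager.",
}

def keyword_retrieve(question: str, k: int = 1) -> list[str]:
    """Rank documents by how many of the question's words they contain."""
    words = set(question.lower().split())
    scores = {
        name: sum(1 for w in words if w in text.lower())
        for name, text in CORPUS.items()
    }
    ranked = sorted(scores, key=scores.get, reverse=True)
    return [CORPUS[name] for name in ranked[:k]]

def generate(prompt: str) -> str:
    # Stand-in for a real LLM call (e.g. an API request); it simply echoes
    # the prompt so the sketch runs end to end without external services.
    return f"[model output grounded in]\n{prompt}"

def answer(question: str) -> str:
    # The retrieved documents are prepended to the prompt so the model
    # answers from the supplied context rather than from memory alone.
    context = "\n".join(keyword_retrieve(question))
    prompt = f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"
    return generate(prompt)

if __name__ == "__main__":
    print(answer("How many vacation days do employees accrue?"))
```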

It’s most effective in “knowledge-intensive” scenarios but gets trickier with “reasoning-intensive” tasks such as coding and math, since it’s hard to retrieve the right documents based on abstract concepts.

RAG also lets enterprises use their private documents in a more secure and temporary way, rather than having those documents used to train a model.

There are many ongoing efforts to train models to make better use of RAG-retrieved documents.