It’s clear that companies are currently unable to make chatbots like ChatGPT comply with EU law when processing data about individuals. If a system cannot produce accurate and transparent results, it cannot be used to generate data about individuals. The technology has to follow the legal requirements, not the other way around.
When LLMs hallucinate, they don’t do it consistently, so one option is to run the same query through multiple times (with different “expert” base prompts), or through different LLMs, and then answer “I don’t know” if there’s too much disagreement between the results. The Q* approach is similar, but baked in. This should dramatically reduce hallucinations.
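Roughly, the voting step could look something like this. A minimal sketch, not a definitive implementation: `complete_fn` stands in for whatever LLM API wrapper you actually use, and `EXPERT_PROMPTS` and the agreement threshold are made up for illustration.

```python
from collections import Counter
from typing import Callable, Sequence

# Hypothetical "expert" base prompts; in practice you'd tune these.
EXPERT_PROMPTS = (
    "You are a careful fact-checker. Answer concisely.",
    "You are a domain expert. Answer concisely.",
    "You are a skeptical reviewer. Answer concisely.",
)

def ensemble_answer(
    question: str,
    complete_fn: Callable[[str, str], str],  # (system_prompt, question) -> answer
    prompts: Sequence[str] = EXPERT_PROMPTS,
    min_agreement: float = 0.6,
) -> str:
    """Ask the same question under several expert prompts and keep the
    majority answer only if enough of them agree; otherwise bail out."""
    answers = [complete_fn(system, question) for system in prompts]
    # Light normalisation so trivial formatting differences don't
    # count as disagreement.
    normalised = [a.strip().lower() for a in answers]
    best, votes = Counter(normalised).most_common(1)[0]
    if votes / len(normalised) < min_agreement:
        return "I don't know"
    return best
```

Exact string matching is obviously too strict for free-form answers; you’d want embedding similarity or a judge model to decide when two answers “agree”, but the voting logic stays the same.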
Edit: added bit about different experts