Grok-3 Shows 94% Citation Hallucination: What Does That Imply?
In the high-stakes world of Large Language Models (LLMs), the latest buzz centers on reports that Grok-3 exhibits a staggering 94% citation hallucination rate in certain Retrieval-Augmented Generation (RAG) tasks.
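To make the headline figure concrete: a citation hallucination rate in a RAG setting is typically the fraction of citations a model emits that do not correspond to any document actually retrieved for the query. The sketch below is a minimal, hypothetical illustration of that metric (the function name, document IDs, and data are assumptions for illustration, not taken from any Grok-3 evaluation):

```python
# Hedged sketch: one simple way to compute a citation hallucination
# rate for a RAG system. A citation counts as "hallucinated" here if
# its ID does not match any retrieved document. All names and data
# below are hypothetical.

def citation_hallucination_rate(cited_ids, retrieved_ids):
    """Fraction of cited document IDs absent from the retrieved set."""
    if not cited_ids:
        return 0.0
    retrieved = set(retrieved_ids)
    hallucinated = [c for c in cited_ids if c not in retrieved]
    return len(hallucinated) / len(cited_ids)

# Made-up example: the model cited four sources, but only one of them
# was actually in the retrieval results.
rate = citation_hallucination_rate(
    cited_ids=["doc-1", "doc-7", "doc-9", "doc-12"],
    retrieved_ids=["doc-1", "doc-2", "doc-3"],
)
print(f"{rate:.0%}")  # → 75%
```

Real evaluations are stricter than this ID match: they usually also check whether the cited passage actually supports the generated claim, which is where much of the reported hallucination comes from.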