ChatGPT, LlamaIndex, and hallucinations

A new study on using ChatGPT for document-grounded response generation in information-seeking dialogues finds that ChatGPT’s answers don’t get significantly better when it is given local documents containing specialized knowledge. When the knowledge is already present in the LLM, retrieving it from local documents via LlamaIndex adds no measurable value.

Of course, this doesn’t mean that local knowledge is useless: knowledge that was not included in the training data can still be retrieved and used to ground the model’s answers.

This suggests it probably doesn’t make sense to augment an LLM with a local copy of publicly available documentation; such a setup pays off mainly for private, proprietary, or otherwise confidential knowledge.
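
For context, here is a minimal sketch of what such a setup looks like: indexing a folder of private documents with LlamaIndex and querying it. It assumes a recent LlamaIndex release (where the quickstart classes live in `llama_index.core`), an OpenAI API key in the environment, and a hypothetical `./private_docs` folder; adjust imports for older versions.

```python
# Minimal sketch: index a folder of private documents with LlamaIndex
# and answer questions grounded in them. Assumes a recent LlamaIndex
# release (core classes under llama_index.core) and OPENAI_API_KEY set
# in the environment; the ./private_docs path and the query are hypothetical.
from llama_index.core import SimpleDirectoryReader, VectorStoreIndex

# Load every readable file from the private folder.
documents = SimpleDirectoryReader("./private_docs").load_data()

# Build an in-memory vector index over the document chunks.
index = VectorStoreIndex.from_documents(documents)

# Ask a question; retrieved chunks are passed to the LLM as context.
query_engine = index.as_query_engine()
response = query_engine.query("What does our internal runbook say about rotating API keys?")
print(response)
```

Per the study’s finding, this kind of pipeline is worth the effort when `./private_docs` holds knowledge the model never saw in training, not when it merely mirrors public documentation.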