This is a useful recent talk on why LLMs hallucinate. It seems that fine-tuning can teach the model to hallucinate more if that knowledge was not previously present in the model.