Summary of week 2/2024

Substack

I moved my weekly summary to Substack. I want to keep this space a bit more personal and “rough”.

AI

Recently, I’ve been researching hallucinations in LLMs and strategies to mitigate them. There is an entire world to explore, and I started wondering about the nature of creativity itself.

“Intrinsic” hallucinations contradict the model’s training data and are, in that sense, false or misleading. “Extrinsic” hallucinations, on the other hand, consist of information that cannot be verified against the training data at all.

For example: “we discovered life on Mars” is false, but “life exists on Mars” could be true. Nobody ever claimed that life exists on Mars, yet we might reach that conclusion through reasoning.
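The distinction can be sketched as a toy rule, assuming a tiny stand-in “training corpus” (the `KNOWN_FACTS` dictionary and the `classify` function are hypothetical, purely for illustration — real hallucination detection is far harder):

```python
# Toy sketch: a claim is "intrinsic" if it contradicts something the
# stand-in corpus asserts, "extrinsic" if the corpus is silent on it.
KNOWN_FACTS = {
    # claim -> whether the corpus asserts it as true
    "we discovered life on Mars": False,  # the corpus never claimed this
}

def classify(claim: str) -> str:
    if claim in KNOWN_FACTS and KNOWN_FACTS[claim] is False:
        return "intrinsic"  # directly contradicts the corpus
    return "extrinsic"      # cannot be verified from the corpus alone

print(classify("we discovered life on Mars"))  # intrinsic
print(classify("life exists on Mars"))         # extrinsic
```

Under this framing, an extrinsic hallucination is simply a statement the training data neither confirms nor denies.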

Can we say that having an innovative idea is a kind of “extrinsic hallucination”?