Two kinds of LLM hallucinations

After writing about LLM error rates, I wanted to talk about a specific kind of error: the hallucination. I am aware that there is a lot of research into this subject, so I decided to read a scholarly review:

“Survey of Hallucination in Natural Language Generation” by Ziwei Ji et al. (2023), publicly accessible on arXiv.

I’m not aiming to summarize the entire subject, but rather to answer a specific question: Are hallucinations an effectively solvable problem, or are they here to stay?

What is a hallucination?

“Hallucination” is a term used in the technical literature on AI, but it’s also entered popular usage. I’ve noticed some differences, and I’d like to put the two definitions in dialogue with each other.

[Read more…]

LLM error rates

I worked on LLMs, and now I got opinions. Today, let’s talk about when LLMs make mistakes.

On AI Slop

You’ve already heard of LLM mistakes, because you’ve seen them in the news. For instance, some lawyers submitted bogus legal briefs–no, I mean those other lawyers–no, the other ones. Scholarly articles have been spotted with clear ChatGPT conversation markers. And Google recommended putting glue on pizza. People have started calling this “AI Slop”, although maybe the term refers more to image generation than to text? This blog post is focused exclusively on text generation, and mostly on non-creative uses.

[Read more…]

Environmental impact of LLMs

A reader asked: what is the environmental impact of large language models (LLMs)? So I read some articles on the subject, and made comparisons to other technologies, such as video games and video streaming. My conclusion is that the environmental footprint is large enough that we shouldn’t ignore it, but I think people are overreacting.

Pricing

I’m not an expert in assessing environmental impact, but I’ve had a bit of experience assessing computational prices for LLMs. Pricing might be a good proxy for carbon footprint, because it doesn’t just represent energy costs, but also the costs of building and operating a data center. My guess is that across many different kinds of computation tasks, the carbon footprint per dollar spent is roughly similar. And in my experience, LLMs are far from the most significant computational cost in a typical tech company.

[Read more…]

I worked on LLMs

Generative AI sure is the talk of the town these days. Controversial, is it not? It’s with some trepidation that I disclose that I spent the last 6 months becoming an expert in large language models (LLMs). It began earlier this year, when I moseyed through the foundational LLM paper.

I’d like to start talking about this more, because I’ve been frustrated with the public conversation. Among both anti-AI folks and AI enthusiasts, people have weird, impossible expectations of LLMs, while being ignorant of their actual capabilities. I’d like to provide a reality check, so that readers can be more informed as they argue about it.

[Read more…]