Two kinds of LLM hallucinations

After writing about LLM error rates, I wanted to talk about a specific kind of error: the hallucination. I am aware that there is a lot of research into this subject, so I decided to read a scholarly review:

“Survey of Hallucination in Natural Language Generation” by Ziwei Ji et al. (2023), publicly accessible on arXiv.

I’m not aiming to summarize the entire subject, but rather to answer a specific question: Are hallucinations an effectively solvable problem, or are they here to stay?

What is a hallucination?

“Hallucination” is a term used in the technical literature on AI, but it’s also entered popular usage. I’ve noticed some differences, and I’d like to put the two definitions in dialogue with each other.

[Read more…]

Targeted Advertising: Good or evil?

I have had some professional experience in marketing. It’s a job, you know? Targeted advertising is a very common data science application. Specifically, I’ve built models that use credit data to decide who to send snail mail to. Was this a positive contribution to society? Eh, probably not.
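
For the curious, here is a minimal, hypothetical sketch of what that kind of direct-mail targeting model can look like: a propensity model scored over credit-style features. The feature names, data, and cutoff are invented for illustration, not taken from any real campaign.

    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler

    # Toy training data: credit-style features for past recipients, and whether
    # each one responded to the mailing. All values are made up for illustration.
    X_train = np.array([
        [620, 0.45, 2],   # [credit score, credit utilization, open accounts]
        [710, 0.30, 4],
        [680, 0.60, 1],
        [750, 0.10, 5],
    ])
    y_train = np.array([0, 1, 0, 1])  # 1 = responded to a past mailing

    model = make_pipeline(StandardScaler(), LogisticRegression())
    model.fit(X_train, y_train)

    # Score new prospects and mail only those above a response-probability cutoff.
    prospects = np.array([[640, 0.55, 2], [730, 0.20, 6]])
    scores = model.predict_proba(prospects)[:, 1]
    mail_list = prospects[scores > 0.5]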

In the title I ask, “good or evil?”, but obviously most people think the answer is “evil”. I’m not here to convince you that targeted advertising is good, actually. But I have a bunch of questions, ultimately trying to figure out: why do we put up with targeted ads?

For the sake of scope, I’m thinking mainly about targeted ads as they appear on social media platforms. And I’m just thinking of ads that try to sell you a commercial product, as opposed to political ads or public service announcements. These ads may be accused of the following problems:

  1. Using personal data that we’d rather keep private.
  2. Psychic pollution: wasting our time and attention, or making us unsatisfied with what we have.
  3. Misleading people into purchasing low quality or overpriced goods.

[Read more…]

LLM error rates

I worked on LLMs, and now I got opinions. Today, let’s talk about when LLMs make mistakes.

On AI Slop

You’ve already heard of LLM mistakes, because you’ve seen them in the news. For instance, some lawyers submitted bogus legal briefs. No, I mean those other lawyers. No, the other ones. Scholarly articles have been spotted with clear ChatGPT conversation markers. And Google recommended putting glue on pizza. People have started calling this “AI Slop”, although maybe the term refers more to image generation than to text? This blog post is focused exclusively on text generation, and mostly on non-creative uses.

[Read more…]

Environmental impact of LLMs

A reader asked: what is the environmental impact of large language models (LLMs)? So I read some articles on the subject, and made comparisons to other technologies, such as video games and video streaming. My conclusion is that the environmental footprint is large enough that we shouldn’t ignore it, but I think people are overreacting.

Pricing

I’m not an expert in assessing environmental impact, but I’ve had a bit of experience assessing computational prices for LLMs. Pricing might be a good proxy for carbon footprint, because it doesn’t just represent energy costs, but also the costs of building and operating a data center. My guess is that across many different kinds of computational tasks, the carbon footprint per dollar spent is roughly similar. And in my experience, LLMs are far from the most significant computational cost in a typical tech company.
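
To make that dollars-to-carbon proxy concrete, here is a back-of-the-envelope sketch. Both numbers are made-up placeholders (the real conversion factor varies a lot by provider and by the local power grid), so treat this as illustrating the reasoning, not as an actual estimate.

    # Back-of-the-envelope: estimate carbon footprint from compute spend.
    # Both inputs are made-up placeholders, not measured values.
    monthly_llm_spend_usd = 5_000   # hypothetical monthly LLM compute bill
    kg_co2_per_usd = 0.5            # assumed kg of CO2 per dollar of compute

    estimated_kg_co2 = monthly_llm_spend_usd * kg_co2_per_usd
    print(f"Rough estimate: {estimated_kg_co2:.0f} kg CO2 per month")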

[Read more…]

I worked on LLMs

Generative AI sure is the talk of the town these days. Controversial, is it not? It’s with some trepidation that I disclose that I’ve spent the last six months becoming an expert in large language models (LLMs). It began earlier this year, when I moseyed through the foundational LLM paper.

I’d like to start talking about this more, because I’ve been frustrated with the public conversation. Among both anti-AI folks and AI enthusiasts, people have weird, impossible expectations of LLMs, while being ignorant of their other capabilities. I’d like to provide a reality check, so that readers can be more informed as they argue about it.

[Read more…]

Let’s Read: Transformer Models, Part 3

This is the final part of my series reading “Attention is all you need”, the foundational paper that invented the Transformer model, used in large language models (LLMs). In the first part, we covered some background, and in the second part we reviewed the architecture of the Transformer model. In this part, we’ll discuss the authors’ arguments in favor of Transformer models.

Why Transformer models?

The authors argue in favor of Transformers in section 4 by comparing them to the preexisting alternatives, namely recurrent neural networks (RNNs) and convolutional neural networks (CNNs).
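
To give a flavor of the comparison before diving in: a recurrent layer has to process tokens one after another, while self-attention relates every pair of positions in a single parallel step. Here is a toy sketch of that contrast; the scaled dot-product formula follows the paper, but the code itself is my own illustration, not the authors’.

    import numpy as np

    n, d = 10, 64                   # sequence length, model dimension
    x = np.random.randn(n, d)       # one toy input sequence

    # Recurrent layer: n sequential steps, each depending on the previous state.
    W_h, W_x = np.random.randn(d, d), np.random.randn(d, d)
    h = np.zeros(d)
    for t in range(n):              # this loop cannot be parallelized across positions
        h = np.tanh(h @ W_h + x[t] @ W_x)

    # Self-attention: all positions interact at once via scaled dot-product attention.
    Q, K, V = x, x, x               # ignoring the learned projections for brevity
    scores = Q @ K.T / np.sqrt(d)   # (n, n) matrix of pairwise interactions
    weights = np.exp(scores) / np.exp(scores).sum(axis=-1, keepdims=True)
    attended = weights @ V          # (n, d), computed with no sequential loop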

[Read more…]

Let’s Read: Transformer Models, Part 2

This article is a continuation of my series reading “Attention is all you need”, the foundational paper that invented the Transformer model, which is used in large language models (LLMs).

In the first part, I covered general background. This part will discuss Transformer model architecture, basically section 3 of the paper. I aim to make this understandable to non-technical audiences, but this is easily the most difficult section. Feel free to ask for clarifications, and see the TL;DRs for the essential facts.

The encoder and decoder architecture

The first figure of the paper shows the architecture of their Transformer model:

[Figure 1 from “Attention is all you need”: diagram of the Transformer architecture]
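
If you would like to poke at this architecture yourself, PyTorch ships a ready-made version of the encoder-decoder stack. The sketch below just instantiates it with the base hyperparameters from the paper (6 encoder layers, 6 decoder layers, model dimension 512, 8 attention heads); the tensors are random placeholders standing in for embedded token sequences, not real data.

    import torch
    import torch.nn as nn

    # The paper's base configuration: 6 + 6 layers, d_model = 512, 8 heads, d_ff = 2048.
    model = nn.Transformer(d_model=512, nhead=8,
                           num_encoder_layers=6, num_decoder_layers=6,
                           dim_feedforward=2048)

    # Random stand-ins for embedded source and target sequences,
    # shaped (sequence length, batch size, model dimension).
    src = torch.rand(10, 2, 512)
    tgt = torch.rand(7, 2, 512)
    out = model(src, tgt)
    print(out.shape)  # torch.Size([7, 2, 512])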

[Read more…]