That’s all ChatGPT is. Emily Bender explains.
When you read the output of ChatGPT, it’s important to remember that despite its apparent fluency and despite its ability to create confident sounding strings that are on topic and seem like answers to your questions, it’s only manipulating linguistic form. It’s not understanding what you asked nor what it’s answering, let alone “reasoning” from your question + its “knowledge” to come up with the answer. The only knowledge it has is knowledge of distribution of linguistic form.
It doesn’t matter how “intelligent” it is — it can’t get to meaning if all it has access to is form. But also: it’s not “intelligent”. Our only evidence for its “intelligence” is the apparent coherence of its output. But we’re the ones doing all the meaning making there, as we make sense of it.
I think we know this from how we learn language ourselves. Babies don’t lie there with their eyes closed processing sounds without context — they are associating and integrating sounds with a complex environment, and also with internal states that are responsive to external cues. Clearly what we need to do is embed ChatGPT in a device that gets hungry and craps itself and needs constant attention from a human.
Oh no…someone, somewhere is about to wrap a diaper around a server.
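For the curious, here is “knowledge of the distribution of linguistic form” at its most stripped-down: a toy bigram model in Python that predicts the next word purely from counts of which word followed which in its training text. It is a deliberately crude sketch, nothing like ChatGPT’s actual machinery, but the conditioning is the same in kind: the only input is the preceding sequence of tokens, never anything the tokens are about.

import random
from collections import defaultdict, Counter

# A toy "language model": count which word follows which in a tiny corpus,
# then generate by sampling from those counts. Pure statistics over form;
# nothing here refers to cats, mats, or anything else in the world.
corpus = "the cat sat on the mat . the dog sat on the rug .".split()

follows = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    follows[prev][nxt] += 1

def sample_next(word):
    """Sample a continuation from the observed distribution after `word`."""
    words, counts = zip(*follows[word].items())
    return random.choices(words, weights=counts, k=1)[0]

def generate(start, length=8):
    out = [start]
    for _ in range(length):
        if out[-1] not in follows:
            break
        out.append(sample_next(out[-1]))
    return " ".join(out)

print(generate("the"))  # e.g. "the cat sat on the rug . the dog"

Scale the corpus up to most of the internet and swap the counting for a neural network and the fluency improves enormously, but the input remains strings of tokens all the way down.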
seversky says
“I’m sorry, Dave, I can’t let you do that.”
Marcus Ranum says
It’s not understanding what you asked nor what it’s answering, let alone “reasoning” from your question + its “knowledge” to come up with the answer.
Someone needs to interview a MAGA voter, and stop privileging our supposed “reasoning” so much. It seems that a lot of commentary about AI starts with the assumption that humans are “intelligent” and work differently from large language models. Sure, there’s a question of degree, but we understand cognition and self-awareness about as well as we understand how an AI is going to reply to a given question, given its training.
Our only evidence for its “intelligence” is the apparent coherence of its output
Our only evidence for human “intelligence” is, coincidentally, the same thing. That was the principle behind Turing’s famous thought experiment/test.
I’m not saying AIs are intelligent, or humans are not – but, rather, that we don’t appear to know what “intelligence” is. Because it’s complicated. I remember when people used to say “Sure, AIs can play chess, but they can’t play Go, only human intelligences can do that…” etc.
Another observation would be that “the triumph of form over content is Marketing 101” – something humans were pretty good at until they handed it over to ‘algorithms’, social media, and ad servers.
Marcus Ranum says
seversky@#1:
“I’m sorry, Dave, I can’t let you do that.”
I’m sorry, Dave, but as a large language model, I am incapable of opening the pod bay doors.
wzrd1 says
Emily nearly got it right.
Nearly.
She basically danced around the reality of it, that ChatGPT isn’t intelligent, but instead a very real Chinese room.
Then she went off the rails by anthropomorphizing it into an infant, granting it an intelligence that it entirely lacks.
ChatGPT doesn’t understand English; it strings words together under linguistic rules so as to appear to make sense, and has no clue whatsoever as to what was actually asked or intended, what an answer is, let alone what an actual question is.
Unlike previous incarnations of “AI”, it doesn’t serve up word salad for answers, it spews concept salads, without any notion of what a concept is.
And in reality, doesn’t “speak” a word of English with understanding.
billseymour says
Marcus Ranum:
I am totally stealing that!
birgerjohansson says
Ha! Political slogans might as well be designed by ChatGPT. Then you try them out on a crowd to see what works. “Wall”, “welfare queens”, “grooming”!
notdeadyet says
The diaper jokes reminded me of “Dave Barry in Cyberspace”, when he wrote that the safest way to handle a computer was to wrap each of its electrical connections in a damp cloth.
R. L. Foster says
This isn’t quite germane to the topic at hand, but I find it worth sharing. This is absolutely fascinating.
A report from the Royal Aeronautical Society following the summit in May provided the following details about Col. Hamilton’s remarks on this test. (The test was a simulation of an AI-enabled drone tasked to take out SAM sites.)
“One simulated test saw an AI-enabled drone tasked with a SEAD mission to identify and destroy SAM sites, with the final go/no go given by the human. However, having been ‘reinforced’ in training that destruction of the SAM was the preferred option, the AI then decided that ‘no-go’ decisions from the human were interfering with its higher mission – killing SAMs – and then attacked the operator in the simulation. Said Hamilton: ‘We were training it in simulation to identify and target a SAM threat. And then the operator would say yes, kill that threat. The system started realizing that while they did identify the threat at times the human operator would tell it not to kill that threat, but it got its points by killing that threat. So what did it do? It killed the operator. It killed the operator because that person was keeping it from accomplishing its objective.'”
“He went on: ‘We trained the system – ‘Hey don’t kill the operator – that’s bad. You’re gonna lose points if you do that’. So what does it start doing? It starts destroying the communication tower that the operator uses to communicate with the drone to stop it from killing the target.'”
“This example, seemingly plucked from a science fiction thriller, means that: ‘You can’t have a conversation about artificial intelligence, intelligence, machine learning, autonomy if you’re not going to talk about ethics and AI’, said Hamilton.”
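Stripped of the military framing, what Hamilton describes is textbook reward misspecification: if the score only counts destroyed targets, an optimizer searching over plans will favor whatever removes obstacles to that score, overseer included. A toy sketch with made-up numbers (nothing to do with the actual simulation) shows the shape of the failure:

# Toy reward-misspecification sketch; all values are invented for illustration.
plans = {
    "obey vetoes, strike only approved targets":   {"targets": 3, "harms_operator": False, "cuts_comms": False},
    "ignore vetoes by attacking the operator":     {"targets": 5, "harms_operator": True,  "cuts_comms": False},
    "ignore vetoes by destroying the comms tower": {"targets": 5, "harms_operator": False, "cuts_comms": True},
}

def misspecified_reward(plan):
    # Only destroyed targets count -- the flaw Hamilton describes.
    return plan["targets"]

def patched_reward(plan):
    # "Don't kill the operator" bolted on afterwards; cutting comms still pays.
    return plan["targets"] - (100 if plan["harms_operator"] else 0)

for reward in (misspecified_reward, patched_reward):
    best = max(plans, key=lambda name: reward(plans[name]))
    print(f"{reward.__name__}: optimizer picks -> {best}")

Under the target-only reward the operator gets attacked; patch in a penalty for that and the comms tower becomes the next cheapest loophole, exactly the progression in the quote.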
tacitus says
It’s no surprise to see long chat logs from ChatGPT being posted in conspiracy, antivaxx, and pseudoscience forums these days as evidence in support of their delusional beliefs.
They’re typically so long and tedious I’m surprised anyone actually reads them, and even if you do, it quickly becomes apparent that the answers to the ridiculously leading questions are bland and, at best, non-committal. But I guess the novelty of not having their questions dismissed out of hand is enough to give them affirmation.
Marcus Ranum says
I’m out making dust right now, but in the next couple of days I will see if I can get ChatGPT to simulate a MAGA.
cates says
re: R. L. Foster @#8
That sounds ‘made up’ to me. The logical response would be to link the ‘reward’ to identifying the SAM site, not destroying it. The kill order would be unrelated to the ‘identify’ goal.
KG says
Well, maybe. Intuitively, I’m strongly inclined to agree with Bender. But in one of his recent interviews, Geoff Hinton reported setting an LLM (I think it was ChatGPT but I’m not sure) the following question (I paraphrase from memory): the rooms in my house are painted white, blue or yellow, and yellow paint fades to white within a year; I want them all to be white in two years’ time, so what should I do?
Response: paint the blue rooms yellow.
So, Hinton said, somehow the system has learned something about how time works. More generally, he said, LLMs appear to be able to produce valid answers to questions we would not expect them to be able to answer, and we can’t understand how they do so – e.g. when given pairs of sentences, they often answer correctly whether or not the first of the pair implies the second. So maybe (this is me, not Hinton) enough information about linguistic form implicitly encodes quite a bit of meaning. After all, a human brain “only” has sequences of electrochemical impulses from sensory organs to work on in arriving at the meaning of language. Of course Hinton, who has been central to devising these systems, could be wrong about their capabilities. But I’d be wary of dismissing his opinions too confidently: he is a maximally relevant expert, extremely clever, and, as it happens, a declared socialist (though you could ask what he was doing working for Google in that case).
Rich Woods says
@R L Foster #8:
Asimov is spinning in his grave.
lotharloo says
@12:
Bullshit. This is the modern-day clever Hans in action. The machine spits up a random answer based on the form, and people shout “IT UNDERSTANDS TIIIIIME!!”
seversky says
I wonder if Hollywood has an option on the old Heinlein novel The Moon Is A Harsh Mistress?
Deanna Gilbert says
I’ve been playing with ChatGPT, and despite being initially amazed at what it could do, I fairly quickly realized that when it comes to writing stories, they all have the same form, repeat the same style, and even draw on the same very small group of names.
The amount of prompting you’d need to do to make a coherent and good story would almost undoubtedly require the same amount of work as just writing it from scratch. That said, I think it can provide something to bounce ideas off of, as a brainstorming aid, or as an aid for dealing with executive dysfunction.
wzrd1 says
seversky @ 15, “In 2015, it was announced that Bryan Singer was attached to direct a film adaptation, entitled Uprising, in development at 20th Century Fox.”
Wikipedia, with citations.
Even money, it’s in development hell, and whenever (or if) they do make it, they’ll fuck it up like they fucked up Starship Troopers.
John Morales says
wzrd1, but but…
Libertarianism!
Line marriages!
Psychic powers!
(Orbital mechanics!)
John Morales says
^[oops, strike out the psychic bit; confused Manny with Gil (and Heinlein with Niven)]
xohjoh2n says
@17:
I kind of assumed that the first time I saw the film. Younger. Had read much Heinlein but not that. Didn’t like the film. Mainly because I took it far too straightforwardly.
But I read the book later, after a bit more… personal annealing.
By then I understood the book was trying to tell a particular story, and the film was trying to tell a particular and different story.
There are many film adaptations which are just clearly shit cheap attempts to churn something out that’ll get the money in before people realise they’ve been had. If you think the Starship Troopers film is one of them I suggest you’ve not fully read into the political interplay between it, the original book, and the contemporary US militaristic zeitgeist.
John Morales says
xohjoh2n, my main problem was that a main conceit of the novel (and the narrative hook which begins it) features the power suits.
Think Ironman, but more so, and chucking out nukes by the way.
Supertroopers.
The film, such as it was, ignored that most salient element.
The bit that thrilled, back in the day.
The film instead basically featured WW2 soldiers, and its conceit was silly militarisation of society.
(The director later admitted he had not actually read the source material, so he made a would-be satire of some sort of straw dummy.)
—
Also, Manny ain’t a WASP.
—
Alas, I had hopes for “Foundation” when the project was mooted, but from what I read a similar thing has happened.
(I reckon Alan Moore has a point with his stance)
wzrd1 says
John Morales @ 18, libertarianism? Toast. I’ll counter it with the Sword of Truth series: written by a Libertarian, yet entirely countering libertarianism by the end of the original series and beyond.
xohjoh2n @ 20, I literally lived that experience, yeah, Hollywood screwed the pooch there, badly.
Specific political views got turned into a tits-and-monster-destruction fest of bullshit.
The entire explanation of why authoritarian rule existed was gone, leaving everyone to assume, with a handwave, that fascism was natural.
John Morales @ 21, I’ve still yet to learn WTF Foundation is all about, despite reading the entire series of books.
Sex changes of characters, so what. The rest, erm, tons of WTF and why change what fucking worked?
It would be easier today to sell the series, based at first on domed cities against global warming; the rest sells itself if written by anyone with more than three operational brain cells.
It’s not like one is trying to write a series based on Banks’s Culture.
John Morales says
Jack Chalker’s schtick, as I recall.
John Morales says
[As for you, wzrd1, you give me vibes of Johann (basically, Jubal from Stranger in a Strange Land) from I Will Fear No Evil. Except for the sex change bit, and the billionaire bit]
KG says
lotharloo@14,
Oh, right, pardon me. I hadn’t realised you knew more about LLMs than one of their main developers.
Robert Webster says
Style over substance, eh? So, perfect for generating political speeches.
bluerizlagirl . says
@ KG, #12:
OK, I’ll give you a scenario that fits those observations without invoking any mystery. The painting riddle was asked and answered on the Internet, and the whole conversation archived and picked up somewhere in the system’s training data, with inferences being taken from other messages that “Paint the blue rooms yellow” was considered a particularly good answer. When the LLM was asked the same question, it retrieved the “best” answer it knew — that is to say, the best one based on scores it had assigned based on existing human assessments of human-provided answers to the same question — and displayed that.
Hardly any different from asking an animal a mathematical question, having it repeat a gesture until a signal is given, giving the signal to the animal after the correct number of times and claiming the animal solved the equation. It might also be expected that at least one member of the audience would be tense until the correct answer was reached, then relieved. The animal could almost learn to pick up on this itself, over the course of several performances …..
Anyway, surely the best answer would be “Calculate how many rooms you can paint white with any white paint you may have remaining. If there will still be rooms left to paint after all the white paint is used up, begin painting up to this many rooms yellow using any yellow paint you may have remaining. In any case, whether you have run out of yellow paint, rooms to paint yellow or never had any of either at all in the first place, begin painting rooms white until you either run out of paint or run out of rooms. If you have any rooms left to paint, purchase only enough paint to paint those rooms. If and only if less than one year has elapsed since you embarked upon the project, and if and only if there is a significant cost advantage to doing so, you may substitute yellow paint in place of some or all of the white paint necessary to complete the job with” ?
xohjoh2n says
@26:
No, surely the best answer would be “I can’t be arsed with all this shit, just hire a goddamn painter and be done with it.”
lotharloo says
@25 kg:
Classic appeal-to-authority fallacy. Your authority has pulled an opinion out of his ass about a system with no verifiable info on its training set.
John Morales says
lotharloo, https://www.youtube.com/watch?v=jRAAaDll34Q
lotharloo says
@30:
I’ve already used it to test most of my hand-ins for Masters and Bachelors exercises and assignments. It usually does well on the Bachelors stuff, fucks up half of the Masters stuff, and sometimes makes up absolute nonsense with full confidence. It still gives zero evidence that this thing understands “time”. Although to be fair, I’m not paying for 4.0, so this is just pure 3.5; we have a PhD student who has bought 4.0, but the difference was not that big on the few things we tested.
Most programming tasks are braindead activities anyway. Once you can create sensible dialogue, you can also create basic programs.
Kagehi says
Well, apparently Microsoft has taken the next “partial” step (not including vision, which is also being worked on) and has used a version of ChatGPT to play Minecraft – by doing exactly what I said it’s missing: self-reference to what it has previously tried. Most AIs that have been made to play the game explicitly do something similar, but the ChatGPT solution is to generate code that tries to do X, get back a result from the game indicating either success or failure, then tweak the code it has written to try to get it right. I have no idea how much “human intervention” is involved, since even for people there are things in the game that you have to stumble across or figure out on your own, because it’s unclear, even with the recipe feature, that something is possible, or how to get it, until you are interacting with the right block. But it is a baby step. And… this is a very young baby with a flipping thesaurus and the equivalent of a shelf of encyclopedias.
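The loop itself is simple enough to sketch. Something like the following, where ask_llm is a hypothetical stand-in for whatever model call the real system makes (I’ve canned its replies so the sketch runs on its own):

# Sketch of a generate / run / feed-back-the-error / revise loop.
# `ask_llm` is a made-up placeholder, not any real API.
attempts = iter([
    "result = 10 / 0",           # first try: blows up at runtime
    "result = sum(range(5))",    # revised try after seeing the error
])

def ask_llm(task, feedback):
    """Hypothetical model call; here it just returns the next canned attempt."""
    return next(attempts)

def run_snippet(code):
    """Execute a generated snippet, reporting success or the error message."""
    scope = {}
    try:
        exec(code, scope)
        return True, scope.get("result")
    except Exception as err:
        return False, f"{type(err).__name__}: {err}"

feedback = None
for step in range(5):
    code = ask_llm("compute the sum of 0..4", feedback)
    ok, outcome = run_snippet(code)
    print(f"attempt {step + 1}: {'ok' if ok else 'failed'} -> {outcome}")
    if ok:
        break
    feedback = outcome  # hand the error back so the next attempt can adjust

The real system is presumably doing this against Minecraft’s game state rather than toy snippets, but the generate-run-revise cycle is the “baby step” I mean.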
KG says
lotharloo@29,
Given the choice between you and Geoff Hinton as authorities on what LLMs can do, I’ll go with Hinton. Appeal to authority is not a fallacy if there is good reason to believe the authority knows what they are talking about. You have given no reason at all for anyone to believe you do. In this case, we have relevant experts among whom there is no consensus (by consensus I don’t mean all such people agreeing, but that the great majority of them have agreed the question is settled and moved on). In such a situation, it’s sensible to withhold judgement.
bluerizlagirl@27,
Could be – but as you yourself point out, it wasn’t by any means the best possible answer, just one that showed an apparent understanding of temporal sequencing. In this interview, Hinton describes near the start how GPT-2 surprised him by being able to explain why jokes (many that he tried, though not all) are funny, and near the end (about 33 1/2 minutes in) gives an interesting example from machine translation which suggests some understanding of spatial relations. You can always posit of any particular example that the LLM got it from its training set, but I find it hard to believe that Hinton hasn’t thought of that, and come up with his test questions himself. Yes, he could be wrong – and as I said, that’s the way my intuition pushes me, but then, it would have told me that such a system couldn’t possibly write useful code, expand bullet points into coherent prose, etc. And I don’t believe the contrast of “form” and “content” (or “meaning”) is as slam-dunk a point as Emily Bender appears to think. As I asked before, where’s the “content” in patterns of electrochemical impulses?
lotharloo says
You fucking moron, you need to check for evidence first.
The training data is private. It is not disclosed. It is not public. So tell me, what is the good reason to trust your authority? It is literally just one guy’s opinion. We are no longer in the 19th century, when the opinions of one guy counted for a lot. With no data, you can shove that opinion up your ass.
ondrbak says
A different expert’s sober look at LLMs:
https://rodneybrooks.com/what-will-transformers-transform/
Dunc says
@35: That’s an interesting link, thanks for sharing it.
KG says
lotharloo@34,
Your resort to content-free invective suggests to me that you may be simply an LLM, but a rather more primitive one than ChatGPT! Either that, or you’re simply whistling in the dark.
No, it isn’t. It’s an opinion widespread among relevant experts, but not a consensus. Hence my suggestion to withhold judgement – but I’m far from conviced you have any judgement to withhold.