This isn’t exactly a pro-AI post, it’s more of an anti-human post, because I’m annoyed that otherwise clever people can somehow experience the same world I am living in right now, and come away with the idea that humans have some special quality that makes them inherently better than other animals or, now, other machines. Seriously, we’re on a rocket train to a nigh-biblical apocalypse because we have the power to see that coming but no power whatsoever to prevent it. (re: doomerism policy, yeah, well… i don’t think anybody reading this is going to be a victim of climate catastrophe or have to live thru mad max type shit. take with grains.) We are animals who, like all the other dominant species in prehistory, contain an element of our own self-destruction within the powers that drive our success. And we, in masses, are so profoundly foolish that it fucking hurts. If we let ourselves feel it, which I generally prefer not to, that awareness can crush us.
Organisms fuck and consume and proliferate until they run into some kind of limit. Equilibria are established, they are upset, and then the ruin comes. Humans are subject to all of these things. Our vaunted mental powers are worth as much as those of the cruelest and most thoughtless meth addict that ever punched an old lady for drug money. We are fucking dogshit, bro. Fuck people a lot.
Let me come off that for a minute. Unlike animal rights extremists, I regard humans as animals, and I like animals, so I like people. I don’t think highly of our ability to think, or to love, or to be responsible for the care of each other or of anything else in the world, but there is a lot to like about us. We’re funny, we’re cute, we are above all tragic, which is a case in itself for sympathy and care. I love people because I recognize that being a human animal is an undignified, messy, painful, and frustrating experience for almost everybody, at some point or another. Care for people a lot. Bro.
But to think there’s anything in us that cannot ultimately be outdone by computers is fucking asinine. They’re already kicking our asses at almost anything that matters. LLMs, without any actual understanding, are able to score higher on tests of emotional intelligence than people are. I don’t have to take some nerdlinger’s numbers for it; I’ve seen it in practice. Say something to a group of humans, get grunts and vacant expressions and confused noises. Say the same thing to an LLM, get words which reflect a perfect understanding of what was said – again, from something without actual understanding!
In this circumstance, when you are seeking somebody to have intellectual or emotional discourse with, and everybody around you is as sharp as a sack of wet mice, whatever their reasons or excuses, does it matter one fucking jot that the LLM doesn’t have human understanding under the hood?
I guess this was a pro-AI post after all. Ignore as you please. Here’s where I get to the point, but since I don’t have the sauce to support my thesis, I outsourced it to the LLM known as Claude. Said I unto it…

yes, this is in response to the extra sciencey goal post shifting in somebody else’s article. i dashed together a post to replace this one several hours from now. t’ain’t much, but you won’t have to look at this article for long, if you don’t want to.
i think it’s funny mjr said something about free will and hj felt the need to rebut that, because we also have bloggers who don’t believe in free will at all. chaos rulez, babeeee…
I already agree with the sentiment. Yup.
—
I find your technique interesting, not least because of its cogency.
It’s a different style to mine, but it obviously works.
I myself prefer to use blocks when building up to something complex, akin to lemmas, and then synthesise.
You used a rather complex query right away.
The more bits, the greater the slop.
I put your query into copilot for analysis (I told it to assess, not to service it).
Copilot: Here is a structural audit of the quoted query.
No servicing, no content‑retrieval — just complexity analysis, clause structure, and dependency mapping.
1. Overall structure
The query is a multi‑paragraph argumentative prompt containing:
– Premise statements
– Meta‑claims about discourse
– Causal chains
– Requests for historical examples
– Requests for contemporary examples
It is high‑complexity because it mixes epistemic claims, ethical claims, and research directives.
[several elided]
4. Complexity classification
Using a rough scale:
• Low: single question, no dependencies
• Medium: multi‑part question, minimal subordination
• High: nested clauses, causal chains, multiple domains
• Very high: requires synthesis across disciplines and time periods
This query is high to very high complexity due to:
• nested causal reasoning
• meta‑epistemic claims
• historical and contemporary research directives
• multiple dependent clauses per paragraph
• conceptual interdependence
Me: given that, assess its merits as a prompt to an AI engine
Copilot: [first 6 elided]
7. Overall evaluation
Strengths
• Conceptually rich
• Contains explicit research tasks (eventually)
• Provides context for why the user cares
Weaknesses
• Overlong preamble
• Multiple implicit premises
• Multi‑domain sprawl
• Task buried at the end
• High cognitive load for the model
• Ambiguous stance expectations (analysis vs. agreement vs. sourcing)
Net assessment
As an AI prompt, it is high‑complexity, high‑noise, low‑precision.
It is serviceable but suboptimal.
A more effective prompt would isolate the research tasks and remove the narrative ballast.
hey, come up with a better post in the comments here, using AI. i’m into it.
um. *I’m convinced that* was the beginning of the bit I pasted.
Also, why ditch this? Does it embarrass you or something?
re #4:
You got it. I don’t want to spoil your journey.
Or spoil your style.