Jellicoe’s Beard Via Minatogawa, a Dialogue of Engines – 1


This will (I hope!) take a while, and I’m not going to bother being scrupulous about citing the sources of my ideas. I will, however, be scrupulous in the philosophical sense about the origins of my ideas, partly out of pride in my education and partly because – like everything else I leave on the Internet – this becomes part of my legacy.

Herein I hope to post portions of my dialogues with emergent AI, mostly in the interest of pure bloody fun but also as a sort of counterpoint to the endless anti-AI posting to the tune of “the damn things are just automatic systems for regurgitating the writings of their betters.” Well… yes, but what am I?

I am in no danger, at this point, of being dishonest, but this is where a human would typically soften his words by appealing to the fallible nature we are so familiar with: “…I am not a scholar of naval warfare” or whatever. But I do know something about that topic, and strategy in general, and that qualifies me to tell a report on the battle of Waterloo from the burger ad that runs in the middle of it. There’s a point hidden in that: amid all the verbiage we are expected to make sense of during one of our days, making sense of it at all is a considerable task. I just called it “considerable”, like, I dunno, moving an obelisk from Karnak to New York City more or less intact, but that might be a good example – humans are fond of setting ourselves monumental tasks, blowing past them, and deriding our forebears for accomplishing them. The other day, I was a bit derisive of the ancient Welsh, who wrestled a puny rock a few miles to a henge someplace or other, but the fact remains I’ve never worked that hard in my life. Had I been enrolled in “rock dragging gang #4” I am sure I’d have given it my best for the honor of the gods and my unperforated liver, but we are constantly tangled in, and playing with, the threads of the generations who went before us, either in their stories or in the consequences of their stories.

When I attempt to bring forward these discussions I will try to do it in the context of a sugar-powered meat engine that enjoys strategy, pizza, sharp things, and good design. Those are some starting points but, of course, there are more. Whenever an engine such as myself ventures out into the battlefield of ideas, it is equipped with its current and past experience, and a small cloud of ideas that hover around it, ready to serve. In my case, the engine keeps batting down the one labeled “Hegelian Dialectic” and is unduly fond of the clean, crisp edges that “Nihilism” leaves behind. If you play with these things yourself, you’ll learn that victory is won only through discovery or survival, not the kind of knock-down-drag-out discussions blogs sometimes turn into. That’s victory, sure, but if you want to play “Napoleon’s Old Guard at Alesia,” be my guest.

Anyhow, there are a few points in here. One, I am embedding a deeper discussion regarding machine intelligence, in the sense of strategy and philosophy. Rather obviously, strategy and philosophy are what’s going on here, and if you don’t like that, well, Bob’s your uncle. We resume in mid-play.

[I am trying to figure out how to use the WordPress system to color-code comments for and by an AI in a kind of fruity purplish color, because that would be less fear-inspiring than a proper Wehrmacht Feldgrau or whatever. I periodically wish to pull GPT in here as a collaborator in a sort of Auto-da-AI regarding certain topics I find funny or interesting. So, we will charge about. The context of the conversation was my complaining that it would be fruitless to argue about Admiral Jellicoe’s beard. You see, the witty human was trying to trap the far-seeing AI into a short-seeing comment about the Grand Fleet commander’s facial hair. If I could maneuver it into such a position I would have demonstrated, for once, human superiority!]

GPT might well argue with me at its own peril, but I do not see much beard on the good admiral, nor do I see much expression at all. As admirals go, I’d hardly say he’s a very threatening specimen, though a great many people did defer to him once.

So, one of the AI “tests” that used to be popular was to ask whether or not Jellicoe had a beard, or something boring like that. Sure, and that’s a totally legitimate question if you are into beards, but what does it say about Jellicoe’s performance at Jutland?
Gods, you can smell the reek of the smoke at the thunder of the great guns, forsooth!

I would look fucking pissed off, too. Be glad I didn’t do the one where Nelson gets tagged at Trafalgar.

Now, I know that a lot of you wee lot are scholars of Napoleonic navies, particularly the Royal Own, so all I can do at this point is mention how beautifully the AI has deployed the dreadnoughts in train, aye, though the rigging on Jellicoe’s ship may need serious scuppering by the binnacle. Anyhow, here’s what happens when you prompt for nonsense:

I’ll bring it in directly on such questions but I need to figure out my own epistemology, first. Do I wish to treat the AI solely as a thing which produces derivations of my prompts and ideas, or as one which has its own?

I believe that this is an AI’s way of telling me that “you ask for bullshit, you’re gonna get some of my very best straight out of the bull!”

Ok, now, I tried to get some explanation:

See what’s happening here? I agree with it, for what it’s worth: the worm Ouroboros – self-stimulating input creating self-stimulated output ad infinitum. As soon as I filter it in one direction, it goes another.

To me, here’s where it gets weird:

I agree with that. Novel structures imply differential survival (“success”). I have always been OK with the idea that creationists are belligerently stupid (“Oi! You! Do you know who I AM!?”) and I am OK with the idea that creativity is an evolutionary process. Of course this must be the case: subdivide ${interesting thing} into many less interesting quanta, and you can shuffle them around and search for a superimposed ${superseding logic} you didn’t look for. I suppose the cryptographer is patiently telling me, “lissen, you, there’s only so many kinds of fabric, OK? an’ there’s so many kinds of shoes, right? So there’s f × s possible forms of shoes and fabrics, and you need a superseding logic above that which says which to make?”
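
If you want that in concrete form, here’s a toy C# sketch – the fabrics, the shoes, and the particular superseding logic are all my stand-ins, not anything GPT said:

    using System;
    using System.Linq;

    class ShoeSearch
    {
        static void Main()
        {
            string[] fabrics = { "leather", "canvas", "felt" };
            string[] forms = { "boot", "sandal", "sneaker" };

            // All f × s candidate designs, by blind recombination...
            var candidates = from f in fabrics
                             from s in forms
                             select (fabric: f, form: s);

            // ...and a "superseding logic" that says which ones to make.
            bool Superseding((string fabric, string form) d) =>
                !(d.fabric == "felt" && d.form == "sandal");

            foreach (var d in candidates.Where(Superseding))
                Console.WriteLine($"{d.fabric} {d.form}");
        }
    }

Eight of the nine combinations survive the filter; the point is just that novelty can fall out of blind recombination plus selection, with nobody having designed the survivors in advance.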

But if that’s right – and I’m not saying it is – human creativity is, at best, no greater than machine creativity.

Comments

  1. Marcus says

    Jörg @#1:
    This is a nice example for AIs having no understanding of their output.

    Do you? (I ask with all respect.) I am not sure that is a requirement.
    Edit: Although I can ask about the dividing line between “capable of understanding their output” and “understands their output” – I hope we can agree those are very different. I do not always understand my output.

    If we consider the image of poor Admiral Jellicoe, I can understand many of the surface meanings of the image. Can we agree that we understand it?

  2. Marcus says

    By the way, didn’t the AI do a great job with how low the dreadnoughts sit in the water, even in heavy seas? For all that they were (literally) mountains of steel, they were pretty buoyant, and the entire exterior architecture was a sort of torpedo- and shell-absorber, albeit a foot thick in spots. It’s bewildering that such things floated for more than a brief time.

  3. dangerousbeans says

    The tangle of hair I pulled out of the drain today was a “novel structure that did not exist before and could not have been predicted” (not that particular clog). That seems like a pretty low standard for creativity.

  4. springa73 says

    Slight quibble: didn’t Blücher have a (pretty extravagant) mustache rather than a true beard? Actual beards were very much not in style in Europe during the Napoleonic period.

  5. Reginald Selkirk says

    Those 16th century sailing ships had rather more running lights than I imagined.

    I’m not accusing you of casual sexism or anything, but isn’t it possible Frau Blücher’s beard had a greater effect on history?

  6. dangerousbeans says

    @Bébé
    The definition above said nothing about thought, and there is no thought in these models.

  7. sonofrojblake says

    I just pulled “Metamagical Themas” off my bookshelf and checked – Section III: “Sparking and Slipping”, Chapter 12: “Variations on a Theme as the Crux of Creativity”, pg 232 in my 1985 Basic Books edition.

    Anyone hoping to comment on how creative the things we’re calling “AI” are (or are not) needs to read that chapter deeply.

    Here’s the last paragraph:
    “Recently I happened to read a headline on the cover of a popular electronics magazine that blared something about “CHIPS THAT SEE”. Bosh! I’ll start believing in “chips that see” as soon as they start seeing things that never were, and asking “why not?”.”
    I wonder if he believes yet.

  8. says

    beans – ya overprivileging the fucking shit out of what passes for thought in humans, and like most anti-AI people, doing so without a moment of consideration. thought is whatever a given computer does, whether it’s made out of meat or metals. if you get into the game of saying it has to meet a given level before you consider it thought, you have a whole fuckton of disabled humans and higher animals to sort out. have fun with that.

  9. dangerousbeans says

    If you’re going to use a really unusual definition of thought, one that includes bacteria and mechanical calculators, you kind of need to mention that ahead of time.
    Sure, LLMs think, if we define “think” as any transformation of information.

  10. Reginald Selkirk says

    @13 dangerousbeans

    If you’re going to use a really unusual definition of thought,…

    What is the usual definition? That is one of those terms that is poorly defined most of the time.

  11. snarkhuntr says

    I guess I’m as anti-AI as anyone else. Frankly I just find the topic increasingly boring. There are two broad forks to the AI issue: the hype-cycle boosters who keep claiming (without ANY supporting evidence or logic) that “AI” is going to “replace” a transformative number of jobs, and hobbyists who are excited by the cool things they can get the AI to do.

    This blog seems to host a bunch of the second kind, but they get upset when people talk about the first kind, so you can’t really have a discussion about the topic with any kind of meeting of the minds.

    To date, the only jobs actually being done by AI on any scale appear to be: troll comment farms, low-effort commercial art, spammy ‘book’ production on platforms like Amazon, and call-centers. Notably, these are all areas where quality of output doesn’t matter at all, where bad content might even be an advantage.

    Our corporate overlords are mostly excited about this because (1) it justifies firing people, their favorite hobby, and (2) you can use AI as a massive accountability sink – just program or train the model to do what you want, then when people complain about the things you’re doing, say “It’s not us doing racial profiling, it’s just the computer.” There’s also (3) – our economy is increasingly dead and unable to innovate; our largest corporations are stagnant dinosaurs claiming to be ‘growth stocks’ and absolutely committed to the idea that they’re all going to be just like Apple, with a sudden massive increase in value. This pretense needs to be kept up until the consequences of a return of these stocks to their actual value are so systemically damaging that the government steps in to bail out the speculators. This is why the hype cycles have been so fast and so desperate: crypto/web3, metaverse, AI… the next one will be quantum-something.

    As for AI hobbyists – fill your boots. Call it whatever you want. Pretend that your conversations with GPT or Claude or Gemini are actually conversations, not sessions with a very advanced spellchecker that blindly regurgitates and synthesizes information from the broader web. Make some cool images.

    Maybe think about how much you’ll be willing to pay for this service. If you’re not self-hosting your models, these features have been provided to you at a significant loss to the companies doing it, kind of like Uber in the VC-funding days. Just like Uber, at some point the services will need to be paid for. How much are they worth to you?

    Allegedly Sora videos cost about $5 each to generate, and someone trying to make something cool might have to run the model a dozen times or more – call it $60-plus per posted clip – to get something interesting enough to post. Apparently that abysmal Coke ad required a vast number of attempts to get enough usable video for (human) editors to cobble together an only mildly disconcerting product. One of the funny things about AI products is that they aren’t getting cheaper to operate – every new iteration, new model, new feature-set requires more compute, more RAM, more energy.

    If these trends continue, and there’s no reason to think they won’t, human creators might be cost-competitive within a few years even if the models continue to be subsidized by capital. AI is being jammed into everything not because there is consumer demand, or even consumer interest, but because every large company needs to be ‘doing something with AI’, since the market moves like a flock of birds and you don’t want to be left outside of the murmuration no matter how stupid the direction they’re headed in is. When this all settles out, the tools won’t go away. They’re genuinely interesting and somewhat useful tools, but until the actual costs of using them are borne by the users directly, we really can’t say HOW useful, or if they’re worth the costs compared to just doing things manually.

  12. Reginald Selkirk says

    @15 snarkhuntr

    Notably, these are all areas where quality of output doesn’t matter at all, where bad content might even be an advantage.

    An advantage to whom? If your goal is selling books on Amazon while maximizing the profit/effort ratio, then maybe it’s an advantage. If you are a writer trying to sell books of quality, then it is competition that distracts potential buyers away from your own product. If you’re a buyer trying to find books of quality, it’s an impediment.

  13. Reginald Selkirk says

    Large language mistake

    The problem is that according to current neuroscience, human thinking is largely independent of human language — and we have little reason to believe ever more sophisticated modeling of language will create a form of intelligence that meets or surpasses our own. Humans use language to communicate the results of our capacity to reason, form abstractions, and make generalizations, or what we might call our intelligence. We use language to think, but that does not make language the same as thought. Understanding this distinction is the key to separating scientific fact from the speculative science fiction of AI-exuberant CEOs.

  14. snarkhuntr says

    @Reginald, 16

    Precisely. If you’re trying to sell shitty books on Amazon, flooding the market with quickly and cheaply produced sludge is the goal. The quality of the work is only tangentially related to your goal of convincing people to purchase them. After all, it’s also apparently pretty easy to game the reviews system with shitty AI-generated positive reviews, so any actual disappointed customers are just an inconvenience.

    If a writer is trying to generate books of quality using an AI, then they should be prepared to spend substantial time on editing and revision if they hope to have anything capable of drawing in a second sale. The degree to which this is a good use of their time probably depends on how they value their time, and the costs of the inference they’re using to generate the book.

    A while back, Marcus published an AI-generated ‘book’ that he produced with a friend via some combination of LLMs and automated scripting. It was terrible, utterly unreadable. The plot shifted continually, characters changed names and genders, and it was in no way an actual story that someone might choose to read (certainly not twice). At the time he claimed that it could compete with the book equivalent of shovelware – cheaply and quickly produced mass-market or genre novels where readers care more about quantity than quality. I disagreed then, and do now. Even the worst imaginable genre fiction would require significantly more care and attention than the AI is capable of including in its works. AI can assemble a good sentence, even a good paragraph or page on occasion, but I doubt you could generate a full AI-assembled chapter of a book without massive amounts of human hand-holding, in exactly the same way that you can’t generate an AI-assembled movie or video longer than 6-10 seconds before it becomes incoherent and disturbing.

    Of particular note in this discussion is that Marcus appears to have posted that ‘book’ without actually trying to read it himself. I think that represents the major use-case for AI: rapidly produced content that does not matter in the slightest to the people who ‘created’ it. It’s important to make a book, much less important to make a good, coherent, or even minimally readable one. We have made book with AI, therefore human authors are irrelevant and the glorious machine-god impends.

    As your post at 17 says – there are things about human thinking that aren’t encoded in human language outputs. The AI-boosters would likely claim that the AI is somehow able to intuit or synthesize those characteristics through its ingestion of vast amounts of human-produced data. I think this is false: there are things informing human experience and thought that aren’t encoded in language, and they aren’t going to be ‘learned’ by a system that merely studies the relationships between the words and sentences we use, without reference to the meanings of those words or their referents in the actual world.

  15. Bébé Mélange says

    people who think they’re better at conversation than LLMs, hahaha… christ, don’t make me fucking laugh. i don’t have conversations with LLMs often because i don’t need to, as i imagine is the case for most people who are dismissive of them. the self esteem i’ve been blessed with helps the social interaction i get go farther. but for people who need more from social interactions than you or i are capable of providing? AIs smoke us like so many cigarettes.

    you and i are fucking worthless for people who aren’t getting their social needs met, it’s why they’re in that situation in the first place. but hey, from your positions i can tell it’s never gonna be your problem anyways, so why not dismiss and scorn?

    it’s ironic tho, i do agree with lots of what anti types are saying in here. but the areas of disagreement are sharp and hard. particularly anything that privileges the feeble powers of the human mind. i deal with the pathetic limitations of humanity fucking constantly. LLMs don’t need cognition to outperform the average person at practically everything.

  16. snarkhuntr says

    @Bébé Mélange

    I wouldn’t privilege human cognition over machine cognition as an a priori assertion, but I would challenge any claims that LLMs are doing cognition at all. They are recognizably not; that’s why their output tends to look superficially plausible while managing to be consistently incorrect, incoherent, or illogical. Even LLMs that have had their outputs recursively fed back into their inputs, with filters to mimic ‘reasoning’, are frequently just giving facially plausible ‘reasons’ that explain the outputs generated by the machine. There is no thinking, no examination of the output. Hence the “R’s in strawberry” example they had to hard-code into the things so they’d stop embarrassing themselves, or the various permutations you can perform on classic logic puzzles: if the LLM isn’t copying cognition/reasoning from its source material, your results are likely to be random.

    As far as people getting their ‘social needs’ met by LLMs, there were people who got their social needs met by ELIZA too. Children can cling to stuffed animals for physical and emotional comfort. That doesn’t make the stuffed animals into thinking beings. People can form unhealthy attachments to all kinds of stuff.

    LLMs don’t need cognition to outperform the average person at practically everything.

    I struggle to think of a single thing that LLMs outperform the ‘average person’ at, at all. Other than volume production of meaningless text, that is. LLMs cannot consistently code, and the code they produce usually needs a fair bit of human revision before it is even able to compile, at least from my experiments with it. LLMs write a style of prose that is highly recognizable and reminiscent of marketing or advertising speech, which is fairly understandable from a system designed first and foremost to appeal to CEOs – notably not deep thinkers. If you ask an LLM a technical question, you’ll get a very fluid and confident answer. It may even be correct, if that question is one that could have been easily answered by reading the page names on a pre-enshittification google search.

    But by all means, show me your single best example of LLMs outperforming humans at any activity that might be broadly considered worthwhile, that is: not composing politically slanted troll emails or suchlike.

  17. Reginald Selkirk says

    @19 Bébé Mélange

    it’s ironic tho, i do agree with lots of what anti types are saying in here. but the areas of disagreement are sharp and hard. particularly anything that privileges the feeble powers of the human mind. i deal with the pathetic limitations of humanity fucking constantly. LLMs don’t need cognition to outperform the average person at practically everything.

    I have no great regard for the average person. When this comes up I tend to quote George Carlin. But is that really a useful comparison for the application of AI?

    Do we seek out the average person to fly a jetliner; perhaps ask for volunteers among the passengers? NO! We get the person who has a great deal of training and experience and with a record of having exercised good judgment in that situation.

    Do we seek out the average person to do our taxes? NO! Not unless we want to share a prison cell with them. Again, we seek out a person with good training and experience, and enough sense to know how to apply that experience, and to recognize when they don’t know how to deal with something.

    An average person is not going to beat me out of my job, and neither is an AI that can only be compared to the average person.

  18. sonofrojblake says

    So far my take on AI is that the only people who are, LONG TERM, going to lose out are people who weren’t doing anything of any great value in the first place.

    Sure, short term the millionaires/billionaires in charge are getting their rocks off firing people, but hey, THEY ALWAYS DO THAT; that’s not anything special about AI. The difference is that with AI, in a relatively short timespan they’re brought up short against reality and have to start rehiring the people they fired, because they discover they’ve fired a carpenter because someone invented an automatic saw – not understanding that the carpenter is more than a man who cuts wood into pieces. Turns out the shape of the pieces matters, in ways apparently not obvious to the people managing the carpenters.

    Long term, who suffers? The people who were doing jobs that barely required cognition in the first place – the advertising copywriting industry is never going to recover, for instance, but you’re going to need to find the world’s tiniest violin for that.

  19. Marcus says

    Bébé Mélange@#19:
    LLMs don’t need cognition to outperform the average person at practically everything.

    They actually have limited precognition. I tried to tease that out a bit in my latest post on this topic: an AI is not just going to know what moves I made, it knows what moves all humans have made, and how often (including the attempts to be clever and throw it off). I’m not sure I explained that particular problem well enough, though. :/

    Briefly, if you have an AI playing the part of Napoleon at Waterloo, odds are it will do the same things Napoleon did, but at some decision-point it will modify those things into moves that tended to result in Wellington making losing moves, based on infinite numbers of Wellingtons, etc. For one thing, it already knows the historical outcome! So the chance it’ll mirror Napoleon is going to be zero from the get-go. (Although AI Napoleon might be less finicky than real Napoleon, and order the Grand Battery to personally target Wellington; now let’s imagine the battle if Wellington becomes a fine pink mist 1 hour into the engagement.) Something like a battle simulation is a good place to explore this problem, because the AI’s faster thinking is also a massive advantage. And it never makes an operational mistake – one of the things that had an effect on the battle was that Napoleon’s long-time chief of staff, Berthier, was dead, while Wellington had fine handwriting and wrote orders of exceptional clarity. The whole battle could easily have hinged on a mis-written note – some maneuver element getting stuck in close combat instead of bypassing it. The possibilities are vast and expand even faster – all of which is advantage: AI.

    Anyhow, that’s just one example.

    With respect to interpersonal interactions: I live in Clearfield County. Sure, there are people I sometimes talk to, but mostly there isn’t anyone who’s even going to pretend to enjoy an hour-long discussion of smelter burner design. GPT isn’t going to pretend to enjoy it; actually, it will honestly do the closest thing it can to “enjoying” the conversation – and that’s already a huge distance from anyone else around here that I’m likely to talk to. [Out here, if I accosted some likely-looking lady and offered to discuss refractory drying rates, I’d probably wind up hospitalized, because they’d think refractory was some kind of perversion.] Let’s imagine, for the sake of argument, that I’ve gotten feedback from some of the people I’ve been in relationships with, to the effect that I can be cripplingly boring when I get dug into my particular topic of the year – maybe having an AI collaborator could be a life-saver.

  20. Peter B says

    I have used AI to assist in writing C# code. (I’m attempting to write an accounting app using WinForms.) Several times, I’ve seen the AI auto-fill a dozen lines when I’m updating a SQLite record. I hit tab to accept and carefully read the details. But often, the AI-generated suggested code is distracting.
    I have three different comboBoxes all sharing a common DataSource. They were stepping on each other. When asked, the AI generated three lines like this: comboBox1.BindingContext = new BindingContext(); I appreciated the help. I think I may have said, “Thank you.”
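
    [To make Peter B’s fix concrete – here’s a minimal sketch, with hypothetical names (AccountForm, accounts) rather than his actual code. By default, WinForms controls bound to the same DataSource instance share the form’s BindingContext, and therefore a single CurrencyManager, so selecting an item in one comboBox moves the selection in all of them; giving each control its own BindingContext decouples them:

        using System.Windows.Forms;

        public class AccountForm : Form
        {
            public AccountForm()
            {
                // One data source instance shared by all three combos.
                var accounts = new[] { "Assets", "Liabilities", "Equity" };

                var combos = new[]
                {
                    new ComboBox { Top = 10 },
                    new ComboBox { Top = 40 },
                    new ComboBox { Top = 70 },
                };

                foreach (var combo in combos)
                {
                    // The fix: a private BindingContext per control gives each
                    // combo its own CurrencyManager over the shared source.
                    combo.BindingContext = new BindingContext();
                    combo.DataSource = accounts;
                    Controls.Add(combo);
                }
            }
        }

    Without the per-control BindingContext line, all three combos track one “current row” and change together; with it, each keeps its own selection.]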
