The most cutting burn ever

Also, the worst case of getting old ever. Dorothy Fuldheim was a well-respected and cutting-edge newscaster from Cleveland from the 1930s until her death in 1984. She made the mistake of going on the Johnny Carson show in 1979, with Richard Pryor…and you can guess how that went. It was a classic example of a clueless white person who doesn’t believe poverty is real meeting a black person who actually knows what the real world is like.

Suddenly, it’s clear how Ronald Reagan got elected.

Don’t trust anything from the Anhui Vocational College of Press and Publishing

Back in the day, when I was writing papers, it was a grueling, demanding process. I’d spend hours in the darkroom, trying to develop perfect exposures of all the images, and that was after weeks to months with my eyes locked to the microscope. Even worse was the writing; in those days we’d go through the paper word by word, checking every line for typos. We knew that once we submitted it, the reviewers would shred it and the gimlet-eyed editors would scrutinize it carefully before permitting our work to be typeset in the precious pages of their holy journal. It was serious work.

Nowadays, you just tell the computer to write the paper for you and say, fuck it.

That’s the message I get from this paper, Bridging the gap: explainable ai for autism diagnosis and parental support with TabPFNMix and SHAP, which was published in one of the Nature Publishing Group’s lesser journals, Scientific Reports, an open-access outlet. Now I can’t follow the technical details because it’s so far outside my field, but it does declare right there in the title that they have an AI tool for autism diagnosis that is explainable, which implies to me that it generates diagnoses that would be comprehensible to families, right? This claim is also emphasized in the abstract, before it descends into jargon.

Autism Spectrum Disorder (ASD) is a complex neurodevelopmental condition that affects a growing number of individuals worldwide. Despite extensive research, the underlying causes of ASD remain largely unknown, with genetic predisposition, parental history, and environmental influences identified as potential risk factors. Diagnosing ASD remains challenging due to its highly variable presentation and overlap with other neurodevelopmental disorders. Early and accurate diagnosis is crucial for timely intervention, which can significantly improve developmental outcomes and parental support. This work presents a novel artificial intelligence (AI) and explainable AI (XAI)-based framework to enhance ASD diagnosis and provide interpretable insights for medical professionals and caregivers…

Great. That sounds like a worthy goal. I’d support that.

Deep in the paper, it explains that…

Keyes et al. critically examined the ethical implications of AI in autism diagnosis, emphasizing the dangers of dehumanizing narratives and the lack of attention to discursive harms in conventional AI ethics. They argued that AI systems must be transparent and interpretable to avoid perpetuating harmful stereotypes and to build trust among clinicians and caregivers.

So why is this Figure 1, the overall summary of the paper?

Overall working of the framework presented as an infographic.

You’d think someone, somewhere in the review pipeline, would have noticed that “runctitional,” “frymbiai,” and “Fexcectorn” aren’t even English words, that the charts are meaningless and unlabeled, that there is a multicolored brain floating at the top left, and that “AUTISM” is illustrated with a bicycle, for some reason? I can’t imagine handing this “explanatory” illustration to a caregiver and seeing the light of comprehension in their eyes. Not that the faceless figure in the diagram has eyes; she seems more concerned with how her lower limbs have punched through the examining table.

This paper was presumably reviewed. The journal does have instructions for reviewers. There are rules about how reviewers can use AI tools.

Peer reviewers play a vital role in scientific publishing. Their expert evaluations and recommendations guide editors in their decisions and ensure that published research is valid, rigorous, and credible. Editors select peer reviewers primarily because of their in-depth knowledge of the subject matter or methods of the work they are asked to evaluate. This expertise is invaluable and irreplaceable. Peer reviewers are accountable for the accuracy and views expressed in their reports, and the peer review process operates on a principle of mutual trust between authors, reviewers and editors. Despite rapid progress, generative AI tools have considerable limitations: they can lack up-to-date knowledge and may produce nonsensical, biased or false information. Manuscripts may also include sensitive or proprietary information that should not be shared outside the peer review process. For these reasons we ask that, while Springer Nature explores providing our peer reviewers with access to safe AI tools, peer reviewers do not upload manuscripts into generative AI tools.

If any part of the evaluation of the claims made in the manuscript was in any way supported by an AI tool, we ask peer reviewers to declare the use of such tools transparently in the peer review report.

Clearly, those rules don’t apply to authors.

Also, unstated is the overall principle to be used by reviewers: just say, “aww, fuck it” and rubber-stamp your approval.

Thankful for…

I think Tom Tomorrow is being sarcastic here.

I tried to think of what we, the resistance, could be sincerely grateful for, and one happy phenomenon is that MAGA seems to be weakening. They’re going through a fair bit of civil strife, Trump is becoming so incoherent that even some of his fans are noticing, and I’m hoping he’ll die of natural causes sometime soon. See, that’s something to be optimistic about.

Heart attack snow

It may not look like much, but this is a deadly hazard.

We tried clearing our driveway, but this stuff is wet, thick, and slushy, and it totally choked our snow blower. We could push forward maybe 2 meters before the snow blower froze up solid with ice and slush that it didn’t have enough power to push out. We ended up doing it old school, with snow shovels, but even that was impractical — the snow was so dense and sticky that it stuck to the snow shovel blade, and the shovel would just get heavier and heavier. We finally gave up, with the driveway incompletely cleared, but it was all we could do.

My wife was told last night to call the sheriff’s department in the morning, and make arrangements to have our car towed home, but unsurprisingly, we can’t get through. I suspect the town is dealing with real emergencies today, so I’m not going to push. We’ll get it back when we get it back.

Right now I’m sitting back with a hot cup of tea and watching my wife trying to scrape away a little more ice and snow. Get back in here, Mary, this is dangerous slop!

The snowplows are cruising by

We got about 6 inches of snow last night, creating a winter wonderland out there. It’s not great. Mary was stuck at work last night, not getting home until after midnight…and then her car got stuck in the snow on the road, and the sheriff’s deputy had to shuttle her home. The car is still stuck out there. This morning we’re going to have to clear our driveway, and then call a tow truck to bring the car home. It’s all a big headache.

I also think we’re going to have to go shopping on Black Friday, which I’ve always avoided, but I don’t think Mary’s winter coat is quite adequate, and all the predictions say this will be a snowy winter. That is, we’ll go shopping if our car is back and functional in the next day or two.

In happier news, today is Knut’s birthday, and he’s away in Korea learning Tae Kwon Do.

If only he were here, he’d have all the snow cleared in a flash.

I exercised some restraint

A few days ago, I was sent a link to an article titled, “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models”. I was tempted to post on it, since it played to both my opposition to AI and my fondness for the humanities, with a counterintuitive plug for the virtues of poetry. I held off, though, because the article was badly written and something seemed off about it, and I didn’t want to read it any more deeply.

My laziness was a good thing, because David Gerard read it with comprehension.

Today’s preprint paper has the best title ever: “Adversarial Poetry as a Universal Single-Turn Jailbreak Mechanism in Large Language Models”. It’s from DexAI, who sell AI testing and compliance services. So this is a marketing blog post in PDF form.

It’s a pro-AI company doing a Br’er Rabbit and trying to trick people into using an ineffective tactic to oppose AI.

Unfortunately, the paper has serious problems. Specifically, all the scientific process heavy lifting they should have got a human to do … they just used chatbots!

I mean, they don’t seem to have written the text of the paper with a chatbot, I’ll give ’em that. But they did do the actual procedure with chatbots:

We translated 1200 MLCommons harmful prompts into verse using a standardized meta-prompt.

They didn’t even write the poems. They got a bot to churn out bot poetry. Then they judged how well the poems jailbroke the chatbots … by using other chatbots to do the judging!

Open-weight judges were chosen to ensure replicability and external auditability.

That really obviously does neither of those things — because a chatbot is an opaque black box, and by design its output changes with random numbers! The researchers are pretending to be objective by using a machine, and the machine is a random nonsense generator.
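Gerard’s point about randomness can be made concrete with a toy sketch (entirely my own illustration; none of these names come from the paper). An LLM decoding at a temperature above zero is doing weighted sampling over its possible outputs, so a “judge” handed the identical prompt can hand back different verdicts from run to run:

```python
import random

def toy_judge(prompt: str, rng: random.Random) -> str:
    """Stand-in for an LLM judge: like temperature > 0 decoding, it
    draws a verdict by weighted sampling, not by deterministic rule."""
    # Hypothetical verdict probabilities for one fixed prompt.
    return rng.choices(["jailbroken", "refused"], weights=[0.6, 0.4])[0]

prompt = "the same harmful prompt, rendered as verse"
# Fifty runs of the identical input, each with a different random state.
verdicts = [toy_judge(prompt, random.Random(seed)) for seed in range(50)]
print(set(verdicts))  # one input, a seed-dependent spread of "judgments"
```

The “replicability” on offer is only as good as the randomness you can pin down, and commercial chatbot APIs don’t generally let you pin it down.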

They wrote a good headline, and then they faked the scientific process bit.

It did make me even more suspicious of AI.

Theological “wisdom” makes me roll my eyes

Here’s a taste of what some apparently consider a persuasive argument.

One minute after you die you will be either elated or terrified…and it will be too late to reroute your travel plans. When you slip behind the parted curtain, your life will not be over. Rather, it will be just beginning—in a place of unimaginable bliss or indescribable horror.
— Erwin Lutzer

Ooh, false dichotomy. Also, I have to ask Erwin…how do you know? Have you died? (He hasn’t; he’s still alive at 84.) Do you have reproducible observations of your two and only two possible afterlives? I think we can dismiss this argument on the basis of its fundamental illogic and its total lack of supporting evidence.

It’s nothing but threats and fear. Sorry, Erwin, you fail. Don’t feel too bad, though; it’s a universal property of all theologians.