The AI hype machine might be in trouble

David Gerard brings up an interesting association: the crypto grifters, as their scam begins to disintegrate, have jumped ship to become AI grifters.

You’ll be delighted to hear that blockchain is out and AI is in:

It’s not clear if the VCs actually buy their own pitch for ChatGPT’s spicy autocomplete as the harbinger of the robot apocalypse. Though if you replaced VC Twitter with ChatGPT, you would see a significant increase in quality.

Huh. Interesting. I never trusted crypto, because everyone behind it was so slimy, but now they’re going to slime the AI industry.

Also interesting, though, is who isn’t falling for it. Apple had a recent shindig in which they announced all the cool shiny new toys for the next year, and they are actively incorporating machine learning into them, but they are definitely not calling it AI.

If you watched Apple’s WWDC keynote, you might have noticed the absence of the term “AI”. This is in complete contrast to what happened recently at events of other Big Tech companies, such as Google I/O.

It turns out that there wasn’t even a single mention of the term “AI”. No, not even once.

The technology was referred to, of course, but always in the form of “machine learning” — a more sedate and technically accurate description.

Apple took a different route: instead of highlighting AI as an omnipotent force, they pointed to the features they’ve developed using the technology. Here’s a list of the ML/AI features that Apple unveiled:

  • Improved Autocorrect on iOS 17: Apple introduced an enhanced autocorrect feature, powered by a transformer language model. This on-device machine learning model improves autocorrection and sentence completion as users type.
  • Personalized Volume Feature for AirPods: Apple announced this feature that uses machine learning to adapt to environmental conditions and user listening preferences.
  • Enhanced Smart Stack on watchOS: Apple upgraded its Smart Stack feature to use machine learning to display relevant information to users.
  • Journal App: Apple unveiled this new app that employs on-device machine learning to intelligently curate prompts for users.
  • 3D Avatars for Video Calls on Vision Pro: Apple showcased advanced ML techniques for generating 3D avatars for video calls on the newly launched Vision Pro.
  • Transformer-Based Speech Recognition: Apple announced a new transformer-based speech recognition model that improves dictation accuracy using the Neural Engine.
  • Apple M2 Ultra Chip: Apple unveiled this chip with a 32-core Neural Engine, which is capable of performing 31.6 trillion operations per second and supports up to 192GB of unified memory. This chip can train large transformer models, demonstrating a significant leap in AI applications.

Unlike its rivals, who are building bigger models with server farms, supercomputers, and terabytes of data, Apple wants AI models on its devices. On-device AI bypasses a lot of the data privacy issues that cloud-based AI faces. When the model runs on the phone itself, Apple needs to collect less data to run it.
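To make the privacy argument concrete, here is a minimal sketch of my own, not Apple’s code; the `local_model` and `api_url` names are hypothetical stand-ins. The on-device path never transmits the user’s text anywhere, while the cloud path can’t work without shipping it to a server.

```python
# Illustration of the on-device privacy argument. Not Apple's code;
# `local_model` and `api_url` are hypothetical stand-ins.
import json
import urllib.request

def suggest_on_device(text: str, local_model) -> str:
    # The model weights live on the phone; the user's text never leaves it.
    return local_model.predict(text)

def suggest_in_cloud(text: str, api_url: str) -> str:
    # Cloud inference only works by transmitting the raw text to a
    # server, where it can be logged or retained.
    req = urllib.request.Request(
        api_url,
        data=json.dumps({"text": text}).encode(),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)["suggestion"]
```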

It also ties in closely with Apple’s control of its hardware stack, down to its own silicon chips. Apple packs new AI circuits and GPUs into its chips every year, and its control of the overall architecture allows it to adapt to changes and new techniques.

Say what you will about Apple as a company, but one thing they know how to do is make money. Lots of money. They also have first-rate engineers. Apparently they are smart enough not to fall for the hype.

Only Apple could pull this off

Apple unveiled a shiny new gadget today: Apple Vision Pro.

This looks really good! I want one. But as the summary of the glorious widget went on, it was clear I was not in their market. It’s a complete wearable computer with a whole new interface — it’s everything Microsoft and all those cyberpunk authors dreamed of, blending the real world (passed through from cameras, so you still see your surroundings) with virtual reality. As I listened to the WWDC presentation, though, every glowing adjective and every new tech toy built into it made me cringe. The price was climbing by the second. Then at the end, they broke the news: $3500. Nope, not for me. Though it’s about what we ought to expect from something so shiny and new, crammed with every bit of advanced technology they could fit into an extremely small space.

That price is not going to stop Apple, I’m sure. This is going to be the new must-have technological marvel that every techbro and marketingbro and rich person with ludicrous amounts of surplus wealth is going to want. Apple is going to clean up, I predict.

The good little robot

Look at that thing. It’s beautiful.

That’s Ingenuity, the drone that was sent to Mars on the Perseverance mission. It was intended to be a proof-of-concept test, expected to fly for only a couple of excursions, and then fail under the hellish Martian conditions. Instead, it has survived for two years.

Ingenuity defied the odds the day it first lifted off from Martian soil. The four-pound aircraft stands about 19 inches tall and is little more than a box of avionics with four spindly legs on one end and two rotor blades and a solar panel on the other. But it performed the first powered flight by an aircraft on another planet — what NASA billed a “Wright brothers moment” — after arriving on Mars in April 2021.

It’s made over 50 flights. Apparently it’s a bit wonky: it loses radio contact with the rover when it flies out of line of sight, and the cold sometimes shuts it down, but when it warms up, or the rover drives closer, it gets right back up again.

NASA still has good engineering. It might be because of all the redundancy they build into every gadget — this little drone cost $80 million! — but I have a hypothesis that the real secret to its success is what they left out. There’s no narcissistic and incompetent billionaire attached to the project, just a lot of engineers who take pride in their work.

The problem isn’t artificial intelligence, it’s natural stupidity

A Texas A&M professor flunked all of his students because ChatGPT told him to.

Dr. Jared Mumm, a campus rodeo instructor who also teaches agricultural classes,

He legitimately wrote a PhD thesis on pig farming, but really — a “rodeo instructor”? I guess that’s like the coaches we have working in athletic programs at non-Ag colleges.

sent an email on Monday to a group of students informing them that he had submitted grades for their last three essay assignments of the semester. Everyone would be receiving an “X” in the course, Mumm explained, because he had used “Chat GTP” (the OpenAI chatbot is actually called “ChatGPT”) to test whether they’d used the software to write the papers — and the bot claimed to have authored every single one.

“I copy and paste your responses in [ChatGPT] and [it] will tell me if the program generated the content,” he wrote, saying he had tested each paper twice. He offered the class a makeup assignment to avoid the failing grade — which could otherwise, in theory, threaten their graduation status.

Wow. He doesn’t know what he’s doing at all. ChatGPT is an artificial expert at confabulation — it will assemble a plausible-sounding mess of words that looks like the other collections of words in its training data, and that’s about it. It’s not TurnItIn, a service professors have been using for at least a decade that compares submitted text to other texts in its database and reports similarities. ChatGPT will happily make stuff up. You can’t use it the way he thinks.
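TurnItIn’s actual algorithms are proprietary, but here’s a minimal sketch of the kind of check a similarity service performs: counting shared word n-grams between a submission and known sources. Notice that this is a reproducible measurement, nothing like asking a chatbot for its opinion.

```python
# Minimal sketch of similarity detection in the TurnItIn style (the
# real service is proprietary; this only shows the principle): what
# fraction of a submission's word 5-grams appear in a known source?

def ngrams(text: str, n: int = 5) -> set[tuple[str, ...]]:
    words = text.lower().split()
    return {tuple(words[i:i + n]) for i in range(len(words) - n + 1)}

def overlap_score(submission: str, source: str) -> float:
    """Fraction of the submission's 5-grams found in the source."""
    sub_grams = ngrams(submission)
    if not sub_grams:
        return 0.0
    return len(sub_grams & ngrams(source)) / len(sub_grams)

paper = "the quick brown fox jumps over the lazy dog near the river bank today"
source = "yesterday the quick brown fox jumps over the lazy dog in the meadow"
print(f"overlap: {overlap_score(paper, source):.0%}")  # 50%
```

A high score means verbatim overlap with a known text, a checkable fact. ChatGPT’s “yes, I wrote that” is neither checkable nor a fact.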

Mumm was unwarrantedly aggressive in his ignorance.

Students claim they supplied him with proof they hadn’t used ChatGPT — exonerating timestamps on the Google Documents they used to complete the homework — but that he initially ignored this, commenting in the school’s grading software system, “I don’t grade AI bullshit.” (Mumm did not return Rolling Stone‘s request for comment.)

Unfortunately for him, Mumm was cursed with smarter spectators to his AI bullshit. One of them ran Mumm’s PhD thesis through ChatGPT in the same inappropriate, invalid way.

In an amusing wrinkle, Mumm’s claims appear to be undercut by a simple experiment using ChatGPT. On Tuesday, redditor Delicious_Village112 found an abstract of Mumm’s doctoral dissertation on pig farming and submitted a section of that paper to the bot, asking if it might have written the paragraph. “Yes, the passage you shared could indeed have been generated by a language model like ChatGPT, given the right prompt,” the program answered. “The text contains several characteristics that are consistent with AI-generated content.” At the request of other redditors, Delicious_Village112 also submitted Mumm’s email to students about their presumed AI deception, asking the same question. “Yes, I wrote the content you’ve shared,” ChatGPT replied. Yet the bot also clarified: “If someone used my abilities to help draft an email, I wouldn’t have a record of it.”

On the one hand, I am relieved to see that ChatGPT can’t replace me. On the other hand, there is an example of someone who thinks it can, to disastrous effect. Maybe it could at least replace the Jared Mumms of the world, except I bet it sucks at bronco bustin’ and lassoing calves.

The triumph of form over content

That’s all ChatGPT is. Emily Bender explains.

When you read the output of ChatGPT, it’s important to remember that despite its apparent fluency and despite its ability to create confident sounding strings that are on topic and seem like answers to your questions, it’s only manipulating linguistic form. It’s not understanding what you asked nor what it’s answering, let alone “reasoning” from your question + its “knowledge” to come up with the answer. The only knowledge it has is knowledge of distribution of linguistic form.

It doesn’t matter how “intelligent” it is — it can’t get to meaning if all it has access to is form. But also: it’s not “intelligent”. Our only evidence for its “intelligence” is the apparent coherence of its output. But we’re the ones doing all the meaning making there, as we make sense of it.
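Bender’s point is easy to demonstrate with a toy. The following bigram generator (a vastly cruder cousin of ChatGPT, trained here on a one-line corpus) knows nothing except which words tend to follow which, yet it still emits superficially fluent strings:

```python
# Toy demonstration of "distribution of linguistic form": a bigram
# model that generates text knowing only which word tends to follow
# which. No meaning, no knowledge, no intent, just form.

import random
from collections import defaultdict

def train_bigrams(corpus: str) -> dict[str, list[str]]:
    words = corpus.split()
    followers = defaultdict(list)
    for prev, nxt in zip(words, words[1:]):
        followers[prev].append(nxt)
    return followers

def generate(followers: dict[str, list[str]], start: str, length: int = 20) -> str:
    word, output = start, [start]
    for _ in range(length):
        if word not in followers:
            break
        word = random.choice(followers[word])  # sample by observed frequency
        output.append(word)
    return " ".join(output)

model = train_bigrams("the cat sat on the mat and the dog sat on the rug")
print(generate(model, "the"))  # fluent-looking word salad, zero understanding
```

ChatGPT is this, scaled up by many orders of magnitude and trained on much of the internet. The fluency improves; the relationship to meaning does not.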

I think we know this from how we learn language ourselves. Babies don’t lie there with their eyes closed processing sounds without context — they are associating and integrating sounds with a complex environment, and also with internal states that are responsive to external cues. Clearly what we need to do is embed ChatGPT in a device that gets hungry and craps itself and needs constant attention from a human.

Oh no…someone, somewhere is about to wrap a diaper around a server.

Another reason I won’t get Neuralink

I was wondering what Neuralink is good for — it must be for treating some serious medical condition, since it involves serious surgery. But no! It’s just techdude fantasies.

Neuralink’s BCI will require patients to undergo invasive brain surgery. Its system centers around the Link, a small circular implant that processes and translates neural signals. The Link is connected to a series of thin, flexible threads inserted directly into the brain tissue where they detect neural signals.

Patients with Neuralink devices will learn to control it using the Neuralink app. Patients will then be able to control external mice and keyboards through a Bluetooth connection, according to the company’s website.

An app. Bluetooth. Controlling computer mice.
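Neuralink hasn’t published its decoding pipeline, so to be clear, this is just a generic sketch of the standard academic BCI recipe, with made-up numbers: estimate per-electrode firing rates, then map them through a fitted linear decoder to cursor velocity.

```python
# Hypothetical sketch of a standard BCI decoding recipe (Neuralink's
# actual pipeline is unpublished): map per-electrode firing rates to
# 2D cursor velocity through a linear decoder. All values invented.

import numpy as np

rng = np.random.default_rng(0)

n_electrodes = 64
# W would be fitted during a calibration session while the patient
# imagines moving a cursor; here it's random for illustration.
W = rng.normal(size=(2, n_electrodes))             # decoder weights
baseline = rng.uniform(5, 15, size=n_electrodes)   # resting firing rates (Hz)

def decode_velocity(firing_rates: np.ndarray) -> np.ndarray:
    """Linear decoder: velocity = W @ (rates - baseline)."""
    return W @ (firing_rates - baseline)

rates = baseline + rng.normal(scale=2.0, size=n_electrodes)  # one time bin
vx, vy = decode_velocity(rates)
print(f"cursor velocity: ({vx:.2f}, {vy:.2f})")
# Plenty of links in this chain (electrodes, implant, Bluetooth, app)
# where an intermittent fault can creep in.
```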

It absolutely did not help that I am currently using a cheap wired optical mouse with an intermittent fault. Every once in a while, but not often enough to motivate me to get a replacement, the LED cuts out and the buttons stop responding. The fix is to jiggle the cable, or unplug and re-insert the USB plug. It’s a bit annoying; I really should just get a new mouse, they’re only about $7.

But now imagine that your Neuralink device has a less than perfect connection: scar tissue builds up, an electrode gets jostled out of position. Every once in a while, the app drops the Bluetooth connection. The artificial limb you’re controlling becomes unresponsive, or even worse, you miss a kill shot in Call of Duty (worse, because I’ve seen how gamers can explode in fury at the most trivial stuff). There’s no easy cable-jiggling you can do; you’re going in for major brain surgery.

Or, more likely, you’ll make do, as I am with my mouse…you let it slide, 99% function is good enough. The only thing is, your brain doesn’t like wires stuck in it — there will be a gradual accumulation of scar tissue and localized damage, the performance of the device will inevitably deteriorate, and Neuralink doesn’t have a good replacement strategy.

“Right to repair” acquires a new urgency when the gadget is embedded in your brain. Musk doesn’t seem the type to let anyone else service his profitable toy, and is probably anticipating making lots of money from obsolescence.

There’d have to be something wrong with your brain to sign up for a Neuralink trial

Has anybody read The Terminal Man by Michael Crichton? It’s about a man who gets a brain implant to correct his epilepsy, but then it starts triggering increasingly violent crimes. I strongly dislike everything Crichton ever wrote — he was a Luddite who didn’t know what he was talking about, while the press and the public fawned over his bad science — but for the first time, I feel like he might have been onto something.

Reportedly, Elon Musk has gotten FDA approval to stick chronic electrodes into people’s brains. Why you’d want anything associated with that incompetent boob permanently wired into your brain is a mystery.

The FDA acknowledged in a statement that the agency cleared Neuralink to use its brain implant and surgical robot for trials on patients but declined to provide more details.

Neuralink and Musk did not respond to Reuters requests for comment.

The story has triggered my internal Michael Crichton and now I’m wondering what horror will result from this decision.

  • Patients will start murdering people à la The Terminal Man (or Musk’s self-driving software) as Neuralink misfires.
  • Neuralink will catch fire and burn down to the patient’s basicranium.
  • Neuralink will explode when it’s switched on, cratering the patient’s head.
  • Neuralink will attract Nazis who will fill the patient’s brain with bad ideas.
  • Neuralink will do nothing at all, but it will distract the patient from investing in better treatments.

My imagination fails. You’ll have to think of all the likely horrible consequences of getting a Neuralink implant.

I agree with Blake Stacey

This is also what I think of ChatGPT.

I confess myself a bit baffled by people who act like “how to interact with ChatGPT” is a useful classroom skill. It’s not a word processor or a spreadsheet; it doesn’t have documented, well-defined, reproducible behaviors. No, it’s not remotely analogous to a calculator. Calculators are built to be *right*, not to sound convincing. It’s a bullshit fountain. Stop acting like you’re a waterbender making emotive shapes by expressing your will in the medium of liquid bullshit. The lesson one needs about a bullshit fountain is *not to swim in it*.

“Oh, but it’s a source of inspiration!”

So, you’ve never been to a writers’ workshop, spent 30 minutes with the staff on the school literary magazine, seen the original “You’re the man now, dog!” scene, or had any other exposure to the thousand and one gimmicks invented over the centuries to get people to put one word after another.

“It provides examples for teaching the art of critique!”

Why not teach with examples, just hear me out here, by actual humans?

“Students can learn to write by rewriting the output!”

Am I the only one who finds passing off an edit of an unattributable mishmash as one’s own work to be, well, flagrantly unethical?

“You’re just yelling at a cloud! What’s next, calling for us to reject modernity and embrace tradition?”

I’d rather we built our future using the best parts of our present rather than the worst.

I’m going to call it a bullshit fountain from now on.

Highways are already scary, self-driving cars won’t help

An amusing anecdote: an engineer was out with the family of a man she was dating when the father tried to turn on the full self-driving option of his Tesla, leaving her practically clawing her way out of the car.

But on the way back his dad started asking me “you work on self driving cars, yeah?” (I do, I’m a systems engineer and have job hopped between a handful of autonomy companies.)

He started asking me how I liked his Tesla and I joked “just fine as long as you’re the one driving it!” And he asked me what I thought about FSD which he’d just bought. He asked if he should turn it on. I said “not with me in the car” and he then laughed and asked how I was still so scared when I work with this stuff everyday.

I was like “Uhh it’s because I…” But stopped when he pulled over and literally started turning it on. I was like “I’m not kidding, let me out of the car if you’re gonna do this” and my boyfriend’s dad and brother started laughing at me, and my boyfriend still wasn’t saying anything.

His dad was like “It’ll be fine” and I reached over my boyfriend’s little brother and tried the door handle which was locked. I was getting mad, and probably moreso because I was tipsy, and I yelled at him “Let me the fuck out”

She’s a systems engineer who works on these self-driving cars, and she wants nothing to do with it? Does she know something the rest of us don’t?

Apparently, she does. Tesla has been faking demos of its self-driving cars, which I guess shouldn’t be a surprise to anyone following Elon Musk’s hype parade.

A 2016 video that Tesla (TSLA.O) used to promote its self-driving technology was staged to show capabilities like stopping at a red light and accelerating at a green light that the system did not have, according to testimony by a senior engineer.

The video, which remains archived on Tesla’s website, was released in October 2016 and promoted on Twitter by Chief Executive Elon Musk as evidence that “Tesla drives itself.”

But the Model X was not driving itself with technology Tesla had deployed, Ashok Elluswamy, director of Autopilot software at Tesla, said in the transcript of a July deposition taken as evidence in a lawsuit against Tesla for a 2018 fatal crash involving a former Apple (AAPL.O) engineer.

It’s OK, though, because they were trying to show what was possible, rather than what the car could actually do, even if Musk was claiming the car was driving itself.

“The intent of the video was not to accurately portray what was available for customers in 2016. It was to portray what was possible to build into the system,” Elluswamy said, according to a transcript of his testimony seen by Reuters.

Like, the idea of cars driving themselves and bypassing the fallibility of human drivers sounds nice, but it’s clear that the car’s software can be even more stupid and flawed than people. I wouldn’t want to share the road with these things, let alone be in a car controlled by some engineering gadget.

You know what I think would be far more useful? Software that detected when the driver was significantly impaired. You’re weaving all over the road, or you’re exceeding the speed limit, or it senses that you’re nodding off, and it fires off alarms to let you know you’re not safe, and if you exceed a certain frequency of warnings, it transmits alerts to the police. That would be a smart car, making sure that the driving software in the human’s head was operating adequately.
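That logic is simple enough to sketch. Here is a hypothetical monitor, with made-up thresholds and sensor inputs, following exactly the escalation described above:

```python
# Hypothetical driver-impairment monitor following the escalation
# described above. All thresholds, signals, and names are invented
# for illustration.

from dataclasses import dataclass, field

@dataclass
class ImpairmentMonitor:
    lane_weave_limit: float = 0.5   # meters of lateral deviation
    speed_margin: float = 10.0      # km/h over the posted limit
    alert_threshold: int = 3        # warnings before escalating
    warnings: list[str] = field(default_factory=list)

    def check(self, lateral_deviation: float, speed: float,
              speed_limit: float, eyes_closed_ms: float) -> None:
        if abs(lateral_deviation) > self.lane_weave_limit:
            self.warn("weaving across the lane")
        if speed > speed_limit + self.speed_margin:
            self.warn("well over the speed limit")
        if eyes_closed_ms > 1500:
            self.warn("driver appears to be nodding off")

    def warn(self, reason: str) -> None:
        self.warnings.append(reason)
        print(f"ALARM: {reason}")
        if len(self.warnings) >= self.alert_threshold:
            self.escalate()

    def escalate(self) -> None:
        # In the proposal above, this is where an alert would go out
        # to the police.
        print("Escalating: too many warnings this trip.")

monitor = ImpairmentMonitor()
monitor.check(lateral_deviation=0.8, speed=82, speed_limit=60, eyes_closed_ms=2000)
```

Note the design choice: the car never takes the wheel, it just checks whether the driver should still be holding it.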

Knowing humans, though, there’d be a huge aftermarket in mechanics ripping out the safety measures.

It’s not a difficult choice at all

How’s it going, Mastodon?

Twitter rival Mastodon has rejected more than five investment offers from Silicon Valley venture capital firms in recent months, as its founder pledged to protect the fast-growing social media platform’s non-profit status.

Mastodon, an open-source microblogging site founded in 2016 by German software developer Eugen Rochko, has seen a surge in users since Elon Musk bought Twitter for $44 billion in October amid concerns over the billionaire’s running of the social media platform.

Rochko told the Financial Times he had received offers from more than five US-based investors to invest “hundreds of thousands of dollars” in backing the product, following its fast growth.

But he said the platform’s non-profit status was “untouchable,” adding that Mastodon’s independence and the choice of moderation styles across its servers were part of its attraction.

“Mastodon will not turn into everything you hate about Twitter,” said Rochko. “The fact that it can be sold to a controversial billionaire, the fact that it can be shut down, go bankrupt and so on. It’s the difference in paradigms [between the platforms].”

Meanwhile, on Twitter: