It’s going to get ugly on the AI front


Normally, I’d say that Sam Altman deserves any pushback he gets. But the AI hatred seems to be getting a little too intense.

On the morning of Friday, April 10th, a 20-year-old Texas man named Daniel Alejandro Moreno-Gama was arrested for allegedly throwing a Molotov cocktail at Sam Altman’s mansion on Russian Hill in San Francisco. Less than two days later, police arrested 25-year-old Amanda Tom and 23-year-old Muhamad Tarik Hussein for allegedly firing a gun at the same house from their car before speeding away.

Earlier the same week, and thousands of miles away, an unknown assailant fired 13 shots into the front door of the home of Indianapolis city councilman Ron Gibson, who had just voted to approve a new data center against a groundswell of public outcry. A sign that read “NO DATA CENTERS” was left tucked under the doormat.

I can understand all the AI hatred: data centers are environmental catastrophes, they represent a gross invasion of our privacy, and they don’t seem to contribute much of value to society, but wow, they sure help improve billionaires’ profits. Unfortunately, in addition to a rational opposition, there are also crackpots with bizarre paranoid fantasies.

Little is known about the motives of Tom or Hussein, or the politics of the Indianapolis shooter, but reporters and the online commentariat quickly dredged up Moreno-Gama’s Discord chats and Substack posts. He was a reader of rationalist and AI doomer Eliezer Yudkowsky, who argues, as the title of his latest book puts it, that if Silicon Valley builds a “superintelligent” AI, “everyone dies.”

Yeah, if you’re citing Yudkowsky, you’re a victim of extreme derangement. I guess it’s predictable that if your reaction is to throw Molotov cocktails at people’s houses, you’re probably not building your case on a sound foundation. AI is not superintelligent, or even intelligent at all; it’s a tool that can be used by bad people to do bad things. Unfortunately, it’s also the case that AI proponents have built up a gigantic edifice of hype, pumping up the imagined power of AI to the point that they actively assert it might lead to the end of humanity.

If you take at face value what the AI executives themselves have been saying for the last decade, that an AI powerful enough to make humans go extinct is nascent, then acting with force to stop it would be a rational action. The AI industry and its executives—including Sam Altman—need to own this outcome, not blame it on Yudkowsky, safety researchers, or worried activists who take what they say literally.

That’s fair. The people who have pumped up the hype are reaping what they have sown.

The nonsense promoted by the Less Wrong crowd isn’t the real danger, though. This is the real danger:

Inequality is through the roof. A bona fide tech oligarchy is ascendant, buoyed by the leverage that AI provides. Its data centers, which bring few jobs and hike electricity bills, are enraging communities on the right and the left. Slop is everywhere. AI-generated art and text are undercutting creatives, powered by pirated, non-consensually ingested work. Employers from Amazon to Block to Duolingo to Meta are firing tens of thousands of workers and citing AI as the reason. AI may one day cure cancer, we’re told; great, even if we believe that, who will be able to afford the treatment?

That’s the anger fueling the anti-AI violence. To the handwringing AI industry insiders blaming doomers and poor messaging, ordinary people are saying: Wake up. We have good reason to hate AI and the people who profit from it. And yes, as people get desperate, as young people increasingly feel like AI elites have mortgaged their future, as residents vote to regulate AI or ban local data center projects only to see their will overridden in favor of industry interests, how do you expect them to feel? What do you expect? There is a distinct risk of further escalation.

If I had the opportunity to vote to stop the construction of a local data center, I’d take it, no question. I’m not at the point of throwing Molotov cocktails, though. At the rate this country is falling apart at the hands of the oligarchs, give me a year to come around.

Comments

  1. robro says

    Sadly, Sam Altman is kind of passé these days. The new techno-boogeyman is Anthropic, which was founded by former OpenAI developers. Their Claude AI is now threatening the jobs of developers in the tech industry. The promise is that fewer high-priced geeks and nerds will be needed in the not-too-distant future, when anybody can have Claude Code produce the tool they need.

    If you’re interested in getting some insight into AI in general, and ChatGPT specifically, check out Professor Casey Fiesler on YouTube (here). She’s a PhD who teaches “AI Ethics” at the University of Colorado Boulder. I can’t verify the veracity of what she says, but she’s apparently knowledgeable about the field.

    Her main mantra is “AI isn’t magic.”

    She’s also a lawyer…which is interesting because I personally think there are a lot of legal minefields waiting for AI. Some of them are already emerging, such as genAI producing text that is essentially copied from copyright-protected publications. Tons of lawsuits won’t kill what’s being called “AI” these days, but I suspect they will curtail some of the enthusiasm. Lawyers have a lot of power in tech companies.

  2. Larry says

    wait until electricity and water rates spike even higher because data centers are consuming far more than their share. we may see a real revolution then.
