New on OnlySky: AI and the post-truth era


I have a new column this week on OnlySky. It’s about whether AI is making it impossible for voters to trust anything they see or hear.

Zohran Mamdani, a socialist candidate for mayor of NYC, made waves when a Republican opponent accused him of using deepfake technology in his ads to pretend he was fluent in Spanish. It sounds too ridiculous to credit, but it’s just the cresting wave of a problem that’s only going to get bigger in coming years. What happens when anyone can create a perfect audio or video clone of anyone else on demand?

It’s not only that unethical politicians will use deepfakes to frame their opponents for things they didn’t do, although that tactic has already been tried. Just as troubling is the possibility that politicians who genuinely committed misdeeds will try to evade accountability by insisting their opponents are making deepfakes of them!

Read the excerpt below, then click through to see the full piece. This column is free to read, but paid members of OnlySky get some extra perks, like a subscriber-only newsletter:

This fits with what we know about human psychology. People with an ideological commitment excel at coming up with reasons to reject evidence that challenges their preconceptions. Young-earth creationists say that dinosaur bones were planted by Satan to test believers’ faith. Conspiracy theorists say that the omnipotent conspiracy plants false flags to lead the public astray. Even scientists, when defending a cherished hypothesis, can argue that contrary evidence is misinterpreted or won’t be replicated.

This is an extension of that trend into politics. The political arena has always been a domain of lies and exaggerations, but we may soon see untruth proliferating like never before. AI gives voters from across the political spectrum a ready-made excuse to wave away anything that casts doubt on their candidate.

Continue reading on OnlySky…

Comments

  1. Snowberry says

    “A lie can travel halfway around the world before the truth can put its boots on” has been attributed to many different people, Mark Twain most commonly… though it’s not clear if most of those people ever said anything like that. (Winston Churchill almost certainly didn’t.) The earliest verifiable version (albeit stated less succinctly) was by Jonathan “A Modest Proposal / Gulliver’s Travels” Swift in 1710, but even that was in service of an examination of the concept in general, implying that the idea itself is older still.

    Certainly, the speed at which BS can be generated and spread has been slowly increasing ever since the invention of the printing press, and AI may be turbocharging it, but I’m not sure how much difference that actually makes. There is only so much BS that any one person can realistically be exposed to. I may be wrong, but it seems like we’re already oversaturated, to the point where adding even more just results in fewer people sharing the same unrealities (for lack of a better description), making the BS-infected people increasingly disconnected from everyone, but especially from each other. I’m not sure that’s even all that useful to malign actors; simply adding more to the chaos and confusion is not even remotely a guarantee that a significant number of people will latch on to your untruth in particular.

    If there’s any difference, it’s that video can no longer be trusted. But then again, video didn’t exist for most of history and wasn’t even a major form of evidence until fairly recently. And the exchange of “we all saw it on live TV” with “Lies! Lies! False memory! Fake video!” is not even remotely new; people believed live broadcasts were faked long before the technology to fake them existed.

    Also, long before the current AI boom, I had a well-developed sci-fi idea in my head (never actually written down) involving a future where some types of actions and contracts should never be considered ethical unless there were near-perfect, virtually unfakeable, virtually unalterable records verifying consent and, in some cases, that proper procedures were followed… which led to the development of “permacams” in the late 21st century. Because a much bigger potential long-term problem, which I am surprised nobody seems to be talking about yet, is the manufacturing of fake consent.
