I can’t talk much about my day job on here. It’s nothing exciting or wild, but you know, there are a million boring reasons why that’s a good idea for lots of people. My day job is a kind of social work / bureaucracy type thing, where in a call center environment I help people with issues related to social programs.
This is the most emotionally and intellectually challenging job I’ve ever had, and the more emotional a call gets, the more likely I am to make mistakes on the intellectual side. Gotta be a cool-ass customer. When you’re in a tight situation and you’re talking to a bureaucrat, you want to plead with them to see you as human, so they give you a good deal. That is a mistake. Convey the seriousness of your situation accurately but try not to make the bureaucrat genuinely sad for you, because their job is complicated as balls and they are more likely to make mistakes that fuck you over if they are not thinking clearly.
People make mistakes because of emotional and intellectual challenges like this. Administering laws and policies across multiple programs, federal and local, plus the interactions between them, is a snarl of contradictory shit driven by the conflicting political imperatives of "have bare minimum human decency" and "never give anybody a nickel they didn't break their body in half for." AIs make mistakes because their intelligence is alien and simplistic; their relationship to their output is about math rather than true understanding. Between humans and AIs, who makes more mistakes?
Humans, are you fucking kidding me? The public will benefit by everybody in my position being replaced with AI.
If this is done with wisdom, which hardly seems likely given the Klown Kar Kabinet, it will move in stages. First, people like me will be able to use internally trained AIs to look up the policy and procedural information we need to handle a call. In parallel, an automated phone AI will handle easier things for callers who are willing, getting its training in. When the AI looks good enough and enough people are willing to use the automated system, it should graduate to handling calls while bureaucrats like me take responsibility for all critical inputs: to check hallucinations, to catch fraud from bad actors manipulating silly bots, and to keep a human who can be held to account for bad mistakes. Lastly, if the system can go a few years with an acceptably low rate of failures, you let the AI take over completely and retain only a small core of highly trained humans for support.
The biggest risk to the program, I think, is "AI whisperers" who can talk the bots into accepting con jobs. Another significant risk is the system being programmed with bad values and discriminating against callers on various grounds. Something I didn't know well before I took this job is that some cisgender women sound like men on the phone, especially older black women, and that you cannot reasonably judge age by a person's voice. Some people young enough to be my child sound like seventy-year-olds; some eighty-year-olds sound like me. The pitch and grit in a voice can give you a clue, but you need more than a clue to judge whether a voice on a line belongs to a person of a given demographic. I could see misguided anti-fraud measures causing bots to treat trans women and black women like they're con artists, to say nothing of the risks others have noted in AI being used by insurance companies (see the image in my sidebar, yo). But my employer does not have the profit incentive of a private business, so that last one should, in theory, not happen.
In a country with reasonably funded social services, you'd have a large contingent of standby people waiting to take over during system outages, people who could spend all day studying the ins and outs of the policies and get good enough to handle the edge cases where their involvement is necessary. But the US will never be that country, so the AI will be an excuse to downsize everyone like me into the streets.
I’m the resident pro-AI weirdo on a typically anti-AI leftist blog network. How do I feel about it? If the public is better served by bots, bring on the bots. I’ll have to desperately work three jobs to pay my mortgage, and I’ll feel the pain really badly. But if the public is better served by bots, they deserve whatever helps them better, especially if it’s more economically efficient, allowing those social programs to make the best use of their limited funds.
I will have to eat shit on that deal. I don’t imagine this is going to happen until several years from now, but like I said earlier, who knows with these clowns? Maybe I eat these words around the time I’m eating my hat to survive, but from where I stand now, I say do what’s best for the most people.
Likewise, I find it impossible to believe that we can’t develop a self-driving car that makes fewer mistakes than a human. When that happens, the haters should maybe acquiesce to the good of saving literally tens of thousands of lives a year with AI. Call me wacky.
In a sense, that last one is a strawman. Many AI detractors are situationally OK with the tech and could be moved by common sense and hard numbers on a case-by-case basis. That’s fine. The same way I’m glad there are anti-death-penalty absolutists working to make the world a less nasty place, even though I don’t have very strong feelings about it myself, I’m OK with there being a reflexively anti-AI contingent that looks for all the possible failings of such systems and hectors The Man about it.
But I’m pretty sure there are literally millions of jobs that AI is going to kill over the next decade, and society is going to have to figure out what to do about that. Because if a human can *reasonably* be replaced by an AI, increasing the benefit to society, that should happen. Even if I’m wrong about that? It’s going to happen anyway, and then the problem is worse, because we’ll be unemployed with essential services administered by a pyramid of flaming cybertrucks.
Good luck to everybody, whatever the future brings. Perhaps I’m a little unreasonably optimistic about the possibilities.
–