The problem of finding ways to combat bad speech


At a time when we are flooded with vile rhetoric from all over, especially on social media, it becomes difficult to know how to respond. The easy availability of AI engines that can create realistic but fake text, audio, and video content has enabled the scope of such hate speech to explode. There have been calls for the social media platforms to more closely monitor the content of their sites and prevent such abuses, but since the sites want people to spend time there, they are reluctant to take more than the mildest of steps.

The platforms Meta and X/Twitter are the worst offenders, but even relatively staid ones like Substack have been roiled by controversy.

In January 2022, the Center for Countering Digital Hate accused Substack of allowing content that could be dangerous to public health. The Center estimated that the company earned $2.5 million per year from the top five anti-vaccine authors alone. The three founders responded in a blog post affirming their commitment to minimal censorship.

Substack faced further criticism in November 2023 for allowing its platform to be used by white nationalists, Nazis, and antisemites. In an open letter, more than 100 Substack creators threatened to leave the platform and implored Substack’s leadership to stop providing a platform for political views with which they disagree. In response, Substack co-founder Hamish McKenzie said the company would continue to allow the publication of extremist views because attempting to censor them would make the problem worse. Creators like Casey Newton, Molly White, and Ryan Broderick left the platform as a result.

The argument of free speech absolutists who oppose any attempts to censor content is frequently stated as “The best response to bad speech is more speech”. In other words, the way to combat speech that one abhors is to speak up against it and, in the free marketplace of ideas, the better speech should ultimately win.

There are of course some obvious problems with this facile argument. One is that the marketplace of ideas and speech may theoretically be free and open to all, but in practice it is heavily tilted towards those who have money and other resources. It has always been thus. When it came to print, radio, and TV, there were steep startup costs to creating a platform, which meant that mainly the wealthy could create sources and thus control the range of information disseminated. With the internet, that entry cost dropped to almost zero, enabling entities like this blog to enter the global marketplace of ideas easily.

But it is one thing to have the ability to say something; it is quite another to have one’s views heard widely, and it is here that the playing field gets heavily tilted again. The large platforms, whether print, radio, TV, or internet social media sites, have the ability to decide which views get amplified and which get buried in the noise. The algorithms that drive the process seek what they call ‘engagement’, which means having readers spend as much time on their sites as they can be coaxed to do, so that the sites can rake in advertising revenue. Hence we are fed information that feeds our prejudices and inflames our passions, often by means of falsehoods. So while it is all well and good to say that we should combat this pernicious trend by giving out more accurate information, the fact is that it will be drowned in a firehose of falsity.

But an even more serious problem with the “The best response to bad speech is more speech” mantra is that, while it may be a plausible argument in the world of ideas or political arguments, for some speech there is really no better speech available. Take, for example, the recent growth of the ugly phenomenon known as ‘nudification’ that I posted about recently, in which people have taken advantage of Musk’s Grok AI engine and the X/Twitter platform to create and widely disseminate doctored sexualized images, mostly of women and children.

What possible form can the ‘more speech’ or ‘better speech’ take that can combat such things? Disseminating images of cute animals or undoctored images of real people? Such images already exist. One can condemn the bad actors who create such false images, but that is about it. People like Musk and Zuckerberg can simply ignore the vile things they have enabled, and which enrich them, while indulging in high-minded rhetoric about upholding free speech. It is only the threat of governmental action to suppress them that gets their attention, because that affects their profits.

It is undoubtedly dangerous for governments to use their power to stifle some speech. The slippery slope argument can be invoked to say that opponents of free speech almost always start with speech that is manifestly offensive in order to breach the free speech walls, so that governments can then attempt to suppress political speech that threatens the government and the powerful.

While there is some merit to that argument, I cannot see that the ‘anything goes’ argument when it comes to speech is a better option. When one draws a line in the sand and says that some speech should be curtailed, there will always be arguments about where that line should be drawn, because there will always be people who push the limits. We cannot avoid that debate and will have to deal with it on a case-by-case basis.

Comments

  1. Pierce R. Butler says

    Even more perniciously, the Supreme Court’s Citizens United decision to frame campaign donations (read: political bribes) as “free speech” lets billionaires and billion-dollar corporations drown all the rest of us out.

    We … will have to deal with it on a case-by-case basis.

    I think we’ll have to come up with systematic principles -- but so far I have no formulations to offer…

  2. garnetstar says

    Does the free speech exception known as “shouting fire in a crowded theater” mean “potentially causing physical harm to people”? If so, a lot of the vile internet speech would fall under that.

    However, if it does, that would take years of wrangling-out in the courts and establishing some kind of precedents and legal standards and tests, etc., etc. And, as Mano says, a slippery slope. And then, would it be illegal for the individual who posted, with the police supposed to arrest that person, or would the poster only be liable in a civil suit by the people potentially harmed, or what? This seems quite impractical. But it is true that this openly-say-the-most-vile-possible-things attitude is very destructive to the greater society.

    The only way, in the old days of print media and the like, that seems to have worked against this was to hold the company (newspaper, broadcast channel, etc.) liable as well as the individual. The big tech companies say, and with some truth, that that would destroy social media altogether. Or, at least, in its present form (and the problem with that is…? For anything outside of big tech’s profits? Never mind, I just answered the question.)

    I’m not sure why, when the internet started, the law of the platform also being liable wasn’t applied, as it was universal. I suppose that everything was just too new. Well, it isn’t now; how the internet operates is understood. Perhaps this law ought to be thought of seriously, if big tech should ever allow it (ha!)
