Twitter’s feed algorithm is harmful


We all know that Twitter is a cesspool of spambots, far-right users, incels and outright Nazis. We also know this has much to do with who gets promoted by the algorithms the platform uses. One of these algorithms is the feed algorithm.

If you don’t know, most social media platforms don’t just show you the posts from the people you follow, in the order they were posted. Instead they fill your feed with suggestions – and they present posts completely out of order. This is nominally done to increase engagement, but in Twitter’s case it is also used to push specific transphobic, misogynist and racist viewpoints.

On Twitter you can choose between algorithmic suggestions and a chronological timeline of the posts by the accounts you follow. If you choose the chronological feed, it will revert back to the algorithmic feed after a while (or at least it did when I was still active on Twitter).

An interesting study in Nature by Germain Gauthier, Roland Hodler, Philine Widmer and Ekaterina Zhuravskaya shows the danger of Twitter’s algorithmic feed:

The political effects of X’s feed algorithm

Feed algorithms are widely suspected to influence political attitudes. However, previous evidence from switching off the algorithm on Meta platforms found no political effects. Here we present results from a 2023 field experiment on Elon Musk’s platform X shedding light on this puzzle. We assigned active US-based users randomly to either an algorithmic or a chronological feed for 7 weeks, measuring political attitudes and online behaviour. Switching from a chronological to an algorithmic feed increased engagement and shifted political opinion towards more conservative positions, particularly regarding policy priorities, perceptions of criminal investigations into Donald Trump and views on the war in Ukraine. In contrast, switching from the algorithmic to the chronological feed had no comparable effects. Neither switching the algorithm on nor switching it off significantly affected affective polarization or self-reported partisanship. To investigate the mechanism, we analysed users’ feed content and behaviour. We found that the algorithm promotes conservative content and demotes posts by traditional media. Exposure to algorithmic content leads users to follow conservative political activist accounts, which they continue to follow even after switching off the algorithm, helping explain the asymmetry in effects. These results suggest that initial exposure to X’s algorithm has persistent effects on users’ current political attitudes and account-following behaviour, even in the absence of a detectable effect on partisanship.

Twitter does not disclose how it ranks suggestions, so people have no way of knowing that using the algorithmic feed will expose them to only one biased, and often highly incorrect, worldview.

This is something that both the EU and the US need to address legally. No platform should push one specific viewpoint over another, and definitely not in such a hidden way. Yes, a site’s user base has its own biases, which will probably be reflected in the algorithmic feed, but the bias should not be pushed by the website itself.
