Ever since Elon Musk’s bid for Twitter was accepted, the social media site has been ablaze with rumours that the company is already making changes to how users and information are handled on the site. Many claim shadow bans have been lifted. Others say previously filtered or constrained feeds have been liberated, and others still that follower numbers have finally been allowed to grow or contract organically.
A lot of this seems far-fetched to me because the deal isn’t even finalised yet.
And yet, I have to admit I too have perceived strange fluctuations in follower statistics since news broke about Elon’s successful takeover. The day after, I seemed to lose about 700 followers. I presumed this was down to those who had threatened to leave the site if Elon took over making good on their word.
The incident did, nonetheless, make me think about my overall Twitter experience in recent months and the reduction in both engagement and Tweet circulation I thought I had detected. The realist in me had put this down to factors like poor quality tweets from me. But thinking about it more made me recognise how patchy my own feed had been recently too. People I am used to seeing pop up all the time like Joe Weisenthal (@thestalwart) have been strangely absent for months. Crappy promotional clickbait, meanwhile, has been everywhere.
Intrigued, I decided to have a go at the “Have I been shadowbanned tests” for both me and my alter-ego parody account @davosdeville. The results were both illuminating and puzzling!
— Izabella Kaminska (@izakaminska) April 27, 2022
It’s important to note I used two separate services to test this. This one and this one. Both returned similar findings, with the only difference being that @davosdeville didn’t have a ghost ban on one of them. I had a friend replicate the test externally to make sure it wasn’t a local server issue too. His results matched mine.
Searches on a number of other former colleagues, journalists and finance commentators came up clean (including @zerohedge). The exception was Joe Weisenthal, who by his own admission was showing up with similar issues. The only things Joe and I have in common are a former colleague and the fact that we both opt to remain unverified.
An hour or so later, however, I was perfectly clean again. (I’m not sure about Joe.)
The @davosdeville account, however, retains a search ban result even at pixel time.
Another important consideration is that when I tested for the search ban in practice, there was no hint of it. To run this test you need only log out of Twitter, pull up an incognito window and run a search. Both accounts worked normally.
This made me conclude the metrics from the shadowban test sites were probably not to be trusted.
I then stumbled across a truly revelatory piece of information that did offer some insight into what was really going on.
As someone helpfully pointed out on Twitter, the company sometimes makes changes to the default on home Timelines, switching them from “latest chronological” to the algo-curated “top Tweets” stream. There is an option to reset the stream to “latest Tweets” but sometimes (as in my case) you don’t notice you have been switched at all.
When you do reset the stream, the trusty chronological one swiftly returns featuring all the old names you may not have realised you were missing from your life.
It seems highly likely to me as a result that Twitter’s experimentations with “curated” timelines are what is giving everyone the sensation that some accounts or themes are being boosted or suppressed.
As the Verge reported on March 10, Twitter first began distributing algorithmically curated timelines in 2016. The company’s penchant for tweaking the defaults on these timelines, however, has remained constant. The latest significant push on that front came around March, 2022, which just happens to coincide with when I first perceived a reduction in Twitter follower engagement.
It seems highly likely to me that my inability to see @thestalwart’s tweets was connected to the algo timeline inadvertently becoming set as a default option on my account.
On the one hand this is reassuring. Nobody is actually being shadow banned. Reduced visibility is simply a function of Twitter’s matching algo not prioritising your tweets in your followers’ feeds for relevance reasons. It’s a mechanism designed to enhance user experience.
On the other hand, being able to sneakily roll out changes to default settings that impact the visibility of those you follow is an immensely powerful tool. The timing can be used to directly influence what information is or isn’t consumed. And the whole thing can be plausibly defended as “experimentation” for “user experience” purposes.
That Twitter’s last great timeline experimentation came just when a major standoff between the West and Russia was taking shape seems relevant in terms of how this capability can be deployed for political perception management purposes.
My recommendation is that if there are accounts you really want to keep an eye on, it’s best to set notifications for them. Returning to a chronological timeline helps immensely too.
Whatever the truth, what I do know from experience is that the concept of shadow banning by media enterprises is not imaginary.
I know because I’ve seen it used by media organisations first hand. When the function was first rolled out by developers at the FT for comment moderation, it became known as “being Bozo-ed”. The tool made some of us feel very uneasy. It felt highly manipulative. I personally feared it had the potential to undermine users’ mental health because it created a false impression that you were being listened to when you weren’t really. This can (I think justifiably) lead to paranoia. It can also lead to a terrible waste of resources on the part of the poster, who is unwittingly dedicating time to writing commentary for no purpose at all. Not very fair.
For me, it seems wrong — especially in a democratic system — to give people the false impression they have a voice when they don’t really. At a minimum it’s dishonest. If you’re going to suppress people’s voices they need to know this is actively the case so that they can do something about it.
Who audits the numbers?
There may be an even bigger issue at play, however. My broader concern right now is that much of the statistical info being shared with users by social media companies could be totally wrong or intentionally misleading. This is important because of how the data are used to generate revenue through sponsorship and advertising.
Who, after all, audits these largely self-generated numbers? Surely it’s not the run-of-the-mill auditors?
Perhaps someone who is more familiar with social media SEC filings might be able to explain this to me?
Either way, I think it’s important for us to understand how trustworthy the data are to avoid system-compromising blowback.