Most furries who are vaguely involved in tech have heard of the “Fediverse” or, at minimum, Mastodon, the most popular Fediverse software stack implementation.
There is a lot to be said about Fediverse, and it is a fantastic initiative from a technological point of view. With Fediverse software, you can build communities around a topic of interest and optionally interact with other users in other instances. It lets you own your data (at least from a server operator perspective), which is clearly very different from Twitter. The communities around Fediverse instances are tight-knit, often focused on technology and topics related to minorities or marginalised communities, like LGBT or sex workers.
It is also great because, as a server operator, you can choose who is allowed to sign up for your instance. This makes it impossible to get banned for spurious reasons, unless, of course, you don't actually own and run the instance you signed up for. That caveat is where the downsides of the Fediverse begin to show.
Please note: Most of this post was written before Bluesky had any relevance and certainly much before Bluesky introduced federation.
Running a server is free!
Or so would most Fediverse admins say. For Mastodon, the documentation [archived version] doesn’t make it very clear what resources you need, but the cost is certainly not zero. If you search on the Internet, you will find the answer varies.
Of course, the more users your instance hosts, the more memory, storage, and computing power you will need. You will also need to send e-mail for sign-up verification, which can be extraordinarily complex [archived version] for someone without years of experience running server software. You will need a public IP address, probably an IPv4 one, because of connectivity problems with IPv4-only servers, and you will need to dedicate time and effort to keeping your software up to date with the latest security fixes applied. You will need DDoS protection, and you will most certainly need to dedicate time to moderation. Like it or not, spammers will eventually find their way onto your server, either through open registration or through hacked accounts. You must respond to reports in a timely manner; failure to do so can get you defederated from the rest of the network.
Defederated?
Some servers have aggressive policies. For example, vulpine.club, a notorious furry Mastodon server, had rules that prohibited sharing copyrighted content, child pornography, gore, or nazi content, which is fair enough. But it also explicitly forbade the current President of the United States of America from signing up, a somewhat questionable rule.
It is okay to weed out undesirable people on the server you run and pay for. And I agree nazis and other hatred-oriented ideologies should be purged from the Internet. After all, it is your private property, and nobody is entitled to it. But things get a bit more complicated when you also apply these rules to other servers.
There are multiple ways to block in the Fediverse. From your personal account, you can block either a specific user or an entire domain.
However, instance administrators have additional tools to moderate their instances and how others interact with them. They can remove an instance from the federated timeline (for example, for massive instances with tens of thousands of users), force all remote content from an instance to be hidden behind a content warning (for example, for pornographic instances), make the posts from a given instance invisible until a user manually follows people from that server, and, most extreme of all, apply a full-instance ban that affects every user on the remote instance, for every user on the current instance. The last one is highly problematic, and individual users cannot undo it.
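To make the difference between these levels concrete, here is a minimal sketch of the decision an instance makes when showing a remote post. The names and logic are purely illustrative, not Mastodon's actual code; the point is that "limit" and "suspend" decisions are made once, server-wide, on behalf of every local user.

```python
from dataclasses import dataclass

# Toy model of instance-level moderation severities. Everything here
# (class names, severity strings) is invented for illustration.

@dataclass
class DomainPolicy:
    domain: str
    severity: str = "none"   # "none", "limit", or "suspend"
    force_cw: bool = False   # hide all remote content behind a content warning

def post_visibility(policy: DomainPolicy, viewer_follows_author: bool) -> str:
    """How a post from policy.domain appears to a local user."""
    if policy.severity == "suspend":
        return "hidden"      # full-instance ban: nothing gets through, for anyone
    if policy.severity == "limit" and not viewer_follows_author:
        return "hidden"      # invisible until the user manually follows the author
    if policy.force_cw:
        return "behind_content_warning"
    return "visible"

suspended = DomainPolicy("b.social", severity="suspend")
limited = DomainPolicy("c.social", severity="limit")
print(post_visibility(suspended, viewer_follows_author=True))   # hidden
print(post_visibility(limited, viewer_follows_author=True))     # visible
```

Note that in the suspended case, even a user who follows the author sees nothing: the policy overrides every individual relationship, which is exactly why that tool is so problematic.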
While Fedi has powerful moderation tools, if abused, they can also have a devastating impact and further isolate already marginalised communities.
Let me elaborate:
Hypothetically, let’s say I run an instance called a.social, and there is another instance such as b.social. My rules explicitly forbid nazis and right-wing extremism.
Situation 1: What if the rules aren’t compatible?
It could be that b.social is a tiny instance for someone and their very close friends. For this reason, it is possible b.social does not have comprehensive rules. It could have a simple set of rules such as “Live and let live. Be a good human being”, and that might be enough for b.social’s admin. But what if a.social disagrees? What if a.social’s admin thinks the rules should explicitly rule out nazis? From the outside, it is not immediately clear whether b.social does indeed forbid nazis from signing up. In reality, many admins would err on the side of caution and simply pre-emptively block b.social due to an “incompatible Code of Conduct”. Unfortunately, this is more common than you probably think. Again, this does not make b.social a right-wing haven, but without the consent of either instance’s current and future users, the two instances are now isolated from each other.
Therefore, it is worth questioning whether we should grant an instance administrator this much power, as they face few consequences when they misuse their power.
Situation 2: A user in your instance is a nazi
Accusing a user on a.social of being a nazi or another kind of political extremist could lead to a similar situation developing.
An accusation is not automatically true. Due process and the presumption of innocence are fundamental to Western societies and the modern legal system. The ancient Roman legal doctrine stated, “Ei incumbit probatio qui dicit, non qui negat”: the burden of proof is on the one who declares, not on one who denies. This principle was pivotal in shaping English common law, which robustly supports the maxim that an individual is innocent until proven guilty.
As a society, we have decided that it is preferable to allow some guilty individuals to remain free rather than to incarcerate the innocent. This decision implies that false positives (wrongful convictions) are more harmful than false negatives (unpunished crimes). Although there are always trade-offs, most people, including myself, consider this stance a reasonable compromise.
In the Fediverse, when an instance faces accusations of hate crimes or worse, ideologically aligned instances often react by blocking and targeting the accused without due process. While understandable, given that Fediverse operators maintain autonomy and are not obligated to any external authority, such actions can fragment the network. An overly cautious approach to moderation might also restrict users’ ability to connect with others, potentially isolating them from friends.
Situation 3: A questionable post?
There could also be the case of a user on a.social complaining about an individual post on b.social. This post could violate the code of conduct of either instance, or simply be in bad taste. It is a tricky situation because, as an instance administrator, you would probably have to talk to b.social’s administrator and ask about this user and this specific post.
If the issue is not resolved amicably, or the remote administrator refuses to reply for a long time, it is up to you to moderate the remote server’s content, which places a heavy burden on you as a Fediverse moderator: you now have to moderate both your own server and remote ones.
Data portability
When I mention these Fediverse shortcomings, people quickly point out that if you are ever banned from a server, you can go somewhere else or run your own server. This is only partially true:
Being banned from a server doesn’t mean you are allowed to take your data with you: While Mastodon’s default behaviour is to give banned users access to a limited subset of functionality that allows them, among other things, to export their data, nothing stops an admin from simply invalidating your login credentials or even deleting your account entirely, along with all your posts and your social graph. So you retain access and ownership only for as long as you keep saving backups regularly.
Only some data can be exported and re-imported: Most notably, Mastodon doesn’t allow exporting your post history or muted words. You can, however, import followers, follows, muted users, and blocked users, but really not much else.
Account migration is not available once you are banned: Fediverse has a mechanism that allows you to migrate accounts and have your old account seamlessly redirected somewhere. This allows your followers to follow your new account transparently without them having to search and follow you specifically on your new user. But this is impossible once you are banned; you must start from scratch.
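The export that does exist is a handful of CSV files, one per relationship type. As a rough sketch of what a backup tool would work with, here is how the follows export can be parsed; the exact columns vary between Mastodon versions, but the "Account address" column is the one that matters for re-import.

```python
import csv
import io

# Example of Mastodon's "following" CSV export format (columns vary by
# version; this two-column shape is the minimal common case).
sample = """Account address,Show boosts
alice@a.social,true
bob@b.social,false
"""

def read_follows(csv_text: str) -> list[str]:
    """Extract the account addresses from a follows export."""
    reader = csv.DictReader(io.StringIO(csv_text))
    return [row["Account address"] for row in reader]

follows = read_follows(sample)
print(follows)  # ['alice@a.social', 'bob@b.social']
```

Parsing these files regularly, before anything goes wrong, is the only way to guarantee you keep your social graph.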
Of course, this is in addition to the fact that telling a user to “simply migrate somewhere else” is not helpful, constitutes victim-blaming, and shifts away the responsibility of moderation and proportionality when it’s precisely the moderator who caused this situation to develop in the first place.
Massive adoption barriers
It is a controversial topic, as many Fediverse users believe that choosing an instance is very simple. Signing up is indeed relatively easy, and there is even a website [archived version] highlighting the different servers you can join, including topic- and region-specific ones. But the reality is more nuanced. Not all servers are equal: moderation policies, software stacks, and update policies differ, as do external factors such as which servers block the one you want to register on and which servers your friends are on. You may sign up for an instance you like only to find out, after a few months of using it, that a few instances you are interested in block your server. And often, there is no explanation or evidence anywhere for this block. @MissingThePt@mastodon.social puts it well in a humorous post [archived version] that is no less true:
Choosing a Mastodon instance is easy once you understand each instance’s values, customs, belief systems, and inter-instance alliances and feuds dating back 1,000 years.
As a new user without any background information on the most significant instances, who runs them, and what the general topic and community are like, you might be tempted to sign up for one of the many widely blocked instances.
Identity
The Internet is primarily an anonymous service. On Fediverse, you are generally not required to provide your government ID to sign up for a server. You can create tens or hundreds of accounts in different servers with different names; nobody will know the same person owns them unless you explicitly announce it.
This anonymity also means that the identity of anyone signing up to your server cannot be verified. For example, I (used to) have around four accounts on different Fediverse servers under my name, A* Ulven. If someone were to sign up to another server under that name, there would be no way to check whether that account is actually under my control. This has implications for spam and can cause reputational damage to the affected person.
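The underlying protocol reflects this: WebFinger (RFC 7033), which the Fediverse uses to resolve handles, only answers "does this account exist on this domain", never "are these two accounts on different domains the same person". A small sketch of the lookup URL it constructs (the handle is made up):

```python
from urllib.parse import quote

# WebFinger (RFC 7033) resolves an account *within* its own instance.
# Nothing links "ulven@a.social" to "ulven@b.social": identical handles
# on different servers are entirely separate identities.
def webfinger_url(handle: str) -> str:
    user, _, domain = handle.lstrip("@").partition("@")
    return (f"https://{domain}/.well-known/webfinger"
            f"?resource={quote(f'acct:{user}@{domain}')}")

print(webfinger_url("@ulven@a.social"))
# https://a.social/.well-known/webfinger?resource=acct%3Aulven%40a.social
```

Each lookup goes only to the handle's own domain, so impersonation on a different domain is invisible to the protocol.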
Fediverse toxicity
Now, this is a subjective matter, but I do believe the BDFL culture the Fediverse promotes, where anyone and everyone can run an instance and retain complete control over it, is intoxicating many administrators with a sense of purpose that seems to include moderating other people’s social experiences. This is a slippery slope: this kind of moderation needs checks and balances and should not simply be a matter of the whims of a given moderator on a particular day. Unfortunately, that is presently not the case.
The Fediverse moderation model is hilariously unscalable
Another complicating factor is that every instance is left to fend for itself. If you run a large instance with hundreds of users, you will likely find trustworthy peers to help you moderate your server’s content and ban undesirable users. But you will also have to spend those same moderation resources on foreign content from other servers. Because of this, smaller, less-staffed instances are at a much higher risk of moderator burnout and ineffective moderation.
To cope, server admins take many shortcuts: defederating entire servers upon the discovery of a single particularly inflammatory post, or copying moderation lists from other admins, sometimes indiscriminately importing blocks against servers that were wrongly banned in the first place. All of this makes the Fediverse experience more fragmented and miserable for everyone involved.
Spam is a very real problem
I briefly touched on spam concerns above, but this is an extremely serious matter that should have every single developer, server administrator, and user screaming at the top of their lungs and working very quickly towards a solution.
I have been running an e-mail server for over twenty years, including production servers sending millions of e-mails a week, personal projects, and small-scale hosting.
Now, Wikipedia has an entire article dedicated specifically to anti-spam techniques, mostly focused on e-mail, but many of the techniques listed there can be used or adapted for the Fediverse.
Fediverse will inevitably have to use a combination of techniques, including an aggregated score (similar to anti-spam software like SpamAssassin, which looks for specific keywords and behaviours in spam messages), authentication, and sender reputation lists.
Mastodon, the most popular Fediverse software stack, is still in its infancy at Internet scale: it gained popularity around 2017 and saw massive user growth in 2021 and 2022 due to events surrounding Twitter and its moderation and privacy policies. E-mail spam, by contrast, has been a problem for roughly three decades, and it remains a cat-and-mouse game in which e-mail server operators deploy techniques that spammers eventually find ways to circumvent. Unfortunately, I don’t believe a fundamentally different way to tackle this problem exists; if it did, e-mail spam would have been solved decades ago. The Fediverse will ultimately have to face reality and increase its anti-spam efforts significantly.
This is not a conspiracy theory or a theoretical attack. Several spam waves have already occurred, and they will worsen in intensity and frequency as the network grows in popularity and audience potential.
Bluesky is different
You may like, dislike, agree with, or disagree with Bluesky’s design principles, but the authors have taken a different approach to moderation, which I believe is worth exploring.
Actually decentralised moderation
Instead of relying on your instance administrator to weed out bad actors, you can subscribe to moderation lists. I am not going to recommend any particular list because it is important to trust the person running the moderation list, and this is a very personal matter.
Moderation lists allow individuals to moderate and take ownership of their Bluesky experience without being forced to run a server. List curators can also let users on other instances subscribe to their list without signing up to the curator’s instance.
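The structural difference from instance-level blocking can be sketched in a few lines: filtering happens per user, at read time, against whichever lists that user has chosen. This is an illustrative model only, not Bluesky's actual data model or API.

```python
# Illustrative posts and community-maintained moderation lists.
posts = [
    {"author": "friend.bsky.social", "text": "hello!"},
    {"author": "spammer.example", "text": "buy crypto"},
]

lists = {
    "anti-spam": {"spammer.example"},
    "anti-harassment": {"troll.example"},
}

# Each user picks their own subscriptions; no admin decides for them.
subscriptions = ["anti-spam"]
blocked = set().union(*(lists[name] for name in subscriptions))

timeline = [p for p in posts if p["author"] not in blocked]
print([p["author"] for p in timeline])  # ['friend.bsky.social']
```

A neighbour who subscribes to no lists, or to different ones, sees a different timeline; contrast this with a full-instance ban, which removes content for everyone at once.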
Not actually decentralised design?
I never claimed Bluesky to be perfect or even better than Fediverse. It is different for sure, but some people argue Bluesky is not truly decentralised [archived version]. And I agree with them, but I don’t think the Fediverse decentralisation model is a big deal or even a positive design choice for social media.
Yes, decentralisation has many good features, but moderation doesn’t work well with the Fediverse decentralisation model, as shown.
This could also be a matter of taste, if you are willing to accept certain tradeoffs.
Conclusion
This is not a criticism of Fediverse as a software stack, either from a technical or social perspective, but more of a practical demonstration of the drawbacks of decentralised moderation.
Moderation is one of the most complex subjects in sociology, especially because the Internet is generally anonymous. Again, this has pros and cons. Among the most significant advantages of an anonymity-by-default approach is the protection of minorities and otherwise socially disadvantaged people. But it also has many downsides, such as spam and the severe potential for doxing and character assassination.