
Moderate people, not code


The scope of the fediverse has been hotly debated recently. Are we a big fedi? Or a small fedi? Are instances just nodes? Or networked communities? Which Camp of Mastodon are we in? How far should our replies travel? How about our blog posts and Bluesky skeets? Should we welcome Threads? Or block them?

Should we open the fediverse to everyone, let them exercise freedom of association, embrace the inevitable Eternal September, and get good at managing the problems? Or should we learn from Twitter that a “global town square” has big downsides, try to prevent those harms from the beginning, and only expand online communities once we have their consent?

Should there be one internet? Or multiple, sometimes separate internets?

I have a vested interest in this topic. I build and run two bridges, Bridgy and Bridgy Fed, that push the boundaries of the fediverse by integrating traditional web sites and blogs. These bridges are small so far, but as I add Bluesky and other networks, I expect them to grow and attract more attention and stress test those boundaries. I have to decide how those bridges work, and this question squarely impacts those decisions.

If there’s a right answer, I don’t know it yet. I have thoughts, naturally, but I know other people have much more knowledge and experience here, all the way back to The Well and Usenet and Habitat, and I know I have more to learn. Not to mention that as a straight white guy, I have plenty of privilege to check, and not much lived experience of being harassed or mistreated online. This is one way for me to think out loud, work through ideas, ask questions, and hope for useful feedback.

Here’s one possible conclusion: Moderate people, not code. When you choose who to federate with or block or mute, don’t look at protocols, or networks, or software. Look at users, and communities, and their behavior. At the end of the day, those are probably what you really care about.

Context collapse, or where is the fediverse anyway?

Let’s take a step back and look at the WordPress ActivityPub plugin. Historically, it was easy to tell a WordPress blog and a fediverse server apart. The blog is an island. It has posts, and comments, but they stay on the blog. They’re not federated.

The fediverse server, on the other hand, is federated. It has local users, but it also shows remote users and posts from other servers. When local and remote users interact, those interactions flow across all of the servers involved. This is the fediverse we know and love.

When you install the ActivityPub plugin on your WordPress blog, it suddenly becomes a fediverse server too. It federates posts, replies, and other interactions with other “native” fediverse servers, like you’d expect. When someone in the fediverse sees one of your posts and replies, that reply federates back and appears as a comment on your blog.
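To make that flow concrete, here's a minimal sketch, in Python, of the blog's side of the exchange: a federated reply arrives at the blog's inbox as a Create activity whose object points back at the original post via inReplyTo, and gets stored as a comment. The storage and helper names are illustrative, not the plugin's actual code.

    # A toy sketch, not the WordPress plugin's real implementation: turn an
    # incoming ActivityPub Create activity for a reply into a blog comment.
    BLOG_POSTS = {'https://example.blog/2024/hello-fediverse': {'comments': []}}

    def handle_inbox(activity: dict) -> None:
        """Handle an ActivityPub activity POSTed to the blog's inbox."""
        if activity.get('type') != 'Create':
            return  # this sketch only cares about newly created objects

        note = activity.get('object') or {}
        post = BLOG_POSTS.get(note.get('inReplyTo'))
        if post is None:
            return  # not a reply to one of our posts

        post['comments'].append({
            'author': activity.get('actor'),     # the replier's actor URL
            'content': note.get('content', ''),  # the reply's HTML content
            'source': note.get('id'),            # link back to the original reply
        })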

Is this surprising? Is it a problematic instance of context collapse? Maybe! But why? The exact same thing happens between true, “native” fediverse servers. In both cases, the post and reply are public, and can be seen by anyone on the internet. The publics may differ somewhat, but in both cases, the reply is federated to a different place than it came from. On a mechanical level, there’s no clear difference between “native” and “non-native” fediverse servers.

This applies to other social networks too, whether bridged or native. When Mostr bridges a fediverse post into Nostr, it copies the post to Nostr relays, just like federation copies it to fediverse servers. Same with Threads, or Flipboard, or Tumblr, if/when they add ActivityPub support.

Is it a cultural problem? Blogs are a generation older than the fediverse, and grew up with different norms and user expectations. Blogs had mashups and GoogleBot and “pics or it didn’t happen”; the fediverse has memes and trolls and consent. Old school web sites feel different from “native” fediverse servers. Those expectations and feelings may not match the technical reality of ActivityPub, but they still matter.

This kind of context collapse happens entirely within the fediverse too, though. People on Mastodon and Pleroma tend to interact with each other more than with people on link aggregators like Lemmy and kbin, or video sites like PeerTube, or streamers on Owncast. Those communities all have their own cultures, to some degree, but they all still happily federate back and forth. Does everyone understand and expect that? Is it meaningfully different from federating with a blog?

Qui consentit

Context collapse is just one problem. Many early fediverse people were queer and trans refugees from mainstream social networks who left in search of somewhere smaller, safer, and more welcoming. They staked out a clear position of rejecting intolerance in the fediverse, paradox of tolerance and all.

For them, and others who see it as a safe haven, “small fedi” is more than just a preference. It’s a key part of feeling safe online. The “native” fediverse itself has long included bad actors and instances that routinely get defederated. Far right instances like Gab and Truth Social may technically support ActivityPub – some are even based on Mastodon – but the fediverse saw them coming and blocked them in order to prevent an inevitable flood of abuse.

Federating at the instance level is generally opt-out. Most server software defaults to allowing federation with other instances. When an admin defederates with an instance, they add them to a blocklist.

However, some instances flip this and federate on an opt-in basis. Unknown instances start out blocked; admins have to manually add them to an allowlist before they can federate.
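Mechanically, the difference between the two approaches is just which list you consult and what the default is. Here's a rough Python sketch with illustrative domain names; real servers keep these lists in their databases and admin UIs:

    # Opt-out federation: talk to everyone except explicitly blocked instances.
    BLOCKLIST = {'blocked.example'}
    # Opt-in federation: talk only to explicitly allowed instances.
    ALLOWLIST = {'friendly.example', 'kin.example'}

    def may_federate(domain: str, opt_in: bool = False) -> bool:
        # Decide whether to exchange activities with a remote instance.
        if opt_in:
            return domain in ALLOWLIST   # unknown instances start out blocked
        return domain not in BLOCKLIST   # unknown instances start out allowed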

Jon Pincus calls this consent-based federation, and I like it a lot. It’s bold, and not how I personally connect with people online, or how I think all instances should work, but I deeply appreciate the consistency that it provides admins who are strongly protective of their users. If you believe in small fedi, if you don’t trust Threads or #TwitterMigration or Eternal September, you should be able to start out closed, exercise your freedom of association, and choose who to federate with based on who they are and what they do.

This means focusing on people and communities more than networks or software. And not a moment too soon! Giants like Meta, Automattic, and Flipboard may not pose quite as much clear and present danger as Gab or Truth Social, but they have huge, mainstream populations that are new to the fediverse and still pose risks. Beyond the giants, individual web sites and blogs historically couldn’t speak ActivityPub, but now they do. Other decentralized networks use their own protocols, not ActivityPub, but bridges are closing those gaps.

Networks and protocols do sometimes have their own cultures. The fediverse grew on the backs of progressives, queer people, and others who the mainstream often saw as misfits. Bluesky famously found early traction with shitposters and TPOT. Nostr is full of Bitcoiners. Old graybeards like me still cling to the web, idolizing Yahoo Pipes and posting thinkpieces to our tiny blogs.

These are overgeneralizations. They have a kernel of truth at most, and as the networks grow, that kernel shrinks. And that’s the point! Whether ActivityPub or ATProto or webmention, the underlying technical protocol a community uses to interact online is a poor way to judge who they are and whether you might like them. Same with how their web sites and apps look, or whether they post toots or links or videos, or whether they call them replies or comments.

The best way to judge a community is to actually judge them. Look at who they are, what they say, and how they behave. If you’re responsible for a community, you’ll have your own bar for who you want to interact with. Fitness groups might not federate with baking schools. Jewish synagogues probably won’t federate with Nazi gangs. That’s great! Make those judgments for your communities, instance by instance, not by network or server software. Those sledgehammers are too big.

Users, instances, and mod services

Of course, even instance-level, consent-based federation is still a big sledgehammer. Much of the time, an instance itself isn’t rotten; it may just have a bad actor or two, or even someone who made an honest mistake. User-level tools like blocks and mutes often seem like a better first step in these cases.

However, throwing people to the wolves on their own, naked, seems like the wrong idea. Most people who experience abuse online don’t have the time, knowledge, or willingness to wade through it all and block their way back to a sustainable level, nor should they have to. Pincus describes this well:

Even if you’re not an expert on online privacy and safety, which sounds better to you: “Nazis and terfs can’t communicate with me unless I give my permission” or “Nazis and terfs can harass me and see my followers-only posts until I realize it’s happening and say no”?

Quite so. This isn’t a nail in the coffin of user-level moderation, but it is a clear indictment of making people use those tools in isolation, everyone for themselves. Same with admins; they shouldn’t have to constantly be on the back foot, playing catch-up with every new troll farm and CSAM warren.

Fortunately, collaborative moderation tooling has made solid progress recently, much of it grassroots and bottom-up. Email server admins were pioneers here with Spamhaus, DNSBLs, and other shared blocklists of domains and IP addresses. The social space had shared blocklists too, followed by independent tools like Block Party. Fediverse-native services like Fediseer, The Bad Space, and FediMod now help admins share blocklists and related instance-level information. Even ActivityPub itself explicitly supports federated Block and Flag activities, and may eventually add Reports.
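For a sense of what that federated reporting looks like on the wire, here's a rough example of an ActivityStreams Flag activity, serialized as JSON and delivered to the remote instance's inbox. The actor, targets, and reason below are made up, and the exact fields vary by implementation:

    import json

    # An illustrative federated report: a Flag activity about a remote account.
    flag = {
        '@context': 'https://www.w3.org/ns/activitystreams',
        'type': 'Flag',
        'actor': 'https://home.example/users/admin',
        'object': [
            'https://remote.example/users/spammer',           # the reported account
            'https://remote.example/users/spammer/posts/123', # specific posts
        ],
        'content': 'Spam and targeted harassment',
    }

    print(json.dumps(flag, indent=2))  # the body that would be POSTed to the remote inbox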

Other networks have ambitious ideas of their own. Bluesky has a platform for independent moderation and labeling services. These services might specialize in different areas, e.g. detecting CSAM or fighting antisemitism. Jewish Bluesky users could subscribe to an antisemitism mod service to proactively filter out abuse so they never have to see it at all.
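Mechanically, subscribing to a mod service boils down to letting its labels drive your client's filtering. Here's a hedged sketch, loosely modeled on atproto-style labels; the labeler identifier, label names, and post structure are illustrative, not Bluesky's exact API:

    # Labelers this user has subscribed to, and labels they want hidden.
    SUBSCRIBED_LABELERS = {'did:example:antisemitism-mod-service'}
    HIDE_LABELS = {'antisemitism', 'harassment'}

    def visible(post: dict) -> bool:
        # Hide the post if any subscribed labeler applied a label we filter on.
        for label in post.get('labels', []):
            if label['src'] in SUBSCRIBED_LABELERS and label['val'] in HIDE_LABELS:
                return False
        return True

    post = {'text': 'some reply', 'labels': [
        {'src': 'did:example:antisemitism-mod-service', 'val': 'antisemitism'}]}
    assert not visible(post)  # the subscriber never has to see it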

The IndieWeb’s nascent Vouch protocol brings a web-of-trust approach to moderating webmentions. When you send someone a webmention, you can include a link to a friend-of-a-friend who knows you both, based on existing links between your web sites. The receiver can evaluate these links, determine whether that person “vouches” for you, and if so, they can accept the webmention and trust you to send more in the future.
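The receiver's side of that check might look something like this sketch, assuming the vouch URL arrives alongside the webmention's source and target; the trust storage and link detection are simplified well past what a real implementation would do:

    from urllib.parse import urlparse
    from urllib.request import urlopen

    TRUSTED_DOMAINS = {'friend.example'}  # domains this site already links out to

    def vouch_ok(source: str, vouch: str) -> bool:
        # Accept the webmention from `source` only if the vouch page lives on a
        # domain we already trust and it links back to the sender's domain.
        if urlparse(vouch).hostname not in TRUSTED_DOMAINS:
            return False
        page = urlopen(vouch).read().decode('utf-8', errors='replace')
        sender = urlparse(source).hostname or ''
        return bool(sender) and sender in page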

Even Nostr, the land of crypto libertarianism and adversarial interop, has a thriving ecosystem of widely-adopted shared mutelists and moderated groups. Nostr tends to be an anything-goes kind of place, so maybe users need moderation tools even more than elsewhere, just to stay afloat? Or maybe not, who knows.

I love all this shared tooling for the same reason I like consent-based federation and communities over networks: it brings the moderation focus back to people, groups, and their behavior. Some of us want to connect far and wide, others want to lock it down and proceed with caution. Both are ok! People and communities feel like the right units to work with in both cases.

Opt-in vs opt-out

For bridge developers like me, the concrete question is whether to make them opt-in or opt-out. Does everyone need to turn on the bridge for themselves? Or should it work for everyone by default, include clear labels, and let people turn their accounts off if they want?

Opt-in is the conservative answer, and what some vocal parts of the fediverse seem to expect, at least for services that provide any kind of global indexing or search. (Bridgy and Bridgy Fed don’t, but still.)

However, for services like bridges that live and die by network effects, opt-out seems to be the only way to be broadly useful. If Alice opts into bridging her Bluesky account to the fediverse, people there will see her and her posts, but she won’t see their replies or other interactions. More importantly, people in the fediverse still won’t see anyone else on Bluesky.

All else equal, people tend to stick with defaults. Opt-in rates are famously low, regardless of what they’re for. (This is the premise of an entire pop-sci book, a movement, and even a department of the UK government.) As an example, Mastodon made a big splash and press push for its opt-in full text search last September. After four months, one instance looked at ~800k users across the fediverse – two thirds of all active users! – and found that only 4% had opted in.

Certainly, of the remaining 96%, some knew about the option, carefully evaluated it, and deliberately decided against it. But realistically, most of them probably hadn’t heard about it, or didn’t know how to opt in, or forgot, or didn’t feel strongly enough to bother.

If bridges were opt-in, and I could only follow 4% of people on other networks, they would be drastically less useful. I know I’d be much less likely to keep building and running them. My personal interests don’t justify anything, of course, but the utility of these bridges might. I hear regularly from a wide range of people that they love Bridgy and Bridgy Fed, that the bridges connect them to people they might not otherwise reach, and that they find real, deep value in those connections. That wouldn’t happen, for the most part, if they were opt-in.

So…?

Like I mentioned earlier, I have more questions than answers. I’m keenly interested to hear more from people who study online communities and their health. I’ve been following our current debates with a close eye, trying hard to understand what it all means and what I should do.

First off, I’d kill for a thorough, comprehensive threat model of human interaction online. Threat modeling is an important technique from the security community that I’d love to see applied to human behavior more often. Could we bring together the infosec people, sociologists, and community managers to come up with a concrete threat model for developers like me to follow? That would be a huge help.

Otherwise, I’m ready to listen. If you’ve read this far, you can probably tell that I lean toward big fedi, inclusion, opt-out federation, and opt-out bridging. Even so, I know open federation doesn’t fit all communities. Consent-based, opt-in federation is a great option, especially when it’s based on communities instead of networks or protocols. Moderate people, not code.

At the end of the day, I’m just an engineer. I’m writing this because I need to type some Python code into Emacs, but I’m not wise enough to know what to type. Thank you for reading; thanks in advance for your feedback.
