

An adblock-like solution for the fediverse. Thoughts?


I see the problem that people could use these lists as some kind of recommendation.
Some time ago I created a #Lemmy post + #SocialHub topic of making #Moderation a more native concept to the #Fediverse.

Federated Moderation: where moderation actions, both admin-level and personal, federate as metrics one can decide to act on.

(Extension to that is Delegated Moderation, but that's not relevant here)

https://lemmy.ml/post/60475
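As a rough illustration of the idea, a moderation action could travel as an ordinary ActivityPub activity that receiving servers treat as a metric rather than a command. A minimal Python sketch; `Flag` is real ActivityStreams vocabulary, but the `moderationAction` field and all names and URLs here are hypothetical:

```python
import json

def build_moderation_activity(actor, target, action, reason):
    """Build a hypothetical ActivityPub-style activity describing a
    moderation action, so other servers can treat it as a signal
    rather than an instruction. Field names are illustrative only."""
    return {
        "@context": "https://www.w3.org/ns/activitystreams",
        "type": "Flag",              # "Flag" is a real AS2 activity type
        "actor": actor,              # who performed the action
        "object": target,            # account or instance acted on
        "content": reason,           # human-readable justification
        "moderationAction": action,  # e.g. "silence", "suspend" (made-up extension)
    }

activity = build_moderation_activity(
    "https://example.social/users/admin",
    "https://spam.example/users/bot42",
    "suspend",
    "automated spam",
)
print(json.dumps(activity, indent=2))
```

A receiving server could count how many trusted peers flagged the same target and let each user (or admin) decide their own threshold for acting on it.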

I will have to look at this in more detail to understand. Am curious
Via @csddumi on a different discussion I was pointed to the #APConf2019 talk by @emacsen on the same topic. Very interesting watch:

https://conf.tube/w/8TLrJAfKcViUYGvYPMAKT4
Via Yusf on #SocialCoding matrix chat, I was reminded of the very interesting thesis on #TrustNet

Here is the link: https://cblgh.org/trustnet/
Yes, but considering that right now we have no clue who banned what, maybe if we could see in the instance's description which lists are enabled, it could give us a more transparent overview of such decisions.

"I wand this account owner to be able to respond and maybe delete some of their content in order to avoid suspension. " makes sense
it is easy to get on such a list and hard to get off
Yes, this is an issue, I agree. But at least if you know there is one place where you can see if you are on such lists, and argue against it if you are, that seems better than not knowing at all. Right now, for example, I have no clue whether my instance is blocked by others or not. It is not at all transparent.
What I would like is to be able to set accounts on my server as unlisted, so that their posts don't show up in remote public timelines.
Good idea!
Also I would like to be able to receive reported accounts, so I as an admin can decide what to do about an account. I would like this process to adhere to the laws applicable, so the reported account gets notified and I want this account owner to be able to respond and maybe delete some of their content in order to avoid suspension.
I agree with this too.
The list would violate the GDPR for any accounts listed which identify an individual; e.g. if their nick is their full name. If the lists are diligent to only include (pseudo)anonymous accounts, I suspect it would be #GDPR compliant, but then someone could stay off the list just by choosing a nick that looks like a real name.

@utzer @heluecht @amolith @Gargron @hypolite @snopyta @hankg @humanetech @aral @jeena @dansup @pluralistic @tblock @xantulon @informapirata @toplesstopics @roko
We also have to ask what happens in cases where someone uses a pseudonym for their acct but their pseudonym is publicly well known to be associated with an individual in the EU? I have no idea, but would be interested in the answer. I suspect it's quite dicey territory.
Interesting I didn't think about this but am not sure if it would be an issue if the account is public.
@koherecoWatchdog Yes, it could turn into a #pillory. I guess the solution is that users can individually block other users, but it is kept to them. Blocking whole instances is public on #Friendica, for example; #Mastodon doesn't seem to make it public by itself, and some instances don't show their blocklists. I also already saw one that has its blocklist published on GitHub. The counter-argument is transparency for other users where federation is no longer possible, so they can see it and know why their messages don't go out (or arrive there). It already happened to me with some Mastodon instances that didn't disclose that I had been blocked there.

It's a mess indeed. When a whole node is blocked, the blockade /should/ have transparency. Transparency in that scenario isn't a pillory that we care about since it's not targeting individuals. Blocking individuals is tricky. If Alice blocks Bob b/c he's bullying or stalking her, then Alice needs to block Bob w/out Bob knowing (there's a Github thread on that).
I guess the solution is that users individually can block other users but it is kept to them.
This, yes! Perhaps such lists should be for individual users, applied for their own selves, not for instance admins. Not sure what is best...
What is the problem that isn't adequately handled by current technology (muting, blocking, imposed content warnings)?
Right, so I do not want these lists either, to be honest. I am against banning people/instances at the instance level. But after many discussions on the fediverse I realized that many admins block other instances and users, and this is non-transparent. This idea with block lists is to address these non-transparent and "personal" decisions, and to give users the power to quickly block spammers and bots.
How is this list to be regulated? By the number of votes? What if 99% of the submissions agree to ban a certain religion, or vegans, or economists who wear yellow shirts on Tuesday?
Because you make lists based on what you want to ban, like adblockers do. And it is up to the users which lists they want. There are adblocker lists for blocking ads and trackers, some block FB, Twitter and the like, others porn and so forth. Who decides? I do not know... who decides for adblockers?
Look I am the one who tries to fight against admins blocking entire servers or users. I had many discussions about this and I think it is wrong for an admin to do that. Users can do whatever they wish to with their online presence.

That being said, adblockers block ads, many of which are not dangerous to any computer, just to the mind (an annoyance). The fediverse will be more and more full of spammers, scammers and bots. We need some lists for those, at least. And the rest of the lists (porn, guns, whatever) are a good alternative to what is happening now. If the lists are public, one can at least see their name/instance there and do something about it. But right now you are blind to it all.
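For the filter-list analogy, the mechanics could be as simple as the domain lists adblockers use. A sketch, assuming a made-up one-domain-per-line format with `#` comments and `*.` wildcard rules for subdomains:

```python
def parse_blocklist(text):
    """Parse a hypothetical adblock-style domain list: one domain per
    line, '#' starts a comment, a '*.' prefix matches subdomains."""
    rules = []
    for line in text.splitlines():
        line = line.split("#", 1)[0].strip()  # drop comments and whitespace
        if line:
            rules.append(line)
    return rules

def is_blocked(domain, rules):
    """Return True if the domain matches any rule in the list."""
    for rule in rules:
        if rule.startswith("*."):
            base = rule[2:]
            if domain == base or domain.endswith("." + base):
                return True
        elif domain == rule:
            return True
    return False

rules = parse_blocklist("""
# spam instances
spam.example
*.botfarm.example
""")
print(is_blocked("sub.botfarm.example", rules))  # True
print(is_blocked("good.example", rules))         # False
```

Just as with adblockers, a user could subscribe to several such lists and the client would merge them locally; nothing forces the whole instance to adopt any list.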
Oh no, I'm not sure how you interpreted my comment. It was in no way combative or anything like that 😀. The reason I made this post was exactly this: to get reactions and discussions. You make some good points and I am absorbing them. All friendly, 100% 😀
I appreciate your desire for transparency, but there's a dark side to that, as well. Imagine such a set of lists where who is being blocked and who submitted the proposal to block them is publicly viewable.

Now imagine the person or people behind the blocked/banned website. They know exactly who is against them. Their target is perfectly described. Now comes the DDoSing, the doxing, the swatting, the complaint calls to employers, the endless harassment in other channels; or, if it's a corporation, a polite call from the legal department.

If I ran a server, the last thing I would do is make a public pronouncement about who I'm blocking and who told me to do so.

@tio @Gargron @amolith @aral @dansup @hankg @heluecht @humanetech @hypolite @informapirata @jeena @pluralistic @roko @snopyta @tblock @toplesstopics @utzer @xantulon
You can say the same about the adblockers and those who submit the entries. Are Google, FB and other companies coming after these people? I don't think so. Also, I am talking mainly about lists for spammers, scammers and bots. Plus, you can make it so that submission is fairly anonymous. I don't see this as a real issue.
Transparency is needed particularly when a node admin does the blocking. E.g. mastodon.social currently blocks mg@101010.pl & does so in a deceptive way that disguises the fact that the ban is done by mastodon.social, not 101010.pl. It's an abusive Reddit-style shadowban that keeps both sides of the ban in the dark.

@tblock @hypolite @pluralistic @aral @Gargron @dansup @humanetech @amolith @heluecht @informapirata @hankg @snopyta @roko @tio @toplesstopics @xantulon @utzer @jeena
Of course when a bully is banned, then transparency harms bully victims.

The fedi needs to treat bullies differently than politically motivated censorship.

MG is a beneficial svc but mastodon.social makes a value judgement to censor a fedi service from the biggest swath of fedi users. Abuse by admins needs transparency

@tblock @hypolite @pluralistic @aral @Gargron @dansup @humanetech @amolith @heluecht @informapirata @hankg @snopyta @roko @tio @toplesstopics @xantulon @utzer @jeena
This comes down to the fundamental design of instances of many: everyone is equal but some are more equal than others. The admin of an instance of hundreds of thousands is, whether they like it or not or want to be or not, a de facto mini Zuckerberg. (1/2)

I love that we have the fediverse, but that's why I see it as a stopgap. If we want to solve this problem from first principles, we should be looking to design systems that make a web of "instances of one" viable. (Keeping in mind that it's not a trivial problem to solve and is not going to happen overnight.)

This reminds me of a poll I recently made. I think having personal instances is a good way to go, although not everyone would consider it feasible. Additionally, how would this be different from communities where the moderation is left to the user?

The poll I mention: https://saturno.com.ve/notice/AJIznfEXlDsYFiVZbM
Users aren't prevented from running their own server, with blackjack and scripts. But I don't see those fans of decentralization when it comes to renting a server from a hosting provider, paying for it and administering it. It's an amazing thing: people like "freedom" as long as they don't have to contribute.
Yes but how do users know when they need their own svr (or another svr)?

The problem is not so much that an admin can abuse their power; it's that when they choose to abuse their pwr it's invisible to the users. Admins need control, but a fair & balanced system spotlights bad admin actions so users know to bounce & avoid poorly admin'd nodes.

/cc (list shortened to just those whoโ€™ve participated)
@hypolite @heluecht @aral @humanetech @tio @sky_scat @xantulon
#Lemmy is an interesting case b/c the s/w includes a modlog, so users can review the modlog and check whether a moderator or admin is reasonable. But with Lemmy it's just an illusion b/c admins of the flagship instance (lemmy.ml) have actually been caught deleting modlogs when public attention is on an embarrassingly poor admin action.

We need to get those modlogs on a public blockchain ;)
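A full blockchain probably isn't needed for tamper-evident modlogs; a plain hash chain already makes silent deletion detectable, provided users keep or compare copies of the log. A toy sketch (not an actual Lemmy feature):

```python
import hashlib
import json

class ModLog:
    """Minimal append-only modlog with hash chaining: each entry commits
    to the previous entry's hash, so editing or deleting any past entry
    breaks verification for everything after it."""

    def __init__(self):
        self.entries = []

    def append(self, action):
        prev = self.entries[-1]["hash"] if self.entries else "0" * 64
        payload = json.dumps({"action": action, "prev": prev}, sort_keys=True)
        digest = hashlib.sha256(payload.encode()).hexdigest()
        self.entries.append({"action": action, "prev": prev, "hash": digest})

    def verify(self):
        """Recompute the whole chain; False means the log was altered."""
        prev = "0" * 64
        for e in self.entries:
            payload = json.dumps({"action": e["action"], "prev": prev}, sort_keys=True)
            if e["prev"] != prev or e["hash"] != hashlib.sha256(payload.encode()).hexdigest():
                return False
            prev = e["hash"]
        return True

log = ModLog()
log.append("removed post 123: spam")
log.append("banned user bot42")
print(log.verify())             # True
log.entries[0]["action"] = "x"  # tamper with history
print(log.verify())             # False
```

The catch is the same as with any transparency log: it only helps if outsiders mirror the log, otherwise an admin can simply regenerate the whole chain.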
@hypolite @heluecht @aral @humanetech @tio @sky_scat @xantulon
So consider this: Brussels police posted recovered stolen property on Facebook. Victims not on FB were unable to see if their stolen property was recovered, thus the police were only serving victims who use FB. When that story came to r/Brussels, Octave censored it.

You might say "fair enough, he's the mod". But admins & moderators are empowered by their users & Reddit ensures that most users are unaware of power abuses.
@hypolite @heluecht @aral @humanetech @tio @sky_scat @xantulon
Yeah, I do not think it is a good idea to have such massive clusters.
Of course when a bully is banned, then transparency harms bully victims.
But if a user bans the bully, then there is no reason for the bully to know. Transparency is meant for instances: when instances ban, it should be transparent to all. What users do with their own accounts is their own private business. But when instances do these sorts of things and affect other users, then it should be automatically public.
I haven't had time to look at all the messages and the linked documents. But I can imagine some kind of federated blocking. Meaning: some server administrators could cooperate here. For example: the administrators of servers A, B and C share common views and have agreed on their moderation policy. So every moderation action done by one of them could be shared across these systems. And with each moderation action, one could decide whether it is a local or a distributed moderation activity.

I like it, because it is more decentralized and doesn't need a single server. Also it respects cultural differences the best.
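A toy model of that A/B/C cooperation, with every name hypothetical: each moderation action carries a scope, and only "distributed" actions are replayed on the cooperating servers:

```python
class Server:
    """A hypothetical instance that shares moderation actions
    with explicitly trusted peers."""

    def __init__(self, name):
        self.name = name
        self.peers = []       # servers with a shared moderation policy
        self.blocked = set()  # accounts/instances blocked locally

    def federate_with(self, *others):
        self.peers.extend(others)

    def moderate(self, target, scope="local"):
        self.blocked.add(target)
        if scope == "distributed":
            # Only actions explicitly marked as distributed
            # are replicated to the cooperating servers.
            for peer in self.peers:
                peer.blocked.add(target)

a, b, c = Server("A"), Server("B"), Server("C")
a.federate_with(b, c)

a.moderate("troll@spam.example", scope="distributed")
a.moderate("annoying@other.example", scope="local")

print(sorted(b.blocked))  # ['troll@spam.example']
```

The local/distributed flag is exactly the per-action decision described above; a real implementation would of course carry these actions as federated messages rather than direct method calls.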
If this were easy it wouldn't be an issue... What I'm wondering, regarding the privacy/GDPR issues, is whether user names could be hashed and only the hash transferred to the block list, to comply with privacy concerns. The real issue is that one person's sarcasm is another person's insult. It requires a pattern of, call it, abuse, and that's hard to automate. Employers, and some license applications, scan social media, and I can imagine it could be very damaging to show up on block lists because you speak up about a specific ideology. This already happens in far too many places now.
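The hashing idea could look like the sketch below. Worth noting: with a public salt this is only pseudonymisation, since handles can be brute-forced from a directory of known accounts, so it may still not satisfy the GDPR on its own:

```python
import hashlib

def hashed_entry(handle, salt="fediblock-v1"):
    """Hash a handle so the published list never contains the name
    in clear text; checking membership requires already knowing the
    handle. The salt value here is a made-up example."""
    return hashlib.sha256((salt + handle.lower()).encode()).hexdigest()

# The published list contains only hashes, never handles.
blocklist = {hashed_entry("spammer@bad.example")}

def is_listed(handle):
    return hashed_entry(handle) in blocklist

print(is_listed("spammer@bad.example"))  # True
print(is_listed("alice@good.example"))   # False
```

Lowercasing before hashing keeps lookups consistent across clients that capitalize handles differently.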
Maybe the solution is to focus on spammers and bots. That would maybe be a saner approach, so you don't mix in all sorts of personal judgements, and it would make the lists more objective.
How do you separate dealing with spamming personal accounts from laundering personal grievances towards personal accounts?
I do not have an exact answer. How are the ones behind adblockers, or the new content blockers for YouTube, dealing with this? Will they remove self-promoting content from some websites? How about the lists of trackers, are they going to remove the trackers that are there for the purpose of making a website work? I don't know... but these projects started somewhere, got better and better over time, and are super useful at what they do.

Labeling certain accounts as spamming or bots, or whole instances, may not be far from what the above examples are doing.

By the way I want to mention that I really hate when admins block users/instances. They should only do that in extreme situations (like spam and bots). But I do understand that we need tools for this as well. Also you have to understand that if we do not have a better solution to this admins will continue to ban users/instances in a non-transparent way. Friendica at least publishes all such blocks publicly by default. But it needs to also send a notification to all users when a decision like this has happened.

So the discussion is very important. I see these adblock-like lists mostly geared for users and not admins.

So on Friendica, whenever admins add domains to the blocklist, it becomes public at the /friendica URL. Admins are also required to explain the reason for the block.

Now I have asked them to automatically add a notification to all users of that instance when a domain is added or removed from the list - https://github.com/friendica/friendica/issues/11779 . This is a move in the right direction.
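For comparison, Mastodon 4.x can expose an instance's domain blocks at `GET /api/v1/instance/domain_blocks` when the admin chooses to publish them. A rough Python sketch of consuming such a list; the `summarise` helper and the sample data are illustrative:

```python
import json
from urllib.request import urlopen

def fetch_domain_blocks(instance):
    """Fetch an instance's published domain blocks. Mastodon serves
    them at /api/v1/instance/domain_blocks when the admin has made
    the blocklist public; other software differs (Friendica shows
    its list on the /friendica page as HTML)."""
    with urlopen(f"https://{instance}/api/v1/instance/domain_blocks") as resp:
        return json.load(resp)

def summarise(blocks):
    # Each entry carries the blocked domain, a severity such as
    # "silence" or "suspend", and optionally the admin's comment.
    return {b["domain"]: b["severity"] for b in blocks}

# Sample payload in the shape Mastodon returns (data is made up):
sample = [{"domain": "spam.example", "severity": "suspend", "comment": "spam"}]
print(summarise(sample))  # {'spam.example': 'suspend'}
```

A notification feature like the one requested in the issue above could simply diff two snapshots of this list and message users about the additions and removals.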

Friendica is leading the way to a great social platform in so many regards, and the developers, @Michael Vogel, @Hypolite Petovan and @Tobias, are so friendly and open to suggestions. I hope this never changes because it is the most important aspect of Friendica. I really feel comfortable talking to these guys and suggesting improvements for Friendica, without (hopefully) overwhelming or annoying them.

@Tio @utzer ~Friendica~ @Michael Vogel @Amolith @Eugen @Hypolite Petovan @Snopyta @Hank G @Humane Tech Now @ar.al🌻 @Jeena @dansup @Cory Doctorow's linkblog @TBlock @xantulon @Informa Pirata @Cleo McKee

So the last time we talked about this topic I explained how most of these censorship issues could be avoided by using peer to peer systems. But now I wanna take a look at the fediverse itself and see how we can make things better. I have an idea that I want to share with you all, because I think it could solve a lot of issues with our current method of moderation. I felt like it was better to make a separate post for it so explained it here - https://social.trom.tf/display/dbc8dc44-1562-f428-15c1-f3b204992293

Nice post. Some of it aligns with the #FederatedModeration idea I posted on the other thread: https://lemmy.ml/post/60475

This makes all moderation activity native to the #Fediverse as federated #ActivityPub messages, which can serve the transparency part.

How you surface them in UI is app-specific. Will add your post to #Lemmy. CC @emacsen @csddumi
@Humane Tech Now @Emacsen @CSDUMMI✍️🕊️🏛️ @utzer ~Friendica~ @Michael Vogel @Tio @Amolith @Eugen @Hypolite Petovan @Snopyta @Hank G @ar.al🌻 @Jeena @dansup @Cory Doctorow's linkblog @TBlock @xantulon @Informa Pirata @Cleo McKee

I have made another post to explain about the social impacts of moderation & censorship - https://social.trom.tf/display/dbc8dc44-7762-f814-5ad9-208146996908

So far we've been talking only about the technical side of things, looking at how we can improve moderation by giving more power and choice to the users. But from some of the replies I got on my last post, I felt like a lot of people don't understand why we're doing this in the first place: why should we give more power to users over admins? So I wrote this post to answer that question, but also to help people understand how complex a topic moderation actually is; it isn't simply "blocking bad content".

Once again, great article! I'm not working on moderation right now and I'm getting information-overloaded, so I'll just add another reference to the Lemmy post.

But if I were ever to implement this, I'd create a Content Moderation domain model and a ubiquitous language that concisely summarize the domain and business logic before starting any coding, to bring clarity to the great amount of input and requirements.

In other words I'd practice Domain Driven Design (Strategic DDD, to be precise).