

Hi #fediverse

Wanna brainstorm some what-ifs?

Federated Moderation: What if Moderation was an #activitypub extension and moderation actions would federate to ease the life of mods + admins?

Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?

Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?

https://lemmy.ml/post/60475
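To make the Federated Moderation idea slightly more concrete, here is a rough sketch of what such an activity could look like, written as a TypeScript object. Everything beyond the existing ActivityStreams "Flag" type is hypothetical: the extension context URL, the "result" action and the property layout are just one possible shape for such a vocabulary, not an existing spec.

// Hypothetical federated moderation activity, modelled as a plain object.
// Only "Flag" exists in ActivityStreams today; the extension context and the
// "result" field are assumptions about what a moderation vocabulary could add.
interface ModerationActivity {
  "@context": string[];
  type: "Flag";
  actor: string;        // the moderator taking the action
  object: string;       // the reported post or account
  target?: string;      // the instance or community the action applies to
  summary?: string;     // human-readable reason, so the decision stays transparent
  result?: {            // hypothetical extension: the action that was taken
    type: string;       // e.g. "Remove", "Silence", "Warn"
    duration?: string;  // ISO 8601 duration, e.g. "P7D"
  };
}

const example: ModerationActivity = {
  "@context": [
    "https://www.w3.org/ns/activitystreams",
    "https://example.org/ns/moderation", // hypothetical extension context
  ],
  type: "Flag",
  actor: "https://instance-x.example/users/mod-alice",
  object: "https://instance-y.example/objects/abusive-post-123",
  target: "https://instance-y.example/",
  summary: "Repeated harassment, violates CoC section 3",
  result: { type: "Silence", duration: "P7D" },
};

// Delivering this to other instances' inboxes would let their mods see the
// action and decide for themselves whether to mirror it.
console.log(JSON.stringify(example, null, 2));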


in reply to smallcircles (Humane Tech Now)

No, the fact that moderation is fully decentralized, and I'd say entirely optional, is a feature, not a bug. There's no need to fix it. There is a need for robust spam filters though because that problem is only a matter of time.
in reply to Gregory

in my idea moderation is still fully decentralized, but it becomes part of the fabric of the Fediverse itself, instead of being an instance-by-instance, app-specific thing (an app can of course still choose not to implement it and go its own way).

Also there would be more visibility to this time-consuming and under-appreciated job.
in reply to smallcircles (Humane Tech Now)

you mean a mod team with some mods available all the time to fedizens having an issue? How should they be chosen, by collective vote? What rights should they have and how would their actions be controlled (quis custodiet ipsos custodes)?
in reply to Maike

no, this is not exactly what I mean.

In Delegated Moderation a mod of Instance X might be trusted to do work for Instance Y and their help can be invoked when needed.

In Moderation-as-a-Service any fedizen can offer to do moderation work. It is completely decoupled from instances. But in this model you would need some mechanism to know whether you can trust someone offering their help. A simple reputation system based on "I vouch for this mod" statements given by fedizens might do, at the start.. dunno.
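As a rough illustration of what such a vouch-based check could look like (everything here is invented for the example: the names, the threshold, the data structures; nothing is an existing spec), a TypeScript sketch:

// Toy vouch-based reputation for Moderation-as-a-Service (purely illustrative).
// Fedizens vouch for a mod; an instance delegates work only once the mod has
// enough vouches from accounts that instance already knows.
type ActorUri = string;

const vouches = new Map<ActorUri, Set<ActorUri>>(); // mod -> set of vouchers

function vouch(voucher: ActorUri, mod: ActorUri): void {
  if (!vouches.has(mod)) vouches.set(mod, new Set());
  vouches.get(mod)!.add(voucher);
}

// An instance trusts a mod if at least `threshold` of its own known
// accounts have vouched for them.
function instanceTrusts(
  mod: ActorUri,
  knownAccounts: Set<ActorUri>,
  threshold = 3,
): boolean {
  let count = 0;
  for (const v of vouches.get(mod) ?? new Set<ActorUri>()) {
    if (knownAccounts.has(v)) count++;
  }
  return count >= threshold;
}

vouch("https://a.example/users/carol", "https://b.example/users/mod-dave");
vouch("https://a.example/users/erin", "https://b.example/users/mod-dave");
console.log(
  instanceTrusts(
    "https://b.example/users/mod-dave",
    new Set(["https://a.example/users/carol", "https://a.example/users/erin"]),
    2,
  ),
); // true -> this instance could invoke mod-dave's help

An instance would of course still apply its own CoC and onboarding on top of a check like this.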
in reply to smallcircles (Humane Tech Now)

so instance admins would assign somebody out of a pool of volunteers to help with moderation and thus determine what rights these people should have on that specific instance? Sounds feasible to me. Some guidelines or code of conduct should exist besides the reputation system.
in reply to Maike

yes, that is the idea.

For an instance it'd mean adhering to the Code of Conduct, but - as you say - probably also to some more specific moderation guidelines.
in reply to smallcircles (Humane Tech Now)

I think moderation is a very bad idea overall. Control in users' hands is a good idea tho. If some posts are "bad", they are so for some people and not for others. Users already have the ability to ignore/block such users/content. The only issue is that instance admins (like me) may be faced with this bad content on their instance. And the solution to that, I'd say, is to perhaps hide content from the local and global feeds, but not to ban/block users or content generally. If others want to see that bad content via a direct discovery process like search or a link and such, they should be able to.

Moderation quickly transforms into censorship. Instance admins should not decide for the rest of the users on their instance what is good or bad content to see.
in reply to Tio

So if someone is using your instance to harass people on another instance, you feel you shouldn't take action because they're only harassing some people and not others? Or are you using some other definition of moderation than I am?

activitypub group reshared this.

in reply to KelsonV

How can some users harass others when "others" can block them?

activitypub group reshared this.

in reply to KelsonV

Look at it this way: I can get harassed over email too. Anyone can send me an email. But I would not want gmail or any email provider to try and "defend" me against these harassers because I then put a lot of trust into a few individuals to decide what is good or bad for me. I would much rather prefer to have control over this and mark as spam any harassing email address so I block them on my end. Like a spam filter that I opt in to use and can have control over its rules.

activitypub group reshared this.

in reply to Tio

that's you, but other people like to have spam protection and possibly other filters. Nothing wrong with that, everyone can use what they prefer.

@humanetech @activitypub @KelsonV

activitypub group reshared this.

in reply to felix

For sure. Agree. But I'd like to be able to opt in to such a filter. Imagine if Firefox or Chrome decided what websites are OK for millions of users and what are not. This is a slippery slope, and they may even be doing that to a certain degree.

activitypub group reshared this.

in reply to Tio

@liaizon @grishka @maikek The point of moderation isn't to hide posts that you dislike but to protect your users from actively harmful people.
See @SocialCoop's code of conduct for example: https://wiki.social.coop/rules-and-bylaws/Code-of-conduct.html

It wouldn't make sense to argue about whether letting your users be harassed or threatened is good or not.
in reply to ged

In addition to what others are saying, there's also the perspective of the instance owner. Say, I have a fedi server with spare capacity. Out of the kindness of my heart I open it to others.

Am I justified in allowing only people who adhere to my CoC? I think I am. That automatically gives me the burden of monitoring and moderating, which grows with the number of users.

I admire your open-minded admin approach, but it is not for everyone.

activitypub group reshared this.

in reply to smallcircles (Humane Tech Now)

It doesn't become censorship that easily, esp. if ppl have the choice to go elsewhere to raise their voice, while still being part of the #fediverse

Compare to the real world.. Say, I organized an interest group talking about dogs, with many non-members following + reacting to the discussion.

Suddenly some guys join and start preaching a religious suicide cult. Or maybe just 'cats are better'. As organizer I'd say "Get lost, preach somewhere else". Censorship?

activitypub group reshared this.

in reply to smallcircles (Humane Tech Now)

I am all for censorship, if it is transparent. Ppl who do not adhere to the CoC on an instance, e.g. harass others or post illegal, insulting content, should be warned off and, if they don't stop, kicked out immediately.
There are enough "freeze peach" places where they can troll each other. 1/2

activitypub group reshared this.

in reply to Maike

Remember the barman? ... "you have to nip it in the bud immediately. These guys come in and it's always a nice, polite one. ... And then they become a regular and after a while they bring a friend. And that dude is cool too.
And then THEY bring friends and the friends bring friends and they stop being cool, and then you realize, oh shit, this is a Nazi bar now. And it's too late ....
https://www.reddit.com/r/TalesFromYourServer/comments/hsiisw/kicking_a_nazi_out_as_soon_as_they_walk_in/
2/2

activitypub group reshared this.

in reply to Maike

I don't think there's such a thing as transparent censorship. But also, I don't think Americans live with the media and the memory of elders who lived under Nazi occupation, with roundups and Jewish people parked in an overcrowded stadium, and survivors saying they remembered the smell while watching the movie.
Americans probably live with the media and the memory of elders telling how they won the war.

Anyway, protecting your users from abuse, manipulation, coercion, violence, threats of violence/doxxing, or harassment isn't censorship. I repeat myself, but moderation isn't about suspending users who have an opinion you don't like. Just like with this Reddit post, it's about suspending users who degrade everyone's experience or may turn it into something that doesn't justify the maintenance costs.
in reply to Tio

This is the model that ran on the federated indymedia network for 10 years. This model's limitations were one of the things that ended the project with ripping and tearing...

We take a fresh approach to this by allowing content to flow to where it is wanted - "news" is not removed, it is simply moved. At the #OMN we try, in this, to build on the past indymedia experience #indymediaback


in reply to Tio

The challenge with moderation is that disrupting communication often scales better than individual blocking.

In the Freenet project (where centralized moderation simply is no option) the answer was to propagate blocking between users in a transparent way. That way blocking disrupters scales better than disrupting. For more info see: https://www.draketo.de/english/freenet/friendly-communication-with-anonymity
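A toy sketch of that propagation idea follows (this is not the actual Freenet Web-of-Trust algorithm; all names and numbers are made up for illustration):

// Toy sketch of propagated blocking: each user rates others, and my effective
// score for a stranger is derived from the ratings given by people I already
// trust, weighted by how much I trust them. Blocking thus scales with the
// community instead of each person having to block every disrupter alone.
type UserId = string;

// rater -> (rated -> score in [-1, 1]); negative means "block/disrupter"
const ratings = new Map<UserId, Map<UserId, number>>();

function rate(rater: UserId, rated: UserId, score: number): void {
  if (!ratings.has(rater)) ratings.set(rater, new Map());
  ratings.get(rater)!.set(rated, score);
}

function effectiveScore(me: UserId, other: UserId): number {
  const mine = ratings.get(me) ?? new Map<UserId, number>();
  const direct = mine.get(other);
  if (direct !== undefined) return direct; // my own rating always wins
  let weighted = 0;
  let totalWeight = 0;
  for (const [friend, trust] of mine) {
    if (trust <= 0) continue; // only people I trust influence my view
    const theirRating = ratings.get(friend)?.get(other);
    if (theirRating === undefined) continue;
    weighted += trust * theirRating;
    totalWeight += trust;
  }
  return totalWeight > 0 ? weighted / totalWeight : 0; // 0 = no opinion yet
}

// Example: Alice trusts Bob; Bob blocks a spammer; Alice's view inherits that,
// while anyone who does not trust Bob is unaffected.
rate("alice", "bob", 0.8);
rate("bob", "spammer", -1);
console.log(effectiveScore("alice", "spammer")); // negative -> hidden for Alice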

activitypub group reshared this.

in reply to Arne Babenhauserheide

Yes, this is interesting, and I think it matches the first part of the Lemmy post about Federating Moderation.

This allows people to get insight into metrics surrounding moderation actions, while still letting each and every fedizen make their own individual decision whether or not to take action themselves.

activitypub group reshared this.

in reply to Arne Babenhauserheide

(the prototype implementation at the end is built in a way that would be suitable for federation, because it can work with a shared database that just has different entry points for getting your personal view of the trust-graph. It is far from finished, though)

activitypub group reshared this.

in reply to Arne Babenhauserheide

Are you on Lemmy, or SocialHub? Would be great to collect the info there, in more forum-like conversation threads.

activitypub group reshared this.

in reply to smallcircles (Humane Tech Now)

In the #FediverseFutures initiative I am aiming for more creative brainstorming processes to be unleashed and collected for anyone to jump in on, if it has their passion.

I am not able, time-wise, to follow up on all of the topics I start, nor do I have concrete implementation plans for them (though sometimes I do).

The Lemmy and SocialHub spaces serve as idea archives in that way. Stuff waiting to be elaborated further.

activitypub group reshared this.

in reply to smallcircles (Humane Tech Now)

I’m not there, no. Feel free to copy over my posts here. I hope they help people tackle these issues, because global-scale moderation without centralized control is one of the huge tasks ahead of us — a task that was mostly ignored in the Clearnet (there were underpaid moderators to burn out after all) but tackled within the Freenet Project more than a decade ago.

If people have questions about the math for scaling, it would be great if you could point them to me.


in reply to Arne Babenhauserheide

You should have a look at the SocialHub topic. It contains two links to other topics where researchers - one being @robertwgehl - announce they are investigating Moderation in decentralized (fedi) contexts.

Your input may be invaluable to them. Here's the #SocialHub link:

https://socialhub.activitypub.rocks/t/federated-moderation-towards-delegated-moderation/1580

activitypub group reshared this.

Unknown parent

Tio
> Well, it's still harassment the first time.
True. That's how you detect it: by "seeing" it. But I'd much rather let the user have the power to block the harasser than me, the instance owner, doing that. I cannot solve all of these disputes + as said above, it gives me too much power and I don't want to abuse it.

activitypub group reshared this.

in reply to Tio

You're assuming a 1-on-1 scenario.

Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people. Or to reveal someone's private information. As an admin, refusing to take action against a malicious user of your site puts the burden on *multiple* recipients of the abuse to deal with it themselves.

That's not humane.

activitypub group reshared this.

in reply to KelsonV

> Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people.
I would assume such situations are rare and not worth sacrificing freedom of expression overall. In my view at least. As said, emails can also be abused. And on the fediverse, if I block user X for being so "evil", then user X can make an account on another instance and keep on abusing people. The whack-a-mole game begins and I do not think we will win. But if it is easy for people to block others and apply all kinds of filters, then such situations should diminish.

activitypub group reshared this.

in reply to smallcircles (Humane Tech Now)

You mean like Zap? We've had this for a few years now, including the delegated moderators (which we've had much, much longer)... it doesn't *require* any extensions to activitypub, but it's nice to provide a notice that posts/comments might not show up immediately.

We used to have a rating service but it had some issues and we'll have to revisit that. For now we just let you figure out who you think you can trust.
in reply to Zap

that is very interesting and I didn't know that. Zap/zot are doing many great things. Is there documentation to refer to? I'd like to add to the Lemmy discussion.
in reply to smallcircles (Humane Tech Now)

Moderation is an option selected by the channel owner. The site admin is not involved. You can moderate everybody, anybody, or nobody. You can also grant admin access for your channel and content to anybody you wish. On our own platform this is automatic and connecting from elsewhere to your site as channel admin doesn't require any authentication interaction. There's an app called 'Guest Pass' if you want to give admin rights over your content to somebody on Mastodon (for instance) or just somebody with an email address.

There's also a quick configuration for moderated public groups since this is the most popular use case. In this configuration everybody that joins the group is moderated by default until/unless you decide otherwise.

Somewhere there's also a tool for sending the incoming moderation notifications to any email address or list of addresses you choose. But darned if I can find it right now.
in reply to Zap

I like this model, and it makes total sense (though the admin should have control over which channels are allowed, but that probably is the case).

Note that this also aligns with the "Community has no Boundary" paradigm I'm discussing on #SocialHub where instances are abstracted away, and communities are more like the intricate social structures you see in the real world:

https://socialhub.activitypub.rocks/t/standardizing-on-a-common-community-domain-as-ap-extension/1353

And can be extended with e.g. Governance Policies of various kinds:

https://socialhub.activitypub.rocks/t/what-would-a-fediverse-governance-body-look-like/1497/47
Unknown parent

Yes, you are quite right, and it is an important topic. Tackling the use cases of decentralized identity in proper, open-standards-based ways is something that the entire decentralization movement is eagerly awaiting.

There's work in this area on #fediverse in @zap + #zot

See:

> You have the right to a permanent internet identity which is not associated with what server you are currently using and cannot be taken away from you by anybody, ever.

https://zotlabs.org/page/zot/zot+about
@Zap

activitypub group reshared this.

in reply to smallcircles (Humane Tech Now)

I am not convinced server-less identity is something that can be solved in a reasonable way, honestly. Or at least, it will bring with it unavoidable problems, such as recovery being impossible. Humans and human judgement need to be in the loop at some point.

For federated moderation, I urge you to have a look at the early days of IRC, and what happened there.
@matro @activitypub @zap


in reply to smallcircles (Humane Tech Now)

Content warning: my opinions

in reply to pettter

Yes, you are right, similar responses are in the thread, and..

> Humans and human judgement need to be in the loop at some point.

.. is something that need not be taken away by any more federated mechanism. I think it is very important to keep this human aspect.

This, btw, is a strong point of the #fediverse, where there are many more moderators than in traditional social media (which requires algorithms to do the work, to scale tasks to billions of users).


in reply to smallcircles (Humane Tech Now)

Your chart is ready, and can be found here:

https://www.solipsys.co.uk/Chartodon/106062541939299515.svg

Things may have changed since I started compiling that, and some things may have been inaccessible.

The chart will eventually be deleted, so if you'd like to keep it, make sure you download a copy.

in reply to Byron Torres

Mess with the fabric of the Fediverse, and it will mess with you
in reply to Byron Torres

all good points. Thx for reply!

Moderation is too much out of sight of fedizens; thankless work. But it is vital for fedi to not turn into a toxic hellhole. It is fedi's USP.

By making it part of fedi (as vocab extension, not core standard) it gets the appreciation + visibility it deserves. Makes it easier to find mods / offer incentives to help.

Manual decisions / onboarding remain unchanged. Mods always need to follow CoCs.

No "upper class", implicit -> explicit.

activitypub group reshared this.

in reply to pettter

Where's a good place to read more about early days IRC and moderation?
