Hi #fediverse
Wanna brainstorm some what-ifs?
Federated Moderation: What if Moderation was an #activitypub extension and moderation actions would federate to ease the life of mods + admins?
Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?
Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?
https://lemmy.ml/post/60475
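A concrete way to picture the first what-if: a moderation action expressed as an ActivityStreams 2.0 activity that servers could federate. `Flag` is a real AS2 activity type already used for reports; the `mod:` extension context and terms below are hypothetical, purely to illustrate the idea.

```python
# Sketch of a federating moderation report as an AS2 document.
# "Flag" is a real ActivityStreams 2.0 type; the "mod:" namespace
# and its terms are invented here for illustration only.
import json

def build_moderation_report(actor, target, reason):
    """Build a hypothetical federated moderation report as an AS2 Flag."""
    return {
        "@context": [
            "https://www.w3.org/ns/activitystreams",
            {"mod": "https://example.org/ns/moderation#"},  # hypothetical extension
        ],
        "type": "Flag",
        "actor": actor,
        "object": target,
        "content": reason,
        "mod:visibility": "moderators-only",  # hypothetical extension term
    }

report = build_moderation_report(
    "https://instance-x.example/users/mod1",
    "https://instance-y.example/posts/123",
    "Spam / CoC violation",
)
print(json.dumps(report, indent=2))
```

Delivering such an activity to the moderators' inboxes of another instance would be ordinary ActivityPub server-to-server federation; only the vocabulary is new.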
Gregory
in reply to smallcircles (Humane Tech Now)
smallcircles (Humane Tech Now)
in reply to Gregory • • •Also there would be more visibility to this time-consuming and under-appreciated job.
Maike
in reply to smallcircles (Humane Tech Now)
smallcircles (Humane Tech Now)
in reply to Maike • • •In Delegated Moderation a mod of Instance X might be trusted to do work for Instance Y and their help can be invoked when needed.
In Moderation-as-a-Service any fedizen can offer to do moderation work. It is completely decoupled from instances. But in this model you would need some mechanism to know whether you can trust someone offering their help. A simple reputation system based on "I vouch for this mod" statements from fedizens might do at the start... dunno.
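The "I vouch for this mod" idea can be sketched very simply: trust is just a count of distinct vouches. Everything below (names, the threshold) is made up for illustration.

```python
# Minimal sketch of a vouch-based mod reputation: a mod is trusted once
# enough distinct fedizens vouch for them. Names/threshold are invented.
from collections import defaultdict

class VouchRegistry:
    def __init__(self, threshold=3):
        self.threshold = threshold
        self.vouches = defaultdict(set)  # mod -> set of vouching fedizens

    def vouch(self, fedizen, mod):
        self.vouches[mod].add(fedizen)

    def is_trusted(self, mod):
        # Distinct vouchers only; repeat vouches from one account don't count.
        return len(self.vouches[mod]) >= self.threshold

registry = VouchRegistry(threshold=2)
registry.vouch("alice@instance-x", "mod@instance-y")
registry.vouch("bob@instance-z", "mod@instance-y")
print(registry.is_trusted("mod@instance-y"))  # True
```

A real system would also need Sybil resistance (one person creating many vouching accounts), which is exactly where web-of-trust ideas come in.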
Maike
in reply to smallcircles (Humane Tech Now)
smallcircles (Humane Tech Now)
in reply to Maike • • •For an instance it'd mean adhering to the Code of Conduct, but - as you say - probably also some more specific moderation guidelines need to be adhered to.
Tio
in reply to smallcircles (Humane Tech Now) • •Moderation quickly transforms into censorship. Instance admins should not decide for the rest of the users on their instance what is good or bad content to see.
smallcircles (Humane Tech Now), Esther Payne 🏴, LPS, Brett Sheffield (he/him), artyr3 and Open Science ✅ like this.
activitypub group, smallcircles (Humane Tech Now), 里瓣来悲茶 and Esther Payne 🏴 reshared this.
KelsonV
in reply to Tio
activitypub group reshared this.
Tio
in reply to KelsonV
Open Science ✅ likes this.
activitypub group reshared this.
Tio
in reply to KelsonV
felix likes this.
activitypub group reshared this.
felix
in reply to Tio • • •@humanetech @activitypub @KelsonV
activitypub group reshared this.
Tio
in reply to felix
Open Science ✅ likes this.
activitypub group reshared this.
ged
in reply to Tio • • •See @SocialCoop's code of conduct for example: https://wiki.social.coop/rules-and-bylaws/Code-of-conduct.html
It wouldn't make sense to argue about whether letting your users be harassed or threatened is acceptable.
smallcircles (Humane Tech Now)
in reply to ged • • •Am I justified in allowing only people who adhere to my CoC? I think I am. That automatically gives me the burden to monitor and moderate, which grows with the number of users.
I admire your open-minded admin approach, but it is not for everyone.
activitypub group reshared this.
smallcircles (Humane Tech Now)
in reply to smallcircles (Humane Tech Now) • • •Compare to the real world. Say I organized an interest group talking about dogs, with many non-members following and reacting to the discussion.
Suddenly some guys join and start preaching a religious suicide cult. Or maybe just 'cats are better'. As organizer I'd say "Get lost, preach somewhere else". Censorship?
activitypub group reshared this.
Maike
in reply to smallcircles (Humane Tech Now) • • •There are enough "freeze peach" places where they can troll each other. 1/2
activitypub group reshared this.
Maike
in reply to Maike • • •And then THEY bring friends and the friends bring friends and they stop being cool and then you realize, oh shit, this is a Nazi bar now. And it's too late ....
https://www.reddit.com/r/TalesFromYourServer/comments/hsiisw/kicking_a_nazi_out_as_soon_as_they_walk_in/
2/2
activitypub group reshared this.
ged
in reply to Maike • • •Americans probably live with media and the memory of elders telling how they won the war.
Anyway, protecting your users from abuse, manipulation, coercion, violence, threats of violence/doxxing, or harassment isn't censorship. I repeat myself, but moderation isn't about suspending users who have an opinion you don't like. Just like with this Reddit post, it's about suspending users who degrade everyone's experience or may turn it into something that doesn't justify the maintenance costs.
vagabond
in reply to Tio • • •We take a fresh approach to this by allowing content to flow to where it is wanted - "news" is not removed, it is simply moved. At the #OMN we try to build on the past indymedia experience #indymediaback
reshared this
activitypub group and smallcircles (Humane Tech Now) reshared this.
Arne Babenhauserheide
in reply to Tio • • •In the Freenet project (where centralized moderation simply is no option) the answer was to propagate blocking between users in a transparent way. That way blocking disrupters scales better than disrupting. For more info see: https://www.draketo.de/english/freenet/friendly-communication-with-anonymity
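A toy version of such transparently propagated blocking: each user blocks locally, and blocks from trusted peers are weighted into an effective score. The scoring rule below is invented for illustration and is not Freenet's actual algorithm.

```python
# Toy model of propagated blocking: a user's own block is absolute;
# blocks by trusted peers contribute proportionally to how much the
# viewer trusts each peer. All data and weights here are made up.

def effective_block_score(viewer, target, local_blocks, trust):
    """Combine the viewer's own block with weighted blocks from trusted peers."""
    if target in local_blocks.get(viewer, set()):
        return 1.0  # own block always wins
    score = 0.0
    for peer, weight in trust.get(viewer, {}).items():
        if target in local_blocks.get(peer, set()):
            score += weight  # peers' blocks count, scaled by trust
    return min(score, 1.0)

local_blocks = {"alice": {"spammer"}, "bob": {"spammer"}}
trust = {"carol": {"alice": 0.4, "bob": 0.4}}  # carol trusts alice and bob

# Carol never blocked "spammer" herself, but her trusted peers did:
print(effective_block_score("carol", "spammer", local_blocks, trust))  # 0.8
```

The key property is the one Arne describes: blocking work done by one user benefits everyone who trusts them, so defence scales with the community rather than per victim.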
activitypub group reshared this.
smallcircles (Humane Tech Now)
in reply to Arne Babenhauserheide • • •This allows people to get insight into the metrics surrounding moderation actions, while each and every fedizen can still make their own individual decision whether or not to take action themselves.
activitypub group reshared this.
Arne Babenhauserheide
in reply to smallcircles (Humane Tech Now) • • •Then the optimizations needed so this scales to arbitrary size: https://www.draketo.de/english/freenet/deterministic-load-decentralized-spam-filter
Here’s some data if you want to test algorithms: https://figshare.com/articles/dataset/The_Freenet_social_trust_graph_extracted_from_the_Web_of_Trust/4725664
And some starting code of a more generic prototype for faster testing: https://hg.sr.ht/~arnebab/wispwot/
reshared this
activitypub group and smallcircles (Humane Tech Now) reshared this.
Arne Babenhauserheide
in reply to Arne Babenhauserheide
activitypub group reshared this.
smallcircles (Humane Tech Now)
in reply to Arne Babenhauserheide
activitypub group reshared this.
smallcircles (Humane Tech Now)
in reply to smallcircles (Humane Tech Now) • • •Time-wise I am not able to follow up on all of the topics I start, nor do I have concrete implementation plans for them (though sometimes I do).
The Lemmy and SocialHub spaces serve as idea archives in that way. Stuff waiting to be elaborated further.
activitypub group reshared this.
Arne Babenhauserheide
in reply to smallcircles (Humane Tech Now) • • •If people have questions about the math for scaling, it would be great if you could point them to me.
reshared this
activitypub group and smallcircles (Humane Tech Now) reshared this.
smallcircles (Humane Tech Now)
in reply to Arne Babenhauserheide • • •Your input may be invaluable to them. Here's the #SocialHub link:
https://socialhub.activitypub.rocks/t/federated-moderation-towards-delegated-moderation/1580
activitypub group reshared this.
Arne Babenhauserheide
in reply to smallcircles (Humane Tech Now)
activitypub group reshared this.
Tio
Unknown parent
Open Science ✅ likes this.
activitypub group reshared this.
KelsonV
in reply to Tio • • •Consider someone who is using your server to send repeated insults, unwanted sexual advances, death threats, etc. to multiple other people. Or to reveal someone's private information. As an admin, refusing to take action against a malicious user of your site puts the burden on *multiple* recipients of the abuse to deal with it themselves.
That's not humane.
activitypub group reshared this.
Tio
in reply to KelsonV
Open Science ✅ likes this.
activitypub group reshared this.
Zap
in reply to smallcircles (Humane Tech Now) • • •We used to have a rating service but it had some issues and we'll have to revisit that. For now we just let you figure out who you think you can trust.
smallcircles (Humane Tech Now) reshared this.
smallcircles (Humane Tech Now)
in reply to Zap
Zap
in reply to smallcircles (Humane Tech Now) • • •There's also a quick configuration for moderated public groups since this is the most popular use case. In this configuration everybody that joins the group is moderated by default until/unless you decide otherwise.
Somewhere there's also a tool for sending the incoming moderation notifications to any email address or list of addresses you choose. But darned if I can find it right now.
smallcircles (Humane Tech Now)
in reply to Zap • • •Note that this also aligns with the "Community has no Boundary" paradigm I'm discussing on #SocialHub where instances are abstracted away, and communities are more like the intricate social structures you see in the real world:
https://socialhub.activitypub.rocks/t/standardizing-on-a-common-community-domain-as-ap-extension/1353
And can be extended with e.g. Governance Policies of various kinds:
https://socialhub.activitypub.rocks/t/what-would-a-fediverse-governance-body-look-like/1497/47
smallcircles (Humane Tech Now)
Unknown parent • • •There's work in this area on #fediverse in @zap + #zot
See:
> You have the right to a permanent internet identity which is not associated with what server you are currently using and cannot be taken away from you by anybody, ever.
https://zotlabs.org/page/zot/zot+about
activitypub group reshared this.
pettter
in reply to smallcircles (Humane Tech Now) • • •For federated moderation, I urge you to have a look at the early days of IRC, and what happened there.
@matro @activitypub @zap
reshared this
activitypub group and smallcircles (Humane Tech Now) reshared this.
Holly Bnuuy Lotor :verified:
in reply to smallcircles (Humane Tech Now) • • •Content warning: my opinions
I don't know about the logistics of delegating moderation services - I think it's best if users just choose a server with trustworthy moderation, and migrate if that moderation proves to be bad or otherwise insufficient - but I do largely think that federating moderation actions to some extent would be a good thing.
I feel like there's not enough information easily and readily accessible to an admin about the instances that their server talks to. Being able to see at a glance which instances I trust block a given server (or explicitly allow, in the case of allowlists), maybe a sampling of a server's posts as well as the server's age and terms of use, and notifications for when an instance discovers a new server or an instance I trust has just blocked a server, would make federation a much more comfortable experience.
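The "see at a glance which instances I trust block a given server" idea reduces to aggregating trusted peers' blocklists. A minimal sketch, with placeholder domain names:

```python
# Sketch of an admin dashboard helper: given the blocklists of instances
# the admin trusts, report which of them block a candidate server and
# what fraction that is. All domains below are placeholders.

def block_summary(candidate, trusted_blocklists):
    """Return (sorted list of trusted instances blocking `candidate`, fraction)."""
    blockers = sorted(inst for inst, blocked in trusted_blocklists.items()
                      if candidate in blocked)
    return blockers, len(blockers) / max(len(trusted_blocklists), 1)

trusted_blocklists = {
    "friendly.example": {"bad.example", "worse.example"},
    "solid.example": {"bad.example"},
    "chill.example": set(),
}

blockers, fraction = block_summary("bad.example", trusted_blocklists)
print(blockers)  # blockers = ['friendly.example', 'solid.example']
```

In practice the blocklists would have to be fetched or federated from the trusted instances, which is where this idea meets the Federated Moderation proposal above.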
smallcircles (Humane Tech Now)
in reply to pettter • • •> Humans and human judgement need to be in the loop at some point.
.. is something that need not be taken away in a more federated mechanism. I think it is very important to keep this human aspect.
This, btw, is a strong point of the #fediverse, where there are many more moderators than in traditional social media (which requires algorithms to do the work, to scale tasks to billions of users).
reshared this
activitypub group reshared this.
smallcircles (Humane Tech Now)
in reply to smallcircles (Humane Tech Now)
Chartodon
in reply to smallcircles (Humane Tech Now) • • •https://www.solipsys.co.uk/Chartodon/106062541939299515.svg
Things may have changed since I started compiling that, and some things may have been inaccessible.
The chart will eventually be deleted, so if you'd like to keep it, make sure you download a copy.
Byron Torres
in reply to smallcircles (Humane Tech Now) • • •My thoughts: bad ideas as I see it.
> Federated Moderation: What if Moderation was an #activitypub extension and moderation actions would federate to ease the life of mods + admins?
It already exists. There is no need to add arbitrary power structures meta-types to a protocol dedicated to communication. Implementers already have trouble interpreting the semantics of AS types and AP behaviour. Moderation belongs to a different layer.
https://www.w3.org/TR/activitystreams-vocabulary/#dfn-block
> Delegated Moderation: What if moderators weren't bound to instances, and could just jump in on another instance to help do the work?
Then I cannot trust a single instance. I want to trust a single, stable, mostly immutable group of moderators from my instance. I don't want some "moderation upper class" overseeing the network and trading positions, most especially with incentive (reward is not intrinsic to motivation, self-esteem or character). "Instances" currently provide clear boundaries within the network, and the freedom rests on users to reside wherever they wish. This current paradigm is optimal, and deviations as I see it are only for the worse.
> Moderation-as-a-Service: What if mods provided their services via federated @activitypub models, gained trust and reputation based on your feedback?
This doesn't make sense. There is no "one true" moderation policy to rate against. A specific user will want a specific moderation policy, and will choose an instance whose moderation reflects it. This is and should continue to be an individual's choice. "Reputation" is reserved for Wikipedia-style structures with top-down hierarchy.
A lot of this is off the cuff, and I know you make more detailed and nuanced points elsewhere, so feel free to agree or disagree, clarify, correct, etc.
reshared this
activitypub group and smallcircles (Humane Tech Now) reshared this.
Byron Torres
in reply to Byron Torres
smallcircles (Humane Tech Now)
in reply to Byron Torres • • •Moderation is too much out of sight of fedizens - thankless work. But it is vital for fedi not to turn into a toxic hellhole. It is fedi's USP.
By making it part of fedi (as vocab extension, not core standard) it gets the appreciation + visibility it deserves. Makes it easier to find mods / offer incentives to help.
Manual decisions / onboarding remain unchanged. Mods need to follow CoC's always.
No "upper class", implicit -> explicit.
activitypub group reshared this.
smallcircles (Humane Tech Now)
in reply to smallcircles (Humane Tech Now)
reynir
in reply to pettter • • •reshared this
smallcircles (Humane Tech Now) reshared this.