The social impacts of moderation & censorship
This post is related to these previous discussions we've had about moderation:
social.trom.tf/display/dbc8dc4…
social.trom.tf/display/dbc8dc4…
In those posts we shared some ideas for improving moderation on the fediverse, looking at it from a technical point of view. But after seeing some of the responses to my last post, I feel I should write about its social impacts, because for some people moderation is a sensitive topic, and TBH I don't think many people understand how complex this topic actually is.
Types of moderation issues
When people complain about bad moderation, it's usually one of two types:
- Moderation is not strict enough, and there is still a lot of bad content that needs to be removed.
- Moderation is too strict and content that shouldn't have been banned is being banned, leading to censorship.
Types of social media users
Many people disagree on how strict moderation should be, and what I noticed is that most of these disagreements come from 2 types of users who want different things from social media:
Type 1 users
These are people who don't want to see or interact with others who disagree with them. They might be doing this for 2 reasons:
- They want to create a safe space for themselves. For example, if someone's getting bullied online due to their political beliefs or race or whatever, then they might want to block out everyone who might be a potential threat. These include vulnerable people, like those going through depression, who might need stricter blocking than others.
- They don't want to hear from people who disagree with them. For example, someone might wanna block out all anti-mask people because they don't agree with them and don't want to argue about it.
Type 2 users
These are users who want to reach a lot of people via social media, including ones who disagree with them. They might be doing this for 2 reasons:
- They want others to hear them. For example, an environmentalist would want to be heard so that more people become aware of environmental issues.
- They want to hear what others are saying and change/improve their own views & knowledge about the world.
So Type 1 users generally want strict blocking while Type 2 users prefer less strict blocking. Keep this in mind.
What content should get blocked?
So in the previous post I made, some people objected to the idea of a "Soft block", and this is their reasoning:
If my friend is on a poorly moderated instance, I want to get them off of that instance, not make an exception to my block just for them. With your system, there is no reason for people to leave bad instances because they can still interact with their friends, and there is no pressure on admins to actually moderate their instances, either. You make blocks completely toothless by having these exceptions.
I actually thought blocks were used for creating a safe space for the users of that instance, but reading this makes me realize that some people also see it as a way to punish other instance admins for not moderating properly. There are of course a lot of issues with this mindset, but in order to understand it you have to know a few things.
What should an admin block in order to not become a "poorly moderated instance"? I'm simply asking which type of content should be blocked. The usual answer to this question is to block all socially unacceptable content. So if someone's being racist towards people, then that person should be blocked because that's not socially acceptable. So far so good. But if you look at the history of what was considered "socially acceptable", it gets really confusing. In the past it was socially unacceptable for women to vote, but at the same time slavery was considered okay! And even in just the past few decades, a lot has changed in what we consider socially acceptable. There are still countries today where homosexuality is considered socially unacceptable, and people are having to fight for their rights. So what people consider socially acceptable changes from time to time and place to place. And I'm sure this will change in the future too, because some things that are considered normal today we'll later realize we were wrong about. This is not a wild claim; we've been wrong many times in the past, and it'll happen again.
If the Fediverse existed in the past - social norms change everything
To explain this we'd have to talk about the past and what people had to go through to change the norm, because if they hadn't changed it, we'd still be living in a society where sexism and homophobia are the norm. What would actually have happened if the fediverse had existed back when sexism was the norm? Moderators would be blocking feminist instances from federating, because by their own sense of morality that seems like the right thing to do. But the interesting thing is: how would the users of a feminist instance react to getting blocked? The Type 1 users of the instance will say:
"If they're blocking us for being feminist then we don't need to talk to them anymore, they're a bunch of sexists anyway. And users who really do care about feminism will move away from that instance. In fact, our admin should have blocked that instance way before they blocked us, because some of our members have been harassed by the sexists on that instance"
So Type 1 users are not directly affected by the censorship, and they criticize their admin for not blocking the big instance long ago. But Type 2 users will be quite upset over this, because they actually want more people to read the posts that explain the discrimination they face in society and to spread feminist ideas as much as possible, and getting blocked by big instances means they lose a lot of their followers and limits their exposure to others. So now if these Type 2 users want to get their followers back, there are 2 ways for them to do so:
- They can try to tell their followers to move to a different instance, maybe to the feminist instance they're currently on. And this seems like the right thing to do, because they were unfairly blocked. But let me ask you this: if your instance blocked mine for unfair reasons like this, would you be willing to switch instances for me? TBH I don't think anyone would do that for me; maybe some of my very close friends would, but that's it. So yeah, switching instances is not as easy as people make it sound. If it were, this wouldn't be an issue.
- Type 2 users can't get all of their followers to move for them, so their only option is to move themselves to a different instance. This is not a good choice because it sends the message that what the other admin is doing is okay, which it isn't. So keep that in mind. Now if their aim is to get their followers back, they'll have to move to an instance that federates with the big instance that blocked them. And by now we know that all of the feminist instances (if any existed) would already be blocked, so they'd have to go with a generic instance. But after this move happens, it puts the admin of that generic instance in a hard position. Because if blocks are used as a way to punish instance admins for not moderating properly, then this admin will of course be scared of getting blocked by big instances. Remember that feminism was considered socially unacceptable at the time, so if the admin doesn't kick out these feminists, their instance will be considered a "bad instance" and will get blocked out of fedi.
Social structure
If you look at the above example, you'll see that Type 1 users don't have a problem with censorship, even when someone's blocking them. This is because they don't really want their posts reaching people who disagree with them; all they want is a safe space devoid of haters who harass them, which is understandable when you're part of a marginalized group. Because of this, Type 1 users don't see any issue with strict blocking: they're not directly affected by censorship, so they might be unaware of its long-term consequences for our society. On the other hand, Type 2 users such as activists want their posts to reach as many people as possible. They want their voice to be heard, but as a consequence they can also get a lot of harassment from haters, usually more than Type 1 users get, because of their extended reach. So Type 2 users can actually empathize with Type 1 users too. And the moderation tools available to users are very helpful for both groups.
Type 2 users play an important role in changing social norms, because you can't create much of a social impact without reaching people who disagree with you. All activists are Type 2 users.
Now instead of focusing on the feminist instance, let's look at the social structure of the entire fediverse network:
- Type 1 users in support of feminism - These users will always be on feminist instances, and they encourage their admins to block out all the other instances that aren't in support of feminism.
- Type 2 users in support of feminism - These users don't want their admins to block out too many instances because it'll reduce their ability to reach more people and spread feminist ideas.
- Type 1 users not in support of feminism - These users will stay away from feminist instances and they encourage their admins to block out all the feminist instances.
- Type 2 users not in support of feminism - These users don't want their admins to block out too many instances because it'll reduce their ability to reach more people and spread sexist ideas.
So when Type 1 users ask for stricter blocking, they could be trying to create a safe space for feminists, but they could also be trying to censor feminist content and create an echo chamber for sexists. And when Type 2 users ask for less strict blocking, they could be trying to spread feminist ideas, but they could also be trying to spread sexist ideas. This is why moderation is such a complicated issue: there are 4 types of users who all want different things, and an admin will never be able to satisfy them all. When that big instance blocked the feminist instance, the decision was in favor of Type 1 sexist users; all of the other groups would be annoyed by it, except maybe Type 1 feminist users, because they don't care about censorship. Long before that, when the admin of the feminist instance chose not to block the big instance, that decision was in favor of Type 2 feminist users, and the Type 1 feminist users on the instance were annoyed because they prefer strict blocking.
The real problem
Maybe you got confused by the complex social structure of it all, but don't let that distract you from the real problem here: if all of these users could choose for themselves what type of content they want to block, this entire issue wouldn't exist. You see, the real issue is that admins are making the decision for all users, and by doing that they are basically imposing their own sense of morality on others. Usually what ends up happening is that most admins go along with the social norm, because that's how they can satisfy the largest number of users. And so if sexism were the norm at the time, feminism would be considered socially unacceptable and would get blocked.
All of the ideas I've shared before, like the concept of "Soft blocking" and the use of "Blocklists", were attempts at solving the real problem here: giving the choice to the users themselves, letting them choose what they want to block. If soft blocking was the default, those Type 2 feminist users would've been able to get their followers back without having to switch instances. And if blocklists were a thing, those Type 1 feminists would've been able to protect themselves from harassment by subscribing to a blocklist that gets rid of all sexist users. Giving more choice to users is the sane way to solve this problem, because users know what they want better than their admins do.

Letting users choose for themselves is also good for moderators/admins, because then they don't have to make decisions on behalf of everyone and worry about offending people. If some users on an instance want stricter blocking but others want less strict blocking, currently the only thing moderators can do is choose one and risk offending the others. But if blocklists were a thing, the admin could recommend that users apply a stricter or less strict blocklist depending on what they want. And if such a blocklist doesn't exist, the admin can make one themselves; remember that anyone can easily make a blocklist and publish it online for others to use. If you look at how the blocklists for ad-blockers are made today, you'll see they're maintained by many people and anyone can contribute to the list. And because most of them are managed in git repos, every decision made along the way is public and transparent. Can you expect the same level of quality blocking when a few admins or moderators are doing all the work for you? Blocklists are just better in so many ways.
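To make the blocklist idea more concrete, here's a minimal sketch of how a user-applied, community-maintained blocklist could work. The one-domain-per-line format with "#" comments is an assumption borrowed from ad-blocker filter lists, not any real fediverse standard, and all names here are made up for illustration:

```python
# Sketch: subscribing to a shared blocklist and filtering authors with it.
# The list format (one instance domain per line, '#' starts a comment)
# is hypothetical, modeled on ad-blocker filter lists.

def parse_blocklist(text):
    """Parse a blocklist: one instance domain per line, '#' starts a comment."""
    domains = set()
    for line in text.splitlines():
        entry = line.split("#", 1)[0].strip()
        if entry:
            domains.add(entry.lower())
    return domains

def is_blocked(author, blocked_domains):
    """Check a handle like 'user@instance.example' against the list."""
    domain = author.rsplit("@", 1)[-1].lower()
    return domain in blocked_domains

# A user subscribes to whichever list matches how strict they want to be,
# exactly like choosing ad-blocker filter lists.
community_list = """
# strict-blocklist.txt (hypothetical), maintained in a public git repo
spam.example      # floods the network with ads
harass.example    # coordinated harassment campaigns
"""

blocked = parse_blocklist(community_list)
print(is_blocked("alice@spam.example", blocked))    # True
print(is_blocked("bob@friendly.example", blocked))  # False
```

Because the list is plain text in a public repo, anyone can audit it, propose additions, or fork a stricter/looser variant, which is exactly the transparency argument made above.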
A real world example
I've talked a lot about the past, but now I want to give you an ongoing real-world example. @Cleo of Topless Topics is someone who has had to deal with a lot of censorship because she questions social norms. She is trying to normalize non-sexual nudity, and because nudity is considered "socially unacceptable" in our society, she gets censored everywhere. Speaking of that, did you know that men were once arrested for baring their chests on the beach? Here is an article that explains it all and how men fought for the right to go topless - washingtonpost.com/history/201…
Today if we go to the beach we see lots of topless men, and it's not considered socially unacceptable or looked at in a sexual way. But if a woman shows her bare chest, maybe even to feed her baby, then that is not considered okay. This is the issue that she's trying to address.
But of course, she gets blocked everywhere, so much so that she even has a page on her website to showcase screenshots of every time she got banned by different platforms - toplesstopics.org/banned/
Going through that page you'll understand that her content is being disproportionately censored compared to other nude content. One of the reasons this happens is that her content gets falsely reported a lot by what she describes as her "Dedicated Cabal of Haters". You see, she's not only going through censorship, she also faces a lot of harassment from haters, all because she tried to question a social norm. And this is interesting, because the job of moderators is to stop people from harassing others, but in this case her haters are making use of moderators to harass her. A perfect example of how moderation can be a double-edged sword.
She didn't want to upload her videos to porn sites because she isn't making porn or any sexual content. And if you look at federated platforms like Peertube, the instances that allow nudity also allow all kinds of crap like pseudoscience, conspiracies, etc., so that's not a nice place to be. But finally she found @Tio, who had a well maintained Peertube instance devoid of all the crap, and he agreed to host her videos there. So here's her channel now - videos.trom.tf/c/toplesstopics
But now if we were to spread this mindset that we should punish other instance admins for not moderating properly, then this peertube instance might get blocked simply because there is one person there that questions social norms. So is it a "poorly moderated instance" just because it has nude videos? Now you see how subjective these things are, because different people will have different opinions about what type of content should get blocked, so allowing users to choose for themselves is the best we can do here.
And BTW, I also wanted to point out that content warnings are not a solution here. The whole point is to normalize non-sexual nudity, so putting content warnings on it works against that goal. It's like asking users to put content warnings when writing about homosexuality; it defeats the purpose. On her Mastodon account she sometimes puts content warnings like "heinous nonsexual female nipples" to show how ridiculous this is.
Addendum
After publishing this post, the follow-up conversations made me realize that I missed one important point. While I've made various suggestions to improve moderation, like the idea of "Soft blocking" and of "Blocklists", I forgot about a much simpler feature that already exists on most fediverse platforms. If you're worried about things like online bullying or just want stricter protection, one thing you can do is change your post visibility to "followers only". This, along with manual approval of followers, means that no one can see or interact with your posts unless you manually approve them as a follower. You can find more info on how to do that here - mstdn.social/@feditips/1072085…
Platforms like Friendica also allow you to put people into different groups, and when you write a post you can make it visible only to the members of a group. This is very useful if you have a lot of followers but still want to restrict who can see some of your posts.
So instead of relying on moderators to filter out the bad guys from the entire network, you're just filtering in the good guys, and only they can see your posts. This is of course much stricter than relying on moderators or a blocklist, and can be very useful for those who want that level of protection.
Currently this is an opt-in feature, but it would be nice if the admin of an instance could make this setting the default for all their users, so that all posts are private by default unless you opt out. If an admin feels like their users need stricter protection, having this option can be very helpful.
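The "filtering in" logic above can be sketched in a few lines. This is a minimal illustration, not any real platform's API; the data shapes and names are made up:

```python
# Sketch of followers-only visibility with manual follower approval:
# instead of filtering out bad actors network-wide, only explicitly
# approved people get in. Names and structures are illustrative.

def can_view(post, viewer, approved_followers):
    """Public posts are visible to everyone; followers-only posts are
    visible only to the author and their manually approved followers."""
    if post["visibility"] == "public":
        return True
    return viewer == post["author"] or viewer in approved_followers

post = {"author": "cleo", "visibility": "followers"}
approved = {"alice"}  # followers the author manually approved

print(can_view(post, "cleo", approved))      # True  (the author)
print(can_view(post, "alice", approved))     # True  (approved follower)
print(can_view(post, "stranger", approved))  # False (filtered out by default)
```

An instance-wide "private by default" setting would simply amount to new posts starting with `visibility` set to `"followers"` instead of `"public"`, with users able to opt out per post.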
Tio
in reply to Rokosun
Really great post! I am happy to see you express yourself in more detail and give a stronger input. You made many great points.

Here's the thing: when I heard about the Fediverse I thought "Oh that's great from two main perspectives". One: you can create your own tiny island and control what's on it, and two: it is decentralized, so you can connect your island with others.

I thought that was surely enough control to have your own fancy island with the people you want there. Create an instance and then you're the master and boss of whoever makes an account there, whoever comes and lives on your island. Good job, boss! You feel that power? Sweet, ha!?

But then I realized these island owners want to control what the inhabitants watch on their own TVs in their own homes. And they start to ban access to outside islands/information. That, to me, is a dangerous attitude, despite the island owners arguing that they do this to protect the inhabitants. Hmm, I have a taste of a Chinese accent in my mouth now... wonder why...
The instance admins can control who creates an account there and remove anyone who is not welcome on the platform. Users also have tremendous power and can block anyone. Now please, stop at that if you want a sane network.
They only have 3 main arguments for their "moderating" attitude when it comes to the other instances:
1. The Global Feed. Humans from other instances have their posts displayed on MY OWN instance. Publicly!
Ok! Then why not have a feature that disables the global timelines for your own instance? Even better, simply keep others out of the global timeline display and promote whoever you want, so the global timeline is a filtered one. And yet your users still have the freedom to access any other content and other users, however they find it.
2. My users are interacting with the outside content, which means that outside content is shown privately for those users ON MY INSTANCE, and thus I temporarily store that filthy content on my own server!
Since this content is private to those engaging with it, and temporary, it is a sane compromise to accept. Search engines, crawlers, whatever: they all do this. It is the internet after all. Make any outside content unlisted/private if you wish, so it can only be seen by the users of your instance who interact with it. Sounds like a simple solution to me.
3. My users are highly sensitive and they cannot take care of themselves! They NEED ME, the admin, to block those trolls and ugly human souls!
Although it is nice when some admins want to care about their fellow users, it is as bad as protecting your own child from ever falling, ever being upset, ever being contradicted...helicopter parents are not the way to go. It has repercussions.
Users already have a ton of tools they can use to protect themselves from trolls. In Friendica, for instance, by default you don't see anything unless you start following others, and then you only see their posts. So you see what you want to see. And you have the option to expand that to posts your friends interacted with, or posts of friends of friends, and so on. You are in control from the get-go. No one can contact you unless you allow it. So what in the world are we talking about here!?
It is like your kid has a protective bubble around him and no one can touch him except when he expands his bubble. And he has a sword and a machine gun so he can stop anyone from harassing him. And yet you, the parent, feel the need to control what your kid does...this is ridiculous.
Can anyone have any better arguments for this strict moderation attitude that cannot be easily solved with technical solutions?
Rokosun
in reply to Tio
@Tio @Cleo of Topless Topics
About your last point there, I can actually see why some people might want someone else to moderate for them, because doing it all manually is a PITA. And like someone pointed out in the previous discussion, it's not just about users seeing racist people: if racist people see these users, they'll start to attack them, and they'll boost the post so that other racist people can find them too, so they all come at once and flood you. I've been told this happened to someone, so yeah, it can be way worse.

But even if you want someone else to moderate for you, IMO the saner way to do it would be to ask that person to make a blocklist for you instead of directly doing the moderation. Because this way, you have a say in what content gets blocked, you have a say in who makes the blocklist for you, and the blocklist itself is public and transparent. And like I mentioned in my post above, a blocklist maintained by multiple people, one that anyone can contribute to, will be many times more powerful than the protection you get from letting a single person do the moderation for you.
So even though I do understand why some people might need help with blocking bad content, blocklists are much more effective and better in all regards compared to letting the admin do it for you. And have you also thought about the fact that moderation is extra work for the ones doing it? If there are 100 instances on fedi, then those 100 instances need their own separate moderators. If we made universal blocklists for fedi that apply to all of these instances, it would also reduce a lot of manual labor.
Tio
in reply to Rokosun
Hold on: if you limit your access to only your friends, it means your friends make your post/profile discoverable, right? So you didn't choose your friends well. That aside, a technical solution is again reasonable here: set it up so only your friends can interact with your posts, so you won't see any comments from anyone who is not your friend. Right? This should be the default.
But again I do not see any argument that cannot be fixed with a technical solution. And thus, no more debates or "what ifs" :)
Rokosun
in reply to Tio
@Tio
Yes, having more control over who can see your posts can be very useful if you're worried about things like online bullying, I never thought of it before.
On Mastodon you can make it so that only your followers can see your posts. Friendica also has this option, but it allows more granular control, so if you only want a specific group of people to see a post, you can do that. I haven't tested this feature, so I don't know how well it works across different platforms, but Friendica does give you more privacy controls.
@Cleo of Topless Topics
Bill Statler
in reply to Rokosun
I haven't tried to moderate a Fediverse instance (yet). I spent about a decade as an admin/moderator of an online forum, and some of the issues are the same.
For forum admins, there are two or sometimes three goals. One, all admins want to avoid legal problems, so they will try to block illegal or borderline-legal content. Two, all admins want to block spam before it becomes 99.9% of the posts and drives away all the desirable users. And three, some admins want to maintain a particular environment for their users, e.g. by limiting discussion to specific topics, or by enforcing a "politeness" policy.
All of this carries over to Fediverse instances, and I don't see any problem with performing these sorts of moderation actions, or sharing lists of illegal/spam instances. And if I wanted to set up a private "walled garden" Hubzilla instance for members of the University of Southern North Dakota at Hoople Alumni Association, I would say "Hey Alumni, you can also use this instance to reach the rest of the Fediverse, but only on my terms because that's not the primary purpose of this instance."
The big difference arises when an instance's purpose is mostly to provide access to the wider Fediverse. And I think, in this situation, admins need to give serious thought to what service they want to offer to their users, because you can't offer a walled garden and call it the Fediverse.
I don't think it's ever a good idea to use instance-blocking to influence admins of other instances. An admin should not be thinking of user connectivity as something the admin "owns" and is free to use as a weapon. Anyway, it's not going to have any impact unless you're the admin of a humongous instance.
As for the users, I think there's a lot in common between non-federated forums and the Fediverse.
One, users are lazy. And I mean this in the nicest possible way. Users are there to use the service, to communicate with friends and share information and cat pictures. Only the nerdiest users want to spend any time fiddling with settings and personal blocklists. I mean, right now I'm using Zap (an offspring of Hubzilla), and I could set specific keywords for each one of my contacts that would either block or display their posts for me. If I get tired of Mike Macgirvin's wallaby photos, I can block them specifically, and still see wallaby photos from other people. Am I actually going to use this feature? Well, yeah, I would use it because I'm a nerd, but the average Fediverse user does not want to spend time doing this. They want to outsource some portion of their content moderation to somebody else. But none of them will agree on how much outsourcing they want.
The other big issue with users is... Well, if you've been on any social media for more than a day, you know the basic rule for getting along: try not to give offense, and try not to take offense. But quite a few users want to evade responsibility for one or the other. We all know the extreme cases: the guy who refuses to be "politically correct" and calls you a "snowflake" if you don't like it, and the other guy who points out every "microaggression" with righteous superiority. I think this is the character trait that underlies the desire for less moderation vs more moderation.
So this basically means that users want self-contradictory things, and no single solution is possible. Which is exactly why the Fediverse excels! Because we can have thousands of competing solutions, and users can pick the one that suits them best.
I do wish, though, that all Fediverse software implemented the "nomadic identity" concept used in Hubzilla and its descendants Zap, Streams, et al. This goes a long way towards solving the problem of "dammit I picked the wrong instance". You just migrate your identity to a better instance, and it's completely transparent to all of your contacts -- they don't even need to know that you've moved.
Rokosun likes this.
Tio
in reply to Bill Statler • •The major difference with forums is that forums were contained single places. My forum was only mine and that's all. It did not connect to any other forum. The Fediverse is entirely different since it connects each "forum" with each other to create a large network. That's the purpose of it. And that's why moderation in this context is an important topic to discuss.
So yes, moderate your own "forum" but do not break the connection with the other ones, that's what we propose.
I agree, that's why the best thing is to have privacy-oriented defaults. As far as I can tell, and as I mentioned above too, on Friendica no one can contact you unless you become friends with them. Content from the wild can't just reach you without your approval. And like on fb and twitter and the like, eventually you will come across things you don't like... that's why you can easily block these things and delete them.
I think for most users this is wildly confusing and something they won't put up with. I can, but most probably won't. And why create such a wasteful duplication of accounts when there is no need for it? Simply stop blocking instances at the admin level. Let's find better solutions like the ones I exemplified in a comment above.
Bill Statler and Rokosun like this.
Rokosun
in reply to Tio • •@Tio @Bill Statler
I feel like this is an important point, so I just edited my post to add it.
Yes, this is why I asked people if they'd be willing to switch instances for me. While it's technically possible to do so, I don't think most users will do that for a single person. I do agree that things like "Nomadic identities" and "Channel cloning" are nice to have; they can be useful for many things. But maybe we should also focus a bit more on reducing the chance of a user saying "dammit I picked the wrong instance" 😄
Bill Statler likes this.
Rokosun
in reply to Bill Statler • •@Bill Statler @Tio
Yes, this is why I was very uncomfortable with that idea, you explained it very well.
Bill Statler likes this.
Liwott
in reply to Rokosun • • •@Rokosun
Types of moderation issues
A third type is unfair moderation, when different accounts are treated differently by the moderators. For example, lemmy.ml's admins have recently been accused a couple of times of moderating people with different political opinions differently.
What content should get blocked?
I think the answer here should be simpler : an instance has rules, and should ban (local or remote) content that does not obey them. In the absence of refined moderation tools, an instance could block any instance that does not have the same rules.
I have been thinking about a concept of tiered rulesets. For example an instance could allow the (non CW'd) diffusion of naked nipples, but could require the user to tick a specific box when writing a post that contains them so that it does not get sent to federated instances who forbid them.
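The tiered-ruleset idea above can be sketched as a delivery check: a post carries self-declared tags (the box the author ticked), and delivery skips instances whose rules forbid those tags. All names and the data layout here are hypothetical, not part of any real fediverse protocol:

```python
# Sketch of "tiered rulesets": deliver a post only to instances whose
# rules allow every tag the author self-declared. Hypothetical names.

def delivery_targets(post_tags: set[str],
                     instance_rules: dict[str, set[str]]) -> list[str]:
    """Return the instances that allow every declared tag."""
    return [host for host, forbidden in instance_rules.items()
            if not (post_tags & forbidden)]

rules = {
    "strict.example":  {"nudity"},   # forbids non-CW'd nudity
    "liberal.example": set(),        # allows it
}
print(delivery_targets({"nudity"}, rules))  # ['liberal.example']
print(delivery_targets(set(), rules))       # ['strict.example', 'liberal.example']
```

The key design choice is that the sender withholds the post up front, instead of the receiving instance having to hard-block the whole sending instance.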
Note that an instance's rules do not only depend on users' preferences and social norms, but also on law and morality. For example, an admin may not want to contribute to any sharing of child porn. For cases like that, a hard block remains necessary.
Social norms and social structure
When I see the symmetry of the social structure's description (the conclusions seem similar for sexists and feminists), I fail to understand the point of the "feminism in the past" detour.
Rokosun
in reply to Liwott • •@Liwott
When users don't agree with the moderation decisions made by their moderators, that's when they call it "unfair moderation". For example, if I was a moderator and I started banning all religious content, then some users would be very happy with it and thank me for removing all the "religious propaganda" and other harmful ideas, but other users would be pissed at me for doing the same thing and accuse me of unfair moderation. This is the same thing that happened in my example of that big instance blocking the feminist one: some users were happy but others were pissed. Actually, the entire point of this post was to make people understand how subjective these things are; different people want to block different types of stuff. Now, the only reason this problem happened is because those Lemmy instance admins were blocking content on behalf of all users, including ones who didn't want to block out people with different political opinions. If users could choose for themselves what content they wanna block then this problem wouldn't exist.
What difference would writing down these rules make? If that big instance had changed their rules to mention that feminist instances would get blocked before actually blocking them, the impact would be the same. All that writing down the rules adds is maybe some transparency. In the previous post where we discussed some of the technical ways to improve moderation, I talked about transparency and suggested notifying users when an instance is getting blocked and also showing them the reason for the block.
I feel like this would complicate things, and it's bad UX when a user has to go through a checklist to see which items apply to the post they're writing. Note that nudity isn't the only thing people get offended by, so the list would be very long. Just to be clear, to me it's not a big issue if an instance admin wants to hide some types of content from their local/federated timeline; it's their instance, so that sounds fair to me. The issue is when they outright block users. If my admin blocks ToplessTopics, then I as a user won't be able to follow that account, and this looks more like censorship to me, because I explicitly chose to follow a person who isn't even on your instance and then you're stopping me from doing so.
I just wanted to point out that law and morality are dictated by the social norms of the time. For example, it was once illegal for women to vote, and gay marriage was and still is illegal in many countries. Also, this is a post I saw the other day which talks about how slavery and other oppressive systems were legal at the time - mastodon.social/@197winstonsmi…
If I'm an instance admin and a user on my instance accessed illegal content from other instances that I have nothing to do with, am I contributing to sharing that content? I'm not a lawyer or anything, so I don't know how it'd look from a legal perspective. When you visit content from other instances, your server is simply acting as a proxy for viewing that content, right? Proxies have existed long before fedi, so it'd be interesting to see how they're handled legally. I don't know if your instance will cache the remote content that's being fetched; that might complicate things. But then again, web crawlers have the same issue, right? So if Google's web crawler fetched and cached a website that's sharing illegal content, is Google liable for saving and sharing illegal content?
Also, I kinda feel like maybe this whole issue could be solved from a technical perspective: if caching is the problem then disable caching, and if proxying is the issue then it'd be nice if users could just directly fetch the remote content instead of going through their instance. There have been some issues in the fediverse that cause remote instances to show incorrect numbers of likes/reshares on a post, and sometimes some posts won't be federated properly, so the same profile might appear different when viewed from different instances. So the proxy/caching system has other usability issues too; it would be nice if they could fix these.
I did that to make people understand how subjective these things are, which was the whole point of that post; I didn't wanna go into the technical stuff that much and just wanted to explain the social impacts of moderation. If I was born in a sexist society then I would've lived my entire life being a sexist piece of shit without even realizing why or how that was harmful. Unless maybe I got exposed to feminist ideas and they changed my mind, and then I'd be one of the few people at the time who supported feminism. If you look at the history of human civilization then you'll see that sexism is only one of many things we've been wrong about; our sense of "right and wrong" has changed a lot over the years. I think @Tio has even written a book about morality and ethics, you can read that at tromsite.com/books
I haven't read that particular book, but I know it talks about the same things. And this applies even today: there might be some ideas in our society that we'll only later realize were harmful. This idea might make people uneasy, but it's just the truth. There might be some things we do today that future civilizations will look at with disgust and anger/fear, just like how we look at ideas like slavery/sexism today. So today I have my own sense of morality, my own method of distinguishing between right and wrong, but I do acknowledge the fact that I'm probably wrong about many of the things I believe today. If I was made a moderator for, say, 100 people, that would make me very uncomfortable, because I don't want to choose for these 100 people what they should or shouldn't see; that would be like imposing my own sense of morality on others. As an instance admin I'll have the power to kick people out of my own instance, and maybe also hide content from the local/federated timeline of my instance, but choosing which accounts my users should be able to follow from other servers, that's too much power I think.
And the conclusions were similar for sexists and feminists because I wanted to showcase how all 4 of these groups think they are right from their own sense of morality - Type 1 feminist users, Type 2 feminist users, Type 1 sexist users, Type 2 sexist users. And there's simply no way for us to know which one of them is the "correct" one we should listen to; what usually ends up happening is that moderators go along with what they think is right and impose that choice on all their users. In a sexist society it's fair to say that pretty much all feminist instances would be blocked out of fedi, because when we look at the overall network, social norms have a big impact on it.
Tio likes this.
Tio
in reply to Rokosun • •Nitter, Invidous and the like, cannot be blamed for fetching remote content from twittter or Youtube. So if someone posts child porn, or god forbid GUNS on twitter or youtube, and others access it via our nitter/invidious instances, it is not even legal in our stupid world of stupid laws, to accuse me. It is simply proxying and the content from youtube/twitter is not even on my server. Perhaps federated networks should do that if possible. It is already doing something similar in the sense that I, as admin, can choose to remove remote cached content daily or immediately. You can still connect to others, but the content needs to be fetched every time you "poke" at them or something like that...
This is the saner way of doing it. Not the butcher way please! Cut and slice! But the librarian way. Sit down and come up with a solution that is future-proof and does not impede people's ability to connect with others.
Rokosun likes this.
Liwott
in reply to Rokosun • • •(mind if I don't reply all at once? it may become quite long otherwise, and some subtopics make sense to be split)
What I meant by unfair moderation is when the same content is treated differently according to who wrote it. For example, see this comment on a Lemmy drama. The first user seemed to benefit from lax treatment because they shared the admins' political opinion, hence the accusations of unfair moderation. (For the record, that first person ended up getting a 5-day ban.) Please, anyone reading this, let us not turn this into a discussion of whether or not there was a double standard; I only meant to say that this kind of double standard may exist, and that not every dispute on moderation concerns the two types mentioned in the OP.
Rokosun likes this.
Rokosun
in reply to Liwott • •@Liwott
Okay, I understand what you mean now. So your point was that moderators could treat people differently based on their political beliefs, religion, race, etc. This might happen sometimes, as moderators are humans too, and well, humans can be biased. For example, an admin might be more reluctant to block an instance if they have close friends on it compared to other instances where they don't know anyone (so they block it without much thought). I think this could be seen as one more reason why users choosing for themselves is better than letting admins handle everything.
Liwott likes this.
Liwott
in reply to Rokosun • • •@Rokosun
They are often related to them, but not always dictated by them.
In the case of law, it is independent of the people's social standards if the people are not participating in the decision, like in a dictatorship.
Even under a democratic regime, it may not change as fast as the social standards.
As for ethics and morality, deontology is pretty much dictated by the social norms, but I do not think it is that clear for other forms of ethics. One may be utilitarian and act in a way that minimises suffering, independently of what is socially acceptable.
By adding other layers of subjectivity, this goes towards confirming your point that those things are subjective.
Note that this is not a point that is specific to federation; it also concerns the instance's own terms of service.
Which is why...
I was referring to the instance's terms of service rather than a new set of rules that would be written for the occasion.
So the admin would moderate remote content in the same way as the local one.
But the admins of one instance are not enough to moderate the whole fediverse.
This is why they rely on other admins to do their job.
And if that job is not compatible with their own rules, and they don't have the manpower needed to do it in their stead, then they may choose to defederate.
There is also the fact that your instance may have made you discover that content, through a federated timeline or a reshare. See this related subthread.
Oh, me neither !
But independently of the legal status, we can discuss the morality of it.
I took the extreme example of child porn because I think most people would not want to facilitate its diffusion.
So if another instance is lax in banning that content, federating with them may contribute to distributing it.
So I may want to block that instance to provide an obstacle to that diffusion.
In that case, what I want is really hard blocking.
To conclude this part, I agree that there is subjectivity in choosing the rules, but people who set up an instance have the right to subjectively choose those rules, as they don't have any legal or moral obligation to provide an unconditional service to anyone interested.
Rokosun likes this.
Rokosun
in reply to Liwott • •@Liwott
Yeah, in a dictatorship the dictators will force their laws upon everyone even if people don't agree with them. And yeah, in a democratic regime usually the social standards change first before they start affecting the laws; lots of people had to fight for their rights in the past, and there still are many people who do that. In a way, this is what ToplessTopics is trying to do: only if non-sexual nudity becomes normalized can we expect the laws to change. Currently you could get arrested for not wearing clothes in public.
I know that some people might have a different sense of morality than others, but in general it's fair to say that most of us are influenced by the society we grew up in. So if we look at society in general instead of at individuals, then social norms have a big impact on it.
I think it's unreasonable for us to expect an instance admin to moderate the whole fediverse. They might be able to moderate their own instance because they have full control over it, but other instances are out of their scope, and content shared on other instances is not the responsibility of this admin. This will become clearer later.
Admins could hide content from the federated timeline so that it's not discoverable; this way only the people who explicitly choose to follow someone from that instance will see their content. And if you find the content through a boost/reshare, that'll only happen if you follow someone who boosts such content. And if you did follow someone like that, then you would unfollow that person when you see their boosts. IMO it's worse if you're not able to see their boosts: if someone you follow is boosting illegal content from other instances then you should know that, right? If you don't see these boosts then you'll keep following that person thinking that everything is fine. Also, what if that person just shares a link to illegal content instead of boosting fediverse posts? This is a problem that exists for the entire internet, and there's no easy way to fix it.
I agree CSAM (child sexual abuse material) is an extreme example; it's fair to say that almost everyone would agree to blocking such content, so how do we deal with it? Remember when @Tio said before that the fediverse is kinda like a browser? When bad content is hidden from the local/federated timelines then it really is like a browser, because users will have to go and manually find this content or else they won't reach it, or maybe they find it because someone they follow linked to it, which is similar to finding a website on the internet through links from other websites you follow.

So this is where the fediverse is a bit different from centralized platforms. If you find CSAM on Facebook, for example, you can just report that content and it gets removed from the platform itself. But on fedi if you do the same then 2 things can happen: if the content is on your own instance then your admin can remove it just like Facebook did, but if it's from another instance then it's up to the admin of that instance. If the other admin chooses to remove that content then it's fine, but if they don't then it gets a bit tricky. The only thing your admin can do at this point is to defederate from that instance, but note that the content is still there; your admin doesn't have the power to remove it, so anyone can still link to that content. This is not the case with Facebook, because once they remove it then it's gone.

So this is very similar to your browser blocking you from viewing bad content that's still there on the internet, but I'm pretty sure that people would get pissed if Mozilla Firefox or Google Chrome just forced this upon their users. If you use an ad-blocker like uBlock Origin then it'll actually prevent you from viewing malware sites and such, but this is okay because you have full control over which blocklists to enable; these are not forced upon you. Now I use tblock.codeberg.page for blocking ads, malware, etc. system-wide.
For example here is a blocklist you can use - github.com/StevenBlack/hosts
You can see how they have different categories like adware, malware, fake news, gambling, porn, etc and users are free to choose whatever blocklist they want. Having choices like this would account for the subjective views of users, here they get to choose what content they wanna block.
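The category-based approach can be sketched in a few lines: each user opts into the lists they want, instead of getting one admin-wide block. The category names and hostnames below are made up for illustration, in the spirit of StevenBlack/hosts or uBlock Origin lists:

```python
# Sketch: user-selectable category blocklists. Categories and entries
# here are hypothetical examples, not real list contents.

BLOCKLISTS = {
    "gambling": {"casino.example", "bets.example"},
    "malware":  {"evil.example"},
}

def user_blocked_hosts(enabled: list[str]) -> set[str]:
    """Union of only the blocklists this particular user opted into."""
    return set().union(*(BLOCKLISTS[name] for name in enabled))

# Each user makes their own choice.
alice = user_blocked_hosts(["malware"])
bob   = user_blocked_hosts(["malware", "gambling"])
print("casino.example" in alice)  # False -- Alice didn't opt into "gambling"
print("casino.example" in bob)    # True
```

This is the subjectivity point in code form: the same host is blocked for one user and visible to another, depending only on what each of them chose.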
So now when fediverse admins block content it's just like a local block, like how I can locally apply blocklists to block malware and such on the internet, except that it's forced upon all users without even giving them a choice. And the bad content is still there on the other server and anyone can view it using a web browser; it's not even removed like on Facebook. The reason I suggested the idea of blocklists before is to make this whole thing more efficient, transparent and easy to manage. If users could modify which blocklists they wanna enable, like in uBlock Origin, then it'd also solve the issues related to bad moderation; they could choose for themselves what type of content to block, how strict the blocking should be, etc.
People who set up an instance have the right to subjectively choose the rules for their own instance; the issue is when they try to control what happens with other servers, which are basically separate websites on the internet, out of their control. I feel like most of the issues could be solved by simply hiding the bad content from the local/federated timeline of that instance. And I've heard that the official Mastodon app doesn't even have the local/federated timelines on it; it uses the newly added explore page for finding users and posts. And now to the other issue you mentioned earlier: it's not just you finding bad actors, but bad actors finding you and attacking you. For solving this issue blocklists could be very useful; a well maintained blocklist is much better than relying on a single admin to do all the moderation for you, and you could apply multiple blocklists too. And another solution is to make your posts private by default so that bad actors can't find you; choosing who can see your posts is much more powerful than relying on moderators/blocklists to protect you.
Tio likes this.
Liwott
in reply to Rokosun • • •@Rokosun
A link to illegal content may be forbidden by the instance's rules. Or required to be appropriately CW'd. Anyway when we say "bad content" in the context of moderation, it is not wrt a kind of universal moral standard, it is wrt the instance's rules. If the rules happen to not be clear, this is not a moderation tool issue.
... show moreWhen I register on a website, be it a fediverse instance, with some rules, I expect the content that I can find on that website to obey the rules in question.
@Rokosun
A link to illegal content may be forbidden by the instance's rules. Or required to be appropriately CW'd. Anyway when we say "bad content" in the context of moderation, it is not wrt a kind of universal moral standard, it is wrt the instance's rules. If the rules happen to not be clear, this is not a moderation tool issue.
When I register on a website, be it a fediverse instance, with some rules, I expect the content that I can find on that website to obey the rules in question.
Or if someone from this instance replies to such content and you see it on the local timeline. (At least soft-)Blocking is useful in that case.
Even about people you follow. If you register on an instance with some rules and start following people there, you expect their content to correspond to the instance's rules.
Good point. Maybe rather than (or in addition to) (hard or soft) blocklists, it would be appropriate to have CW-lists. The same way Friendica allows you to CW text containing a given string, the admin could enable automatic CW for posts coming from certain instances.
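A CW-list of this kind could work roughly like the following sketch: instead of dropping posts from a listed instance, the server wraps them in a content warning. The instance names and post structure are hypothetical, not any real platform's data model:

```python
# Sketch of a "CW-list": auto-apply a content warning to posts coming
# from listed instances, instead of blocking them. Hypothetical names.

CW_LIST = {"nsfw.example": "nudity"}  # instance -> warning label

def apply_cw(post: dict) -> dict:
    """Wrap a remote post in a CW if its home instance is on the CW-list."""
    host = post["author"].split("@")[-1]
    label = CW_LIST.get(host)
    if label and not post.get("cw"):
        return {**post, "cw": label}  # add the warning, keep the post
    return post

post = {"author": "artist@nsfw.example", "text": "new painting", "cw": None}
print(apply_cw(post)["cw"])  # nudity
```

The appeal of this middle ground is that the post stays reachable; the reader just has to click through the warning, rather than the admin deciding on their behalf that it cannot be seen at all.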
For that matter, one can also post links on Facebook, and this goes back to the question of whether a link to bad content is itself considered bad content by the rules. I think on Facebook it is, right?
Linked content is anyway less accessible than content directly on the website, as users don't necessarily click on every link they see. So one does limit the diffusion by modding out the remote post.
A browser is a piece of software that runs locally. You do not use Mozilla's or Google's computing power to load the bad content that you see via the browser. The admin may not want their servers to work towards facilitating the diffusion of bad content.
Federating with some content makes it accessible from your website, so it is really about moderating the content shown to your website's users.
Rokosun
in reply to Liwott • •@Liwott
I don't think you understood the point I was trying to make. I wasn't talking about instance rules here, this was in response to you saying that users could get exposed to bad content via boosts from people they follow, and your point was that if the admin had blocked the instance where these posts are originating from then these boosts won't show up. So I was trying to point out that you can't fix this issue by defederating, maybe it'd hide the boosts but people could still post links instead of boosts, so there just isn't an easy solution for this. Whether linking to bad content is forbidden or not will depend on the instance rules of that particular user, but in your example the same user was boosting bad content instead of linking to it, so I assumed that if their instance allowed boosting then it'd allow linking too.
Yes, you can expect the content you find on that website to obey the same rules. But the beauty of the fediverse is that you're also able to interact with other separate websites where the rules are different from your own, and as long as you properly understand how this federation thing works, it won't be confusing. I know some people new to the fediverse might find this concept a bit hard to grasp, but given enough time they'll get it.
I think it's reasonable for you to expect the content you see on the local/global timeline of your instance to respect the same rules, and also everything the users of that instance post there. But when you get into other instances, then it's really like a separate website that you're browsing; the interface and everything might look the same, but the content is hosted on an entirely different website with a different set of rules.
Oh, I forgot that Friendica also shows you a post if someone you follow replied to it; this is not how other platforms work, and even in Friendica you can change this behavior in the settings. And yes, people may find soft blocking useful: if you think about it, soft blocking is kinda like subscribing to a blocklist made by your instance admin. Soft blocking is generally better than the default way of blocking we use now, but blocklists would be a much better solution, not only because they give users more choice and freedom, but also because they make the job of moderators much easier, since they can collaborate on making a really good blocklist.
If you only follow people from your own instance then yes you can expect their content to correspond to the instance's rules. But when you follow someone from a different instance then they'd be following the rules of their own instance not yours. Again, remember that these instances are separate websites independent from one another, just because you followed someone through your instance doesn't mean its the same website, you could be interacting with people from other instances which are wildly different from your own, sometimes they can even be entirely different platforms.
In general though, if you're finding users by browsing through the local/global timeline which is curated by the admin to hide content which are against their rules, then you probably won't be exposed to any content which differ from their rules. For example, if nudity is against the rules of an instance then you probably won't find users like ToplessTopics through their timeline. Your only way of finding ToplessTopics would be if someone else told you about it and you decide to follow them. Maybe a person you follow boosts posts from ToplessTopics and talks about them very often, then you could get exposed to it that way. This is similar to how we form relationships in real life, if you have a friend who's friends with someone else you don't know, then both of you could meet and find each other.
This feature is called content filters in Friendica. They also have an advanced content filter which is way more powerful; it has an entire syntax for writing rules, so for example if you want to filter posts containing more than 3 images, you could write a filter rule that does that. So yeah, that's very useful. You might even be able to use the advanced content filter to write a rule that filters out all posts from a given instance. I don't know if admins should be allowed to set the default filter rules for their users, but maybe it'd be helpful in some cases. As long as users are able to change these defaults for themselves, it's not a big issue if there are some predefined filter rules. In my previous post I explained that it's good to have different levels of blocking, so if an admin could just write up a simple filter rule for their users instead of outright blocking an instance, that would be much better.
On Facebook, I think they'll remove your post if you link to bad content. On the fediverse this is complicated because there are multiple instances. If the content is from your own instance, then your admin can remove it and it's gone, just like on Facebook. But if the content is from another instance, your admin doesn't have the power to remove it; they can block it for the users of their specific instance, but the content itself will still be there.
That makes sense; it might reduce the impact a little bit, but it's not very effective. People also have things like link previews enabled by default, so there's not much of a difference between a boost and a link now, and Friendica also has the quote-share feature, right?
The point I was trying to make here was that when Facebook blocks or bans content, they're banning it from their own servers, which they have full control over; that's why they're able to remove it. But on the fediverse, when admins block content from other instances, that content is not stored on their servers, so they're not able to remove it.
Considering the way the fediverse currently works, maybe a better analogy is using a browser with a proxy. When you're using a proxy or VPN to view bad content, you're using the computing power of their servers to load that content. But if these proxies or VPNs start banning content they don't like, then users will get upset; in fact, many people use proxies, VPNs, and Tor precisely to bypass internet censorship. Many people living in countries like China and Russia face a lot of censorship and have to use tools like Tor to get past it. And when you ask the internet providers of these countries why they're censoring the internet, they give a similar answer: "Our company doesn't want our infrastructure to facilitate the diffusion of bad content". This is why I focused mainly on this idea of "bad content" and how subjective the notion is. There are countries today that ban LGBTQ+ websites because, according to them, those are "bad content". Even if we assume that 90% of the people in such a country agree with blocking these websites, that would never justify banning them for everyone, including the 10% who did not want those websites blocked.
You can moderate the content that's on your website, but if you're trying to control what users can access from other websites, then that's closer to censorship. Let me explain the difference. Take the example of China: if I wanted to go to China and they didn't want to allow me inside their country, that's not seen as censorship; it's their country, so they can choose who they want to let in. In a similar way, an instance admin can control who can join their instance and what people can post there. So China not letting people in is not the issue; the issue is that once people are inside, China controls what parts of the internet they can access from within the country, and this is what people call censorship. I find this similar to an admin trying to control what their users can access from other instances. If they're moderating their own instance, that makes sense to me, but if they're trying to block their users from accessing content from other instances, that can lead to censorship. Imagine the same thing happening with other federated services like email: you're sending an email to a friend who's on a different server, and Gmail blocks you, saying you can't send emails to that server. By doing this, Gmail would be breaking interoperability between email servers, and that is a threat to decentralized services.
Rokosun
To be clear, I never support things like name-calling or racial slurs. If you read my post above, you'll understand that my main point was about how subjective our notion of offensive content is and how it changes over time. I'm also not sure what you mean by "Democratic Government Ideology". It's kinda funny when you see almost every government on earth call itself a "democracy"; even the most oppressive governments call themselves that 🤦♂️
So the word "democracy" has become a simple label and lost its true meaning, like many other ideas that ended up the same way. Communism, for example, was a good idea in its origins, but later people used the same idea to oppress and kill others; countries like China and Russia still call themselves communist despite having nothing to do with the original idea of communism.
So I'll never say things like "Any talk in opposition to <whatever ideology> should be blocked", even if I agree with the ideology itself, because ideologies like these can be turned into a simple label and used to oppress others. There's a simple rule to follow here: look at what people do, not what they say. It doesn't matter if they call themselves "communists" or "supporters of democracy" or whatever; if they're doing harmful things, they should be stopped regardless of whatever ideology they believe in. @Liwott