cross-posted from: https://discuss.online/post/5772572
The current state of moderation across various online communities, especially on platforms like Reddit, has been a topic of much debate and dissatisfaction. Users have voiced concerns over issues such as moderator rudeness, abuse, bias, and a failure to adhere to their own guidelines. Moreover, many communities suffer from a lack of active moderation, as moderators often disengage due to the overwhelming demands of what essentially amounts to an unpaid, full-time job. This has led to a reliance on automated moderation tools and restrictions on user actions, which can stifle community engagement and growth.
In light of these challenges, it’s time to explore alternative models of community moderation that can distribute responsibilities more equitably among users, reduce moderator burnout, and improve overall community health. One promising approach is the implementation of a trust level system, similar to that used by Discourse. Such a system rewards users for positive contributions and active participation by gradually increasing their privileges and responsibilities within the community. This not only incentivizes constructive behavior but also allows for a more organic and scalable form of moderation.
Key features of a trust level system include:
- Sandboxing New Users: Initially limiting the actions new users can take to prevent accidental harm to themselves or the community.
- Gradual Privilege Escalation: Allowing users to earn more rights over time, such as the ability to post pictures, edit wikis, or moderate discussions, based on their contributions and behavior.
- Federated Reputation: Considering the integration of federated reputation systems, where users can carry over their trust levels from one community to another, encouraging cross-community engagement and trust.
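To make the idea concrete, here is a minimal sketch of how such a trust ladder could be computed. This is not Discourse's actual implementation; the level names, thresholds, and flag cutoff are all invented for illustration.

```python
# Hypothetical sketch of a Discourse-style trust ladder.
# All thresholds and level names below are invented, not Discourse's real values.
from dataclasses import dataclass

@dataclass
class UserActivity:
    days_visited: int = 0
    posts_read: int = 0
    flags_received: int = 0

# (trust_level, min_days_visited, min_posts_read), checked highest first.
THRESHOLDS = [
    (3, 50, 500),  # "regular": could help moderate discussions
    (2, 15, 100),  # "member": could post pictures, edit wikis
    (1, 1, 5),     # "basic": sandbox restrictions lifted
]

def trust_level(user: UserActivity) -> int:
    """Return the highest trust level the user qualifies for."""
    if user.flags_received > 5:  # heavily flagged users stay sandboxed
        return 0
    for level, min_days, min_reads in THRESHOLDS:
        if user.days_visited >= min_days and user.posts_read >= min_reads:
            return level
    return 0  # brand-new users start sandboxed at level 0
```

The key property is that levels are earned through reading and participation over time rather than through votes, and can be revoked (here, via flags) if behavior degrades.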
Implementing a trust level system could significantly alleviate the current strains on moderators and create a more welcoming and self-sustaining community environment. It encourages users to be more active and responsible members of their communities, knowing that their efforts will be recognized and rewarded. Moreover, it reduces the reliance on a small group of moderators, distributing moderation tasks across a wider base of engaged and trusted users.
For communities within the Fediverse, adopting a trust level system could mark a significant step forward in how we think about and manage online interactions. It offers a path toward more democratic and self-regulating communities, where moderation is not a burden shouldered by the few but a shared responsibility of the many.
As we continue to navigate the complexities of online community management, it’s clear that innovative approaches like trust level systems could hold the key to creating more inclusive, respectful, and engaging spaces for everyone.
Related
Isn’t this just Karma with extra steps?
Karma promotes shitposting, memes and such; I’ve yet to see that kind of content on Discourse.
Trust levels themselves are just Karma plus login/read tracking, a.k.a. extra steps.
As for the lack of shitposting, I think this is more down to the fact that most of Discourse’s customers are corporations with paid mods who manage their message boards and shut down posts/conversations that are off-brand or off-topic immediately.
If you are actually advocating that the Fediverse use Discourse’s service, you have to be out of your mind. The Fediverse is practically allergic to corporations, and importing that level of control wouldn’t sit well with the community at large, let alone most of the admins (never mind the cost).
A Karma system may or may not be a good addition to the Fediverse, but Discourse certainly wouldn’t be.
Trust levels themselves are just Karma plus login/read tracking, a.k.a. extra steps.
Trust Levels are acquired by reading posts and spending time on the platform, instead of receiving votes for posting. Therefore, it wouldn’t lead to low-quality content unless you choose to implement it that way.
The Karma system is used more as a bragging right than to give any sort of moderation privilege to users.
But in essence they’re similar: you get useless points with one and moderation privileges with the other.
If you are actually advocating that the Fediverse use Discourse’s service you have to be out of your mind.
You are making things up just so you can call me crazy. I’m not advocating anything of the sort.
So your solution is to make Lemmy like Reddit? Yeah, fuck that.
The first thing people do with free speech, once they realize people have it, is limit it. Mods will always come out of the woodwork to ban bots; it’s nbd.
Going to add that it seems like you could easily run an LLM bot to give a simple yes/no on whether a post looks like it came from a bot.
I dunno… The idea might work for some instances perhaps. But this is kind of what StackOverflow has done for a long time and that site isn’t exactly known for being super welcoming to new users.
It would also make it really inconvenient to create new accounts (for example when moving instances).
Stack Overflow is a great example of over-moderation that harms the availability of useful information. They’re a great case study of what not to do.
People keep mentioning StackOverflow even though I specifically mentioned Discourse. The two do similar things, but one does it right and the other doesn’t. I don’t really understand how it would be inconvenient to create accounts. If you are active and behave, you get moderation privileges; otherwise you get the same experience as you do now.
Well if you want to make a new account, for whatever reason, your whole point score resets. Or if you want to move instances, it also resets. That seems inconvenient.
I also don’t really think this necessarily needs to have anything to do with the Fediverse, in the sense that ActivityPub doesn’t need to support it. An AP server could provide this functionality on top of the existing protocol.
What I’m saying is, this feature could be added as an option to existing Fediverse software (like Lemmy or Mastodon or whatever) without having all other Fediverse software adjust their use of the protocol. So perhaps it could be experimented with and we could see how it goes. Unless I’m misunderstanding the proposal somehow.
Yeah, this seems to favor people who stick to one account, but I also enjoy seeing some of the regular posters here. Even though I like creating new accounts, I wouldn’t mind if they were given moderation privileges to share the workload. I’m unsure about the implementation details, so I can’t comment on the protocol. What I do know is that Reddit moderation sucks, while Discourse moderation rocks.
I love the idea of a centralized authority system for a decentralized medium. It’s like the heel on Achilles.
No I don’t want this to turn into reddit. God no.
This is just centralizing the decentralized Fediverse, which undermines the one advantage it had to begin with.
This isn’t really hard to solve: proof of work.
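For what the commenter likely means, here is a minimal hashcash-style proof-of-work sketch: the client burns CPU finding a nonce before an action (e.g. signing up or posting), and the server verifies it with a single hash. The difficulty value and challenge format here are illustrative assumptions, not any existing Fediverse mechanism.

```python
# Minimal hashcash-style proof of work (all parameters illustrative).
import hashlib
from itertools import count

DIFFICULTY = 4  # required leading hex zeros; higher = more client CPU per action

def solve(challenge: str) -> int:
    """Client side: brute-force a nonce whose hash starts with enough zeros."""
    for nonce in count():
        digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
        if digest.startswith("0" * DIFFICULTY):
            return nonce

def verify(challenge: str, nonce: int) -> bool:
    """Server side: a single cheap hash checks work that was costly to produce."""
    digest = hashlib.sha256(f"{challenge}:{nonce}".encode()).hexdigest()
    return digest.startswith("0" * DIFFICULTY)
```

The asymmetry is the point: producing the nonce takes thousands of hash attempts on average, while checking it takes one, which throttles mass account creation without inconveniencing a single human user much.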
I make a good effort and help the community and my reward is a second job… Unpaid?
The benefit of this is that only individuals who are interested will progress up the trust level ladder. If you are indifferent, you will have the same experience as you do now. I believe this benefits everyone involved.