Was just reading the new #Bluesky doc on labelling: a system for adding your own content moderation. It marks content to be hidden, blurred, taken down, or annotated in applications. Labels apply to accounts or to individual posts. Crucially, labellers can be subscribed to, so they can serve whole communities, not just individuals.
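For concreteness, a label in the AT Protocol is a small record issued by a labeller and consumed by applications. A minimal sketch in Python, with field names following the published `com.atproto.label.defs` schema; the DID, URI, label value, and the policy mapping are invented placeholders for illustration:

```python
# Sketch of an AT Protocol label record (com.atproto.label.defs#label).
# Field names follow the published schema; the DID, URI, and label value
# below are made-up placeholders, not real accounts or posts.
label = {
    "src": "did:plc:examplelabeller123",  # DID of the labeller issuing the label
    "uri": "at://did:plc:someuser/app.bsky.feed.post/abc123",  # subject: a post (or an account DID)
    "val": "graphic-media",               # the label value that applications act on
    "neg": False,                         # True would negate/retract an earlier label
    "cts": "2024-01-01T00:00:00Z",        # creation timestamp
}

# An application subscribed to this labeller maps label values to its own
# display policy and decides whether to hide, blur, or annotate the content.
# This policy table is a hypothetical example, not Bluesky's actual config.
policy = {"graphic-media": "blur", "spam": "hide"}
action = policy.get(label["val"], "show")
```

The key design point is visible here: the labeller only asserts a value ("graphic-media"); what that value *does* is decided per application and per subscriber, which is exactly what makes shared, community-level labellers possible.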
That seems likely to create many thorny issues: imagine, e.g., a shared labeller "NoMinorityX" that hides all content by a particular group. Legal or illegal where, for whom, and when?
It has long been discussed that Bluesky's primary moderation process is intended to off-load moral and legal responsibility for #Moderation to third parties, leaving #Bluesky corporate in the clear.
See, as one example:
"Bluesky’s Game-Changing Community Moderation Proposal"
"July 12, 2023 · Best Practice, Internet Culture"
"User Lists, Replygating, and Thread Moderation
First there are User Lists, which allow individuals to create lists of accounts they trust or prefer, enabling them to filter their feeds.
Replygating then further allows users to control who can engage with or reply to their own posts. And thirdly, Bluesky’s proposal includes
Thread Moderation, which enables community members to collaboratively moderate discussions within a thread"
Here: https://blog.trustedaccounts.org/bluesky-community-moderation-proposal/