Bluesky, censorship and country-based moderation
On 19 March 2025, Istanbul Mayor Ekrem İmamoğlu was arrested by the Turkish police. İmamoğlu is the presidential primary candidate of the Republican People’s Party, the main opposition to the ruling AKP. His arrest sparked widespread protests across Turkey, and the Turkish government cracked down on the protests in various ways, including by imposing censorship on social media. A few days after the arrest, the government ordered X to restrict access to various X accounts associated with the protests. X announced that it had filed an individual application with Turkey’s Constitutional Court challenging the local court orders to block accounts. Bianet, an independent news agency in Istanbul, reported that X had not yet enforced the bans on these accounts.
In response to the censorship requests, users on X started moving to Bluesky to avoid the restrictions. The censorship requests from the Turkish government quickly extended to Bluesky, and Bianet reported on April 5th that the Turkish judicial system had ordered 44 Bluesky accounts to be blocked, citing concerns over national security and public order. Bianet noted that at that time, Bluesky PBC (the company behind Bluesky) had not yet blocked any of these accounts, and that they remained accessible from Turkey.
I reported last week that Bluesky PBC was in the process of setting up a Turkish moderation labeler, but that it was not active yet. Such a labeler allows Bluesky (the app) to apply moderation decisions that are only experienced by people currently geolocated in a specific country; more on that below. A few days ago, the Turkish moderation labeler became active and started hiding accounts, making them invisible to people in Turkey. Between April 14 and 17, Bluesky PBC made 18 accounts and 2 posts invisible in Turkey.
The decision by Bluesky PBC to comply with the Turkish government’s orders to censor certain accounts in Turkey led to criticism, both from the Turkish online community (1, 2) and from the fediverse community. The language used in the criticism is notable, with many people describing Bluesky PBC’s action as an “account takedown” or a “ban”. That is not actually what happened here. Bluesky PBC hid the accounts, making them invisible in Turkey but visible outside of the country. Still, the impact is largely the same: the vast majority of people within Turkey are not able to see the accounts, and the reach and impact of these accounts is severely limited.
There is a technical story here, of how Bluesky and the AT Protocol (ATProto) do composable moderation for specific countries. But this is not just interesting technology; it has implications for government censorship more broadly. Not only is the Turkish censorship of accounts easier to sidestep, it also allows for new ways to highlight and create visibility for the very content that the Turkish government wants hidden. To explain how that all works, first a closer look at how moderation works on Bluesky.
How Bluesky does moderation
For those who want it, a quick refresher on how ATProto works, with this simplified explanation:
– A Personal Data Server (PDS) stores all account data in a publicly accessible database
– A relay gathers all events (posts, likes, etc.) from all the PDSes in the network, and outputs a continuous stream of all events that happen on the entire network.
– An AppView takes in the data from the relay and processes it (such as creating a Discover feed and putting all the posts for the feed in the right order, or calculating the number of likes on a post).
– Labelers apply labels to posts and accounts. A label contains information on what an app should do with the content, such as hide it entirely or put a warning on it (a sketch of what such a label looks like follows after this list).
– A client or app takes the processed data from the AppView, as well as the labelers, and displays it to the user.
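To make the labeler part concrete, here is a rough TypeScript sketch of what a single label record looks like, loosely following the com.atproto.label.defs lexicon. The field names follow that schema, but the example values (the DIDs, the post URI, and the label value) are placeholders of my own, not actual records from Bluesky PBC’s labelers.

```typescript
// Rough shape of a label, loosely following com.atproto.label.defs#label.
interface Label {
  src: string;   // DID of the labeler that issued the label
  uri: string;   // what is labeled: an at:// record URI or an account DID
  cid?: string;  // optionally pins the label to a specific version of the record
  val: string;   // the label value, e.g. "rude", "porn", or a hide value
  neg?: boolean; // true if this label negates (retracts) an earlier one
  cts: string;   // timestamp of when the label was created
  exp?: string;  // optional expiry timestamp
}

// A hypothetical label hiding a single post, roughly as a geographic
// moderation labeler might emit it. All identifiers are placeholders.
const example: Label = {
  src: "did:plc:turkish-labeler-placeholder",
  uri: "at://did:plc:some-account/app.bsky.feed.post/3xyzexample",
  val: "!hide",
  cts: "2025-04-15T00:00:00.000Z",
};
```

A client that sees this label and is subscribed to the labeler identified by src is expected to hide the post; a client that is not subscribed simply ignores it.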
For clarification on terminology:
– Bluesky PBC refers to the company that builds the Bluesky network and the AT Protocol.
– Bluesky PBC runs the Bluesky AppView. This is a large piece of software that does all the technical work so that some 35 million accounts can access the network.
– Bluesky PBC operates three clients/apps: Bluesky on the web (bsky.app), and the apps for Android and iOS. The AppView, the clients, and the entire network can all be referred to as ‘Bluesky’. In this article, when I refer to ‘Bluesky’ I mean the entire network; if I mean the specific AppView or client, this is clarified.
Bluesky PBC employs three[1] main forms of moderation:
Bluesky PBC takes down a post or account, meaning removing the content/account from the PDS. This usually occurs in cases of significant rule violations with major impact. Bluesky PBC can only do this when the account in question is located on one of the PDSes owned by Bluesky PBC.
When this happens, the Bluesky moderation labeler applies a ‘takedown’ label to the account or post as well.
Bluesky PBC uses its moderation labeler to apply a label to the post or account. These labels apply to everyone who uses Bluesky. This happens in three cases:
The rule violation is less significant. For example, when people are being rude, Bluesky PBC can apply the ‘rude’ label to a post. Users can adjust their settings to show, warn, or hide posts with this label.
The post or account contains content that is allowed, but that needs to be hideable to comply with user preferences. Most notably this concerns sexual content. Sexual content and nudity are allowed on Bluesky, but Bluesky PBC applies a label such as ‘porn’ or ‘sexual’ to the content so people can filter it out if they so desire.
The account that violated Bluesky’s ToS is not located on a PDS owned by Bluesky PBC. As such, Bluesky PBC cannot take down the post or account. Instead, the moderation labeler applies a ‘takedown’ label to the post.[2]
Bluesky PBC uses its geographic moderation labelers to apply a ‘hide’ label to posts or accounts. This makes the content invisible to the people who are subscribed to the labeler, while it remains visible to accounts that are not subscribed to it.
Bluesky PBC uses these geographic moderation labelers to comply with local regulations. It only applies these labels if content violates local laws while being legal in other jurisdictions. An example of this is Germany’s Network Enforcement Act, which requires platforms to remove all illegal content within 7 days. Bluesky PBC has taken down 20 posts in Germany in 2025, and no posts before that.
Bluesky PBC currently operates geographic labelers for Germany, Brazil, Turkey, and Russia. There are also references to labelers for various other regions and countries on the network, but they are not active as of writing.
For this news about the Turkish government requesting accounts to be taken down, it is these geographic moderation labelers that are relevant. They were first launched in September 2024. Head of Trust & Safety Aaron Rodericks explains the feature:
“In some cases, content or accounts may be allowed under Bluesky’s Community Guidelines but violate local laws in certain countries. To balance freedom of speech with legal compliance, we are introducing geography-specific labels. When we receive a valid legal request from a court or government to remove content, we may limit access to that content for users in that area. This allows Bluesky’s moderation service to maintain flexibility in creating a space for free expression, while also ensuring legal compliance so that Bluesky may continue to operate as a service in those geographies. This feature will be introduced on a country-by-country basis, and we will aim to inform users about the source of legal requests whenever legally possible.”
The implication of what Rodericks writes is that Bluesky PBC judges a government’s request to remove content on its legal validity, and not on an ethical, moral, or other framework. If it is legally valid, Bluesky PBC will comply and limit access to that content in the applicable jurisdiction.
Moderation labels are (not) optional
In ATProto, labels applied by a labeler serve as recommendations for how clients should treat content. This is a direct result of all data on ATProto being public and locked open. Clients and apps are expected to follow a labeler’s output (hiding a post from users who are subscribed to the labeler that applied the ‘hide’ label), but clients can deviate from this if they want to.
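To illustrate what “following a labeler’s output” amounts to in practice, here is a minimal sketch of the kind of decision a client makes when rendering a post. The function, the preference map, and the label values are my own illustration, not Bluesky PBC’s actual client code; the point is simply that labels are data, and each client decides what to do with them.

```typescript
type Visibility = "show" | "warn" | "hide";

// Hypothetical per-label preferences; real clients derive these from the
// user's settings and from the definitions published by each labeler.
const preferences: Record<string, Visibility> = {
  rude: "warn",
  porn: "hide",
  "!hide": "hide", // label values the client treats as non-negotiable
};

// Decide how to display a post, given the labels attached to it and the set
// of labeler DIDs this user/client is subscribed to.
function decideVisibility(
  labels: { src: string; val: string }[],
  subscribedLabelers: Set<string>,
): Visibility {
  let result: Visibility = "show";
  for (const label of labels) {
    if (!subscribedLabelers.has(label.src)) continue; // ignore unsubscribed labelers
    const pref = preferences[label.val] ?? "show";
    if (pref === "hide") return "hide";
    if (pref === "warn") result = "warn";
  }
  return result;
}
```

Nothing in the protocol forces a client to run something like this for every labeler; that choice is made by whoever builds the client, which is exactly what the rest of this section is about.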
The Bluesky apps made by Bluesky PBC make their own moderation labeler mandatory. When using the official Bluesky apps, Bluesky’s moderation labeler is applied automatically, with no way to opt out. Thus, users of the official Bluesky apps cannot opt out of Bluesky PBC’s moderation decisions.
Additionally, the Bluesky apps enforce a mechanism where the client checks the user’s current IP address. When the user is in one of the regions with a geographic labeler, that labeler is also applied automatically and compulsorily. When someone uses the Bluesky app from a Turkish IP address, they are automatically subscribed to the Turkish moderation labeler, and posts made by accounts that the Turkish government has ordered removed are not visible. If the same account logs in from outside Turkey, it is no longer subscribed to the Turkish moderation labeler, and content that the Turkish government has ordered removed is visible again.[3]
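As an illustration of how a client could implement something like this: ATProto clients tell the AppView which labelers to apply via the atproto-accept-labelers HTTP header on their requests. The sketch below conditionally adds a geographic labeler based on a detected region. The region-detection function and the labeler DIDs are placeholders of my own; this is a sketch of the mechanism, not Bluesky PBC’s actual client code.

```typescript
// Placeholder DIDs; the real labelers are identified by their own DIDs.
const BLUESKY_MODERATION_DID = "did:plc:main-labeler-placeholder";
const TURKISH_LABELER_DID = "did:plc:turkish-labeler-placeholder";

async function detectRegion(): Promise<string> {
  // Placeholder: the official apps determine this from the user's IP address.
  return "TR";
}

async function fetchTimeline(appviewUrl: string, accessToken: string) {
  const labelers = [BLUESKY_MODERATION_DID]; // always applied in this sketch
  if ((await detectRegion()) === "TR") {
    labelers.push(TURKISH_LABELER_DID); // geographic labeler added compulsorily
  }

  // The client declares which labelers the AppView should apply to the response.
  return fetch(`${appviewUrl}/xrpc/app.bsky.feed.getTimeline`, {
    headers: {
      Authorization: `Bearer ${accessToken}`,
      "atproto-accept-labelers": labelers.join(", "),
    },
  });
}
```

A third-party client that simply never adds the geographic labeler to this list will show the hidden content, which is what the next paragraphs describe.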
However, other clients can make different decisions about which moderation labelers to use. Clients will (almost) always apply the main Bluesky moderation labeler: getting a social media app into the Google and Apple app stores requires moderation to be integrated into the app. Clients could build their own moderation labeler and use that instead, but moderating a network of 35 million accounts is difficult and expensive, while reusing the Bluesky moderation labeler is free. As a result, almost all third-party clients use the Bluesky moderation labeler.
Things are different when it comes to the geographic moderation labelers. This step is optional for clients, and most clients do not implement support for geolocating their users and mandating the geographic moderation labelers. As a result, using a different client is a simple way to bypass the geographic content restrictions. Using Bluesky with a Turkish IP address on most[4] other clients does show content that the government of Turkey has mandated to be taken down.
The impact
Government censorship of social media can best be understood as a way to minimise reach and virality, rather than a 100% effective way to prevent literally everyone from seeing the content. The practice of governments requesting content on social networks to be taken down is not new, and has happened on Big Tech platforms for a while. The Big Tech platforms usually take a similar approach, restricting access to posts and accounts only within the relevant jurisdiction. This means that VPNs allow people to sidestep these restrictions. Governments are largely fine with this, as the goal of the censorship is still reached: there are now meaningful barriers to viewing said content, and the large majority of people will likely not see it.
What makes Bluesky and ATProto different comes down to two aspects:
Restrictions on content can now be bypassed via other clients. There is no longer even a need for a VPN; simply using a different client is enough.
The content that governments want to censor is now easily accessible for the rest of the world. The output of the geographic moderation labelers is publicly accessible. Scanning services like PDSls or a tool like the Query Labeler Service give a complete and publicly accessible list of all content that is taken down for the respective jurisdiction. For example, the Query Labeler Service tool shows that the moderation-tr.bsky.app account (the handle for the Turkish moderation labeler) has hidden 18 accounts and 2 posts at the time of writing.
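Anyone can do this kind of lookup themselves: labelers publish their labels over the public com.atproto.label.queryLabels endpoint. The sketch below queries a labeler for everything it has labeled. The service URL and DID are placeholders rather than the real endpoints of Bluesky PBC’s labelers; the point is only to show how open this data is.

```typescript
// Minimal sketch: list the labels a labeler has published, via the public
// com.atproto.label.queryLabels endpoint. The service URL and labeler DID
// below are placeholders, not the real endpoints of Bluesky PBC's labelers.
const LABELER_SERVICE = "https://labeler.example.com";
const LABELER_DID = "did:plc:turkish-labeler-placeholder";

async function listLabels() {
  const params = new URLSearchParams({
    uriPatterns: "*",      // match everything this labeler has labeled
    sources: LABELER_DID,  // only labels issued by this labeler
    limit: "250",
  });
  const res = await fetch(
    `${LABELER_SERVICE}/xrpc/com.atproto.label.queryLabels?${params}`,
  );
  const { labels } = await res.json();
  for (const label of labels) {
    // Each label names the account DID or post URI it applies to.
    console.log(label.val, label.uri);
  }
}

listLabels();
```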
It is this new dynamic, of being able to see which content governments want hidden, that is most interesting to me. “The government does not want you to see this!!1!” is one of the most successful clickbait titles of all time, for good reason. And now we’re in a new situation where this is not necessarily clickbait anymore. Indeed, it is publicly visible which accounts and posts a government does not want people to see. Even more so, ATProto and Bluesky allow other people to build custom feeds and lists based on this information. I would not be surprised if we soon see a custom feed consisting specifically of the accounts that the government wants censored. How that changes the dynamic of Bluesky and this new generation of social networks remains to be seen.
The way that Bluesky and ATProto handle moderation and government takedown requests, and the ways these censorship demands can be sidestepped, also shines some more light on the concept of decentralisation. Decentralisation, in the sense of a technical description of a network that consists of multiple different interacting software platforms, is often lauded as a way to combat censorship. But how Bluesky and ATProto handle moderation, and the way that it can be sidestepped, show that this is not a hard requirement. Avoiding the censorship and being able to see the content that governments do not want seen turns out to only need open data, a compartmentalised moderation system, and an open API that allows third-party client apps.
However, focusing on the fact that it is technically possible to see the censored content misses that the censorship is still effective. The large majority of people will not actually use this option, if they are even aware of it. In order to build a network that does not have a single chokepoint where governments can apply pressure with censorship requests, people have to be spread out over different services. How ATProto and Bluesky handle moderation shows that this spread of people over different services does not have to mean multiple different platforms; having people meaningfully spread out over different clients would also be effective in this specific case. It shows that the value of a decentralised social network depends less on technical capabilities and more on the distribution of people and power. It matters less what the protocol and technical features of a network are capable of, and more to what extent people are clustered in a single group, whether that is a single service, platform, or client.
For now, it seems that Bluesky PBC and the Turkish government have reached a situation that is acceptable to both parties. The Turkish government has significantly restricted the visibility of accounts it deems unwanted. Sidestepping these restrictions remains an option, with new and easier ways to do so. But considering how dominant the default apps are and how few people use other apps, this seems likely to be an acceptable tradeoff for the government. For Bluesky PBC it seems to be an acceptable outcome as well. Not complying with the government order would risk the app being banned in the entire country. Using geographic moderation labelers provides compliance with the government order while minimising the impact: the accounts in question are still visible outside of the country, and people within Turkey have fairly accessible ways of sidestepping the ban.
Note: the regularly scheduled weekly Bluesky Report will be released tomorrow instead of today. This article was originally intended as part of the Bluesky Report, but I felt it was relevant enough to be released separately.
[1] This is not exhaustive. Bluesky PBC can also ban third-party PDSes from the relay that Bluesky uses. This is intended to prevent network-flooding spam and DDoS-type attacks. It is unclear if or when Bluesky PBC has actually used this option.
[2] Violating accounts can also be banned at the AppView level in this case. I’m unclear which option Bluesky PBC usually uses here; I think it is actually both, but I might be wrong on that.
[3] Users can also voluntarily subscribe to a geographic labeler.
[4] I have actually not found an example yet of a third-party client that does mandate the geographic moderation labelers.
https://fediversereport.com/bluesky-censorship-and-country-based-moderation/