The Free Software Foundation Europe (FSFE) deleted its account on X
Nature and trajectory of X/Twitter
- Several commenters argue Twitter “always” had hostility and manipulation; Musk’s tenure made that manipulation blatant, especially with an algorithm that rewards outrage-bait and blue-check clickbait over substantive posts.
- Others say their curated feeds (mostly artists/tech) remain fine and that any large social media platform has hate; they see Mastodon as at least as full of political hostility.
- Disagreement over whether X is uniquely bad or simply less censorious of right/anti‑establishment views than competitors.
FSFE’s decision: principles vs reach
- Supporters see leaving X as consistent with free software, privacy, and anti‑centralization values, especially given perceived increases in hate, misinformation, and profit‑driven control.
- Critics note Twitter was always proprietary and misaligned with those values; they question “why now” and see the announcement as political positioning rather than a free‑software‑driven decision.
- Some frame it as a moral boycott of Musk; others say organizations are allowed to factor staff well‑being and hostility into their calculus, not just mission reach.
Effectiveness, audience, and fragmentation
- Many argue departure reduces FSFE’s ability to reach “normal people,” who are unlikely to follow them to Mastodon or PeerTube; some think this makes the organization more insular and less relevant.
- Others counter that X’s algorithmic throttling and toxic replies meant their real reach there may already have been minimal.
- Broader discussion notes the post‑2019 fragmentation of audiences across Discord, Instagram, Bluesky, Mastodon, etc., eroding Twitter’s former centrality.
Moral responsibility, Musk, and complicity
- A strong contingent views continued X use as tacit support for Musk, described by some as racist, fascist, or dangerously manipulative; every click is framed as funding him.
- Opponents reject this as performative purity politics, insisting individuals and orgs should choose platforms pragmatically without being shamed.
Safety, hate, and moderation
- Personal reports describe slower or weaker enforcement against slurs and antisemitism post‑Musk, plus algorithmic boosting of paying users and of Musk himself.
- Others insist X is no worse than Reddit or Facebook, and that “racism and hate” is sometimes used as shorthand for unpopular opinions.
- Some see a chilling trend of organizations prioritizing “misinformation” and hate‑speech policing over remaining ideologically open and focused on software freedom.
Account squatting and practicalities
- Debate over whether FSFE should have left a non‑monitored placeholder account to prevent impersonation, given X’s handle re‑use policies.
- Some argue even a placeholder lends legitimacy to a platform they want to delegitimize; others think a redirect-with-explanation would have been safer for users.