Major social media platform X has become a haven for content promoting dangerous eating disorders, with minimal content moderation allowing such material to spread freely and even be algorithmically recommended to users, according to researchers and platform experts.
X's approach stands in stark contrast to other social media sites like Instagram and TikTok, which have implemented barriers to accessing eating disorder content. When users search for related terms on these platforms, they encounter warning screens directing them to mental health resources.
On X, however, searching for eating disorder-related terms leads directly to accounts, posts, and community recommendations promoting dangerous behaviors. The platform's algorithm suggests this content to users after minimal engagement with such material, despite X's official policy prohibiting the promotion of self-harm.
Rumman Chowdhury, a former safety team leader at pre-Musk Twitter, attributes this issue to substantial reductions in content moderation staff. "These teams whose full-time job it was to prevent harmful content simply are not really there," Chowdhury said, explaining that these employees were either terminated or their teams were drastically reduced following Elon Musk's acquisition.
According to The Atlantic, platform experts say the proliferation of this content represents a broader regression in social media content moderation. While major platforms had previously increased moderation efforts in response to various events, including the 2016 presidential election and the coronavirus pandemic, many have now pulled back following criticism from politicians who equate moderation with censorship.
The scale of the problem is evident in the rapid growth of pro-eating disorder communities on X. A single group identified by researchers grew from 74,000 to more than 88,000 members in just a few weeks. These communities openly use terms like "proana" and "thinspo," alongside more extreme variations that romanticize severe health outcomes.
Kristina Lerman, a professor at the University of Southern California studying pro-anorexia rhetoric on X, describes the situation as "an echo chamber, this highly interlinked community." Her research team is finalizing a paper examining how this content circulates on the platform.
The issue presents unique moderation challenges. Unlike hate speech or harassment, eating disorder content is rarely reported by users within these communities. Content creators actively work to evade detection by developing coded language and new terminology to circumvent platform restrictions.
Vaishnavi J, a youth safety expert who previously worked at Twitter and Instagram, suggests the problem lies partly in X's recommendation algorithms, which she says "highly value engagement, and ED content is very popular." The expert requested partial anonymity due to concerns about targeted harassment.
"Despite what you might say about Musk," Vaishnavi J added. "I think if you showed him the kind of content that was being surfaced, I don't think he would actually want it on the platform."
Amanda Greene, who researched online eating disorder content at the University of Michigan, points to recommendation algorithms as a key factor in the content's spread. "It is one thing to have this stuff out there if you really, really search for it. It is another to have it be pushed on people," Greene said.
The content often takes a notably harsh tone, featuring what communities call "meanspo," or "mean inspiration," in which users encourage harmful behaviors through deliberately cruel messages. This approach represents a return to older styles of pro-eating disorder content that had previously declined under stricter moderation practices.
While X has occasionally taken action against prominent pro-eating disorder groups, new communities have quickly emerged to replace them. NBC News reporter Kat Tenbarge documented this pattern, noting that after X removed one of its largest pro-eating disorder groups, similar communities rapidly appeared in its place.
X did not respond to requests for comment about its handling of eating disorder content, and Elon Musk did not reply to inquiries about the platform's position on this issue. The platform's published policies continue to prohibit the encouragement of self-harm, though enforcement appears limited.
The situation highlights the ongoing challenges social media platforms face in moderating content that exists in a gray area between harmful promotion and cultural commentary on body image and health. As platforms grapple with these nuanced issues, the impact of reduced moderation efforts becomes increasingly apparent in online spaces.