On social media, attacks on LGBTQ people aren't always as direct as calling someone a name or threatening to hurt them — cloaking attacks in coded language, jokes, and dog whistles is a common method people use to try to get around platform rules. "There are many coded references, such as the number 41 percent – which is often used to say that a trans person should 'go 41 percent' – a reference to the rate of trans people who attempt suicide," says Alejandra Caraballo, clinical instructor at Harvard Law School and LGBTQ rights advocate. Another popular method is to intentionally misgender or deadname transgender people, according to Jenni Olson, senior director of social media safety at the LGBTQ advocacy nonprofit GLAAD. "We're talking about targeted misgendering and deadnaming, not accidentally getting someone's pronouns wrong," she says.
In April 2023, a Facebook user in Poland posted an image of a striped curtain in the colors of the trans pride flag: blue, pink, and white. The text overlaid on the photo read in Polish: "New technology. Hanging curtains." Another piece of text read, "Spring cleaning <3." The user's bio stated: "I'm transphobic." Despite multiple reports and user appeals, the post stayed up — until last fall, after Meta's independent Oversight Board took up a case about it.
Now, based on that post, the board has told the social media giant for the first time that it needs to strengthen how it protects the safety of LGBTQ people in the real world — specifically, by better enforcing its existing policies on hate speech and references to suicide. "What we're asking for is more attention to the treatment of LGBTQIA+ people on the platforms because, as this case has shown, there is violent and unacceptable discrimination against the community on social media," Oversight Board member and constitutional law professor Kenji Yoshino tells Rolling Stone. "Social media is a place where LGBTQIA+ individuals go for safety, often when they are worried about navigating their physical space, so the idea that there would be this level of aggressive hatred towards them is all the more distressing and unacceptable."
In a decision released Tuesday, the board wrote: "Meta is failing to live up to its stated ideals of LGBTQIA+ safety. The Board urges Meta to close these enforcement loopholes." The board also said that Meta had repeatedly failed to crack down on attacks against the LGBTQ community that use coded references or satirical memes to get past moderators, a practice referred to as "malign creativity," and that the company had failed to enforce its policies on hate speech and references to suicide on its platforms. The board recommended that Meta strengthen its enforcement practices to stop anti-LGBTQ hate from proliferating on its platforms.
The Oversight Board, launched in 2020 and comprised of an impressive roster of experts on human rights, free speech, government, law, and ethics, has been likened to a Supreme Court or human rights court for Facebook and Instagram. It is a separate entity from Meta and rules on the appropriateness of moderation choices made by the social media behemoth, with the power to overturn Meta's decisions on whether content is removed or allowed to remain online. In addition to issuing content decisions, the board also issues recommendations on how Meta can adjust its policies to balance online safety with users' freedom of expression. This case is the first time the board has issued recommendations to Meta specifically aimed at better protecting LGBTQ people in the real world.
On Tuesday, Meta issued a statement saying it welcomed the board's decision. "The board overturned Meta's original decision to leave this content up. Meta previously removed this content, so no further action will be taken on it," the statement read in part. Meta must respond to the board's recommendations within 60 days.
"The case shines a light on Meta's failure to enforce its own policies, something we've been highlighting for years, so it's very validating and gratifying to see what's being expressed by the board," says GLAAD's Olson. The organization submitted a statement during the Oversight Board's public comment period for this case. "We found that there is very real harm to LGBT people from this kind of hateful behavior online," Olson adds, citing GLAAD's annual Social Media Safety Index report.
"The post is a reference to the high rate of suicide attempts among trans people, so it's a joke that makes fun of trans people for suicide," says Caraballo, who helped GLAAD draft its public comment, "which is explicitly against the community guidelines, but something their automated systems wouldn't be able to understand the nuance of." The image and the text overlaid on it would have had to be considered together to understand the intent of the post.
The curtain picture didn't just slip past the automated review systems. According to the board's report, although the post drew just 50 reactions, 11 users reported it for breaching either the hate speech or the suicide and self-harm community standards — which prohibit attacks on people based on their race, gender identity, or other "protected characteristics," and prohibit celebrating, promoting, or encouraging suicide. Some of these reports were reviewed by human moderators, but the post was found not to violate the policies and was allowed to remain up, despite multiple appeals from users who reported it. After the Oversight Board announced it would take up the case in September 2023, Meta permanently removed the post — and banned the user for past violations.
Some users on Meta's platforms face attacks like this every day. Alok Vaid-Menon, a gender non-conforming comedian and poet with 1.3 million followers on Instagram, says they are often the target of "animus" online. "Despite the fact that this vitriol frequently violates Meta's policies, it is allowed to persist," they say. "My peers and I have countless stories of reporting violent threats that are never addressed. This rampant lack of accountability and the platforming of anti-LGBTQ extremism endangers our safety online and offline and has contributed to the escalation of anti-trans discrimination around the world."
Caraballo agrees. "Meta has consistently failed to treat anti-LGBTQ, and particularly anti-trans, content as a violation of its community standards, and has consistently allowed some of the worst anti-LGBTQ accounts to target and abuse LGBTQ people and promote horrific conspiracy theories that LGBTQ people are groomers."
The Oversight Board suggested the company clarify its enforcement guidance to specify that a flag representing a protected group of people can stand in for the people in that group — and that an image need not depict a human figure to represent those people. "The real mystery of this case is why there was such a huge gap between the policies as stated and their enforcement," says Yoshino. "The board's view was that Meta has the right ideals, it's just living under those ideals, rather than living up to them. And what we can do, that I don't think an ordinary member of the public can do, is say to Meta, 'You need to do better.'" He offered several ideas, including strengthening moderator training specifically around gender-identity issues; creating a working group on the experiences of trans and non-binary people on Meta's platforms; and assembling a panel of experts to review content affecting the LGBTQ community.
Vaid-Menon echoes the need to involve the LGBTQ community in efforts to improve moderation. "This incident should have been addressed by Meta's moderation team," they say, referring to the curtain post. "The fact that it wasn't shows the company's continued failure to enforce its own hate speech policies. It does not matter what is written down if it is not actually enforced. What is needed going forward is for Meta to provide better training and guidance to its moderators on transphobia. This can best be done in collaboration with transgender and gender non-conforming creators and organizations."
From our partners at Rolling Stone: https://www.rollingstone.com/culture/culture-features/meta-oversight-board-social-media-failing-lgbtq-1234947358/