“Nothing to see here” — Twitter ditches Trust and Safety Council as Musk tweets fuel harassment Twitter thanks the council for their service, while members fear retaliation.
Ashley Belanger – Dec 13, 2022 4:29 pm UTC
Yesterday, Twitter safety chief Ella Irwin was supposed to meet with Twitter's independent Trust and Safety Council by Zoom for an open conversation and Q&A, AP News reported. Instead, council members received an email dismissing them entirely.
Twitter has declared that it is officially in a new phase when it comes to trust and safety.
“We are reevaluating how best to bring external insights into our product and policy development work,” an email, simply signed “Twitter,” informed the council. “As part of this process, we have decided that the Trust and Safety Council is not the best structure to do this.”
Established in 2016, the council brought together dozens of experts on harassment, safety, human rights, suicide prevention, child sexual exploitation, and policy development. Today, instead of clearly outlined priorities detailing best guidance for Twitter content decisions, any visitors to the now-deleted Trust and Safety Council webpage will find an error message: “Nothing to see here.”
The decision comes days after three council members resigned, declaring in a letter that “a Twitter ruled by diktat is not a place for us.” The Washington Post reported that many members were already on the verge of resigning, with one council member, Larry Magid, chief executive of ConnectSafely, suggesting that Twitter acted promptly to save face. “By disbanding it, we got fired instead of quit,” Magid told The Post.
In its letter to the council, Twitter suggested that the council was bogging down decision-making. Wired previously reported that former Twitter CEO Jack Dorsey was criticized for ignoring the council, and during his takeover, current CEO Elon Musk reportedly did the same, preferring to make decisions via informal, seemingly bot-plagued Twitter polls rather than considered discussions with external experts. Twitter indicated in its letter that it won’t be Musk alone making decisions from now on, claiming that Twitter will be moving faster and more aggressively than ever before by fielding more timely input through bilateral or small group meetings.
Irwin recently confirmed that Twitter would be relying more on automated content removal, and a @TwitterSafety thread posted this weekend clarified Twitter's recent efforts to limit the reach of hate speech on the platform rather than remove it. For example, rather than remove tweets with slurs, the thread says, Twitter plans to put effort into identifying appropriate contexts of slurs, tracking every slur that appears on Twitter but only limiting what's deemed inappropriate. (Irwin did not respond to Ars' tweets to clarify how this limited reach will impact Twitter Blue subscribers, who pay for expanded reach on tweets.)
The Post interviewed several former council members, with some remaining anonymous out of fear that Musk could signal to his many followers to target them for harassment. That's what happened when Musk replied to a tweet from resigning council members, accusing them of the crime of not taking action on child exploitation for years. This, The Post reported, unleashed a wave of threats and harassment.
One organization formerly involved with the council, the Center for Democracy and Technology, issued a statement criticizing Musk for “irresponsible actions to spread misinformation about the Council, which have endangered Council members and eroded any semblance of trust in the company.”
Their concerns about retaliation have been validated by other recent reports. Twitter's former head of trust and safety, Yoel Roth, was in a similarly uncomfortable position when Musk tweeted to spread a rumor that Roth had conducted PhD research advocating child sexual exploitation. In his tweet, Musk was either misreading or willfully mischaracterizing an Internet safety study in which Roth tracked data on risks to young users navigating Grindr. A person close to Roth told CNN yesterday that after Musk started pushing the pedophilia conspiracy theory, the threats that Roth received on Twitter “escalated exponentially,” forcing him and his family to flee his home.
While Musk has made it clear that he considers blocking child sexual abuse materials Twitter's top priority, disbanding the council risks distancing Musk from partnerships with council members like the National Center for Missing & Exploited Children. NCMEC's vice president, Gary Portnoy, told The Post that his organization will watch how Twitter restructures while it continues to encourage reporting to the CyberTipline and hopes to continue to have a seat at the table to address child safety on Twitter.
Patricia Cartes, a former Twitter employee who helped put the council together in 2016, tweeted to explain the benefits that Musk may be overlooking in disbanding the council. According to Cartes, the council's global team helped Twitter capture feedback from safety experts across geographies and time zones, flagging edge cases that helped advance safety policy.
“Their constructive criticism objectively made us better and fostered a safer environment on Twitter,” Cartes tweeted. “Without their input and a structure in place to capture their expertise, it is unclear how Twitter Safety will be kept accountable and what checks and balances will look like.”
Twitter did not respond to Ars' request for comment.

Ashley Belanger is the senior tech policy reporter at Ars Technica, writing news and feature stories on tech policy and innovation. She is based in Chicago. Email ashley.belanger@arstechnica.com // Twitter @ashleynbelanger