
Under pressure

Report: Microsoft cut a key AI ethics team

Expert calls decision “damning,” says it’s time for regulators to get involved.

Ashley Belanger – Mar 14, 2023 10:09 pm UTC

An entire team responsible for making sure that Microsoft’s AI products ship with safeguards to mitigate social harms was cut during the company’s most recent layoff of 10,000 employees, Platformer reported.

Former employees said that the ethics and society team was a critical part of Microsoft’s strategy to reduce risks associated with using OpenAI technology in Microsoft products. Before it was killed off, the team developed an entire responsible innovation toolkit to help Microsoft engineers forecast what harms could be caused by AI, and then to diminish those harms.

Platformer’s report came just before OpenAI released possibly its most powerful AI model yet, GPT-4, which is already helping to power Bing search, Reuters reported.

In a statement provided to Ars, Microsoft said that it remains committed to developing AI products and experiences safely and responsibly, and does so by investing in people, processes, and partnerships that prioritize this.

Calling the work of the ethics and society team “trailblazing,” Microsoft said that the company had focused more over the past six years on investing in and expanding the size of its Office of Responsible AI. That office remains active, along with Microsoft’s other responsible AI working groups, the Aether Committee and Responsible AI Strategy in Engineering.

Emily Bender, a University of Washington expert on computational linguistics and ethical issues in natural-language processing, joined other critics tweeting to denounce Microsoft’s decision to dissolve the ethics and society team. Bender told Ars that, as an outsider, she thinks Microsoft’s decision was “short-sighted.” She added, “Given how difficult and important this work is, any significant cuts to the people doing the work is damning.”

Brief history of the ethics and society team

Microsoft began focusing on teams dedicated to exploring responsible AI back in 2017, CNBC reported. By 2020, that effort included the ethics and society team with a peak size of 30 members, Platformer noted. But as the AI race with Google heated up, Microsoft began moving the majority of the ethics and society team members into specific product teams last October. That left just seven people dedicated to implementing the ethics and society team’s ambitious plans, employees told Platformer.

It was too much work for a team that small, and Platformer reported that former team members said that Microsoft didn’t always act on their recommendations, such as mitigation strategies recommended for Bing Image Creator that might stop it from copying living artists’ brands. (Microsoft has disputed that claim, saying that it modified the tool before launch to address the team’s concerns.)

While the team was being reduced last fall, Platformer said that Microsoft’s corporate vice president of AI, John Montgomery, said that there was great pressure to take these most recent OpenAI models and the ones that come after them and move them into customers’ hands at a very high speed. Employees warned Montgomery of significant concerns they had about potential negative impacts of this speed-based strategy, but Montgomery insisted that the pressures remain the same.

Even as the ethics and society team’s size dwindled, however, Microsoft told the team it wouldn’t be eliminated. The company announced a change on March 6, though, when the remnants of the team were told during a Zoom meeting that it was considered business critical to dissolve the team entirely.

Bender told Ars that the decision is particularly disappointing because Microsoft managed to gather “some really great people working on AI, ethics, and societal impact for technology.” She said that, for a while, it seemed like the team was “actually even fairly empowered at Microsoft.” But Bender said that with this move, Microsoft “basically says” that if the company perceives the ethics and society team’s recommendations “as contrary to what’s gonna make us money in the short term, then they gotta go.”

To experts like Bender, it seems like Microsoft is now less interested in funding a team dedicated to telling the company to slow down when AI models might carry risks, including legal risks. One employee told Platformer that they wondered what would happen to both the brand and users now that there was seemingly no one left to say no when potentially irresponsible designs were pushed to users.

“The worst thing is we’ve exposed the business to risk and human beings to risk,” one former employee told Platformer.

The shaky future of responsible AI

When the company relaunched Bing with AI, users quickly discovered that the Bing Chat tool had unexpected behaviors: generating conspiracies, spouting misinformation, and even seemingly slandering people. Up until now, tech companies like Microsoft and Google have been trusted to self-regulate releases of AI tools, identifying risks and mitigating harms. But Bender, who coauthored the paper with former Google AI ethics researcher Timnit Gebru that resulted in Gebru being fired for criticizing the large language models that many AI tools depend on, told Ars that self-regulation as a model is not going to work.

“There has to be external pressure to invest in responsible AI teams,” Bender told Ars.

Bender advocates for regulators to get involved at this point, if society wants more transparency from companies amidst the current wave of AI hype. Otherwise, users risk jumping on bandwagons to use popular tools, like they did with AI-powered Bing, which now has 100 million monthly active users, without a solid understanding of how they could be harmed by those tools.

“I think that every user who encounters this needs to have a really clear idea of what it is that they’re working with,” Bender told Ars. “And I don’t see any companies doing a good job of that.”

Bender said it’s frightening that companies seem consumed by capitalizing on AI hype, which claims that AI is going to be as big and disruptive as the Internet was. Instead, companies have a duty to think about what could go wrong.

At Microsoft, that duty now falls to the Office of Responsible AI, a spokesperson told Ars.

“We have also increased the scale and scope of our Office of Responsible AI, which provides cross-company support for things like reviewing sensitive use cases and advocating for policies that protect customers,” Microsoft’s spokesperson said.

To Bender, a better solution than depending on companies like Microsoft to do the right thing is for society to advocate for regulations, “not to micromanage specific technologies, but rather to establish and protect rights in an enduring way,” she tweeted.

Until there are proper regulations in place, more transparency about potential harms, and better information literacy among users, Bender recommends that people never rely on AI for medical advice, legal advice, psychotherapy, or other sensitive applications.

“It strikes me as very, very short-sighted,” Bender said of the current AI hype.
