Over the past year, OpenAI has cemented its place as one of the most powerful tech startups in the world.
Its release of ChatGPT heralded an artificial intelligence revolution that sent shockwaves through nearly every industry, and the public remains both enamored of and terrified by the possibilities it has unleashed.
But the company creating this technology, with an estimated valuation as high as $90 billion, has also come under fire recently for a glaring lack of diversity within its current governing body.
After a chaotic boardroom upheaval last month that saw CEO Sam Altman ousted and reinstated within a week, OpenAI has said it is back to focusing on its core mission with a reconstituted board of directors.
The saga resulted in the departure of the board’s only women directors, and it now consists of just three white men. Two of them largely fit the mold of a Silicon Valley “tech bro.” The third, an East Coast economist, has made controversial statements about women in the past.
The board’s lack of diversity appears to be at odds with OpenAI’s publicly stated mission, which the company says is meant to ensure that artificial general intelligence “benefits all of humanity.”
A growing chorus of voices inside and outside the tech industry is now questioning how OpenAI can achieve this lofty goal without including people from diverse backgrounds on its governing body. And critics are increasingly pointing out that the stakes could not be higher.
Even lawmakers in Washington are starting to raise alarms over this issue.
“We strongly encourage OpenAI to move expeditiously in diversifying its board,” Reps. Emanuel Cleaver (D-Mo.) and Barbara Lee (D-Calif.) wrote to Altman and the board earlier this week in a letter that was obtained by CNN.
“The AI industry’s lack of diversity and representation is deeply intertwined with the problems of bias and discrimination in AI systems,” the duo of Black lawmakers added.
Margaret Mitchell, a longtime AI researcher who founded Google’s Ethical AI team before being fired amid a controversy that rocked the tech industry in 2021, told CNN that the only way to advance AI in a way that benefits people all over the world is to have people with different life experiences at the table.
“I don’t have faith that OpenAI, as I currently understand it, is well placed to create technology that ‘benefits all of humanity,’” Mitchell, who currently works as the chief ethics scientist at developer-focused AI firm Hugging Face, told CNN. “In part because even that phrase suggests a more reductive approach to what humanity wants.”
If anything, she said, the mission “reminds me more of things like white savior complex,” referring to the belief some white people develop that it is their role to know what is best for communities of color and to protect them.
“If we’re trying to achieve technology that reflects the viewpoints of predominantly rich, white men in Silicon Valley, then we’re doing a great job at that,” Mitchell said. “But I would argue that we could do better.”
AI-powered tools are already infiltrating key areas of people’s everyday lives.
They are “determining who gets hired, who gets medical insurance, who gets a mortgage, and even who gets a date,” said Dr. Joy Buolamwini, the founder of the Algorithmic Justice League, an organization tracking the harms of artificial intelligence.
“When AI systems are used as the gatekeeper of opportunities, it is critical that the oversight of the design, development, and deployment of these systems reflect the communities that will be impacted by them,” Buolamwini, who is also the author of “Unmasking AI: My Mission to Protect What is Human in a World of Machines,” added.
With women and people of color currently comprising the “global majority,” she added, “their absence in AI governance at any level undermines efforts to build robust and responsible AI systems.”
At the same time, Buolamwini pointed out that research continues to show racism and sexism “are being baked into AI systems.”
Large language models, the technology underpinning generative AI tools like ChatGPT, are trained on vast troves of data. Because much of that data was written by humans and drawn from the internet, generative AI tools risk further spreading the all-too-human biases already entrenched in internet discourse, but at a frighteningly large scale.
OpenAI, for its part, has said that the current board (which consists of Bret Taylor, former co-chief executive at Salesforce and former board chair at Twitter; former Treasury Secretary Larry Summers; and Adam D’Angelo, chief executive of online Q&A platform Quora) is only “initial.”
Summers famously caused an uproar at Harvard nearly 20 years ago when he made comments, for which he later apologized, suggesting that innate differences between the sexes were holding back women in science and engineering.
Taylor, the chair of the board, said in a statement to CNN through a representative that, “Of course, Larry, Adam and I strongly believe diversity is essential as we move forward in building the OpenAI board.”
“We are committed to forming a diverse board,” the statement added. The company has not offered a timeline for when it will bring on new board members.
In a blog post announcing his return as chief executive, Altman said one of the new board’s first priorities is the “extremely important task of building out a board of diverse perspectives.”
As has become a trend in Silicon Valley and across the corporate landscape, OpenAI publicly touts an ongoing “investment in diversity, equity and inclusion.”
The company says this is “executed through a wide range of initiatives, owned by everyone across the company, and championed and supported by leadership.”
And outside its board, OpenAI’s leadership team does include several women, among them chief technology officer Mira Murati. For a brief moment amid the chaos, before Altman’s return, Murati was named interim CEO.
Some news reports suggest former board members Tasha McCauley and Helen Toner were involved in voting Altman out of the company after clashes between Toner and Altman. The public ousting of the only two women then on the board prompted all kinds of questions about the inner workings of the privately held juggernaut.
In the weeks since the closely watched leadership overhaul wrapped up just before Thanksgiving, some industry watchers have begun imagining what a new board could look like and how OpenAI might better achieve its stated mission.
As the company looks to add diverse perspectives to its board, Buolamwini notes that it’s important to “keep in mind that having a seat at the table is not enough” if it doesn’t also come with “decision-making power.”
“Being in the room as just window dressing feeds into tokenism,” she said, and empty representation can be just as harmful as no representation at all because it can be used “to thwart scrutiny without making change.”
Mitchell added that OpenAI could also start diversifying its board by looking beyond players in the tech industry and recruiting outsiders who aren’t afraid to point out the perspectives Silicon Valley’s elite might miss.
“It’s by ruffling feathers that we can fundamentally change the system to be more inclusive,” Mitchell said.
If OpenAI truly seeks to achieve its mission, critics say, it could also start by looking at where its technology is already causing outsized harm.
Artists and creatives, for example, have spent the past year fighting for their future amid the proliferation of AI tools that threaten not only to put them out of work but to copy their creative likeness. OpenAI could begin to address that conflict by giving a working artist a seat on its board, Mitchell suggested.
With AI tools expected to upend much of how we work in the coming years, some suggest it would be beneficial to hear from labor leaders about how best to ensure people don’t lose their livelihoods as the technology becomes more powerful.
Microsoft, a major backer of OpenAI with a $13 billion investment, took a surprising step in this direction earlier this week when it announced a first-of-its-kind partnership with the AFL-CIO.
The software giant promised the alliance includes an “open dialogue” with union leaders about AI’s impact on the future of work.
Ultimately, Buolamwini notes that impactful AI governance “is not about one company or one board.”
“Self-regulation for such consequential technology is not sufficient,” Buolamwini added. “I would challenge governments around the world to put in serious legislation that protects people from AI harms.”