
OpenAI founders Sam Altman and Greg Brockman go on the defensive after top safety researchers quit


Two of OpenAI's founders, CEO Sam Altman and President Greg Brockman, are on the defensive after a shake-up in the company's safety department this week.

The company's chief scientist, Ilya Sutskever, who is also a founder, announced on X on Tuesday that he was leaving. Hours later, his colleague, Jan Leike, followed suit.

Sutskever and Leike led OpenAI's superalignment team, which was focused on developing AI systems compatible with human interests. That sometimes placed them in opposition to members of the company's leadership who advocated for more aggressive development.

"I have been disagreeing with OpenAI leadership about the company's core priorities for quite some time, until we finally reached a breaking point," Leike wrote on X on Friday.

Sutskever was among the board members who tried to oust Altman as CEO in November, though he later said he regretted the move.

After their departures, Altman called Sutskever "one of the greatest minds of our generation" and said he was "super appreciative" of Leike's contributions in posts on X. He also said Leike was right: "We have a lot more to do; we are committed to doing it."

But as public concern continued to mount, Brockman offered more details on Saturday about how OpenAI will approach safety and risk moving forward — especially as it develops artificial general intelligence and builds AI systems that are more sophisticated than chatbots. 

In a nearly 500-word post on X that both he and Altman signed, Brockman addressed the steps OpenAI has already taken to ensure the safe development and deployment of the technology.

We're really grateful to Jan for everything he's done for OpenAI, and we know he'll continue to contribute to the mission from outside. In light of the questions his departure has raised, we wanted to explain a bit about how we think about our overall strategy.

First, we have… https://t.co/djlcqEiLLN

— Greg Brockman (@gdb) May 18, 2024

"We've repeatedly demonstrated the incredible possibilities from scaling up deep learning and analyzed their implications; called for international governance of AGI before such calls were popular; and helped pioneer the science of assessing AI systems for catastrophic risks," Brockman wrote.

Altman recently said the best way to regulate AI would be an international agency that ensures reasonable safety testing but also expressed wariness of regulation by government lawmakers who may not fully understand the technology. 

Brockman said OpenAI has also established the foundations for safely deploying AI systems more capable than GPT-4.

"As we build in this direction, we're not sure yet when we'll reach our safety bar for releases, and it's ok if that pushes out release timelines," Brockman wrote.

Brockman and Altman added in their post that the best way to anticipate threats is through a "very tight feedback loop, rigorous testing, careful consideration at every step, world-class security, and harmony of safety and capabilities," as well as collaborating with "governments and many stakeholders on safety."

But not everyone is convinced that the OpenAI team is moving ahead with development in a way that ensures the safety of humans — least of all, it seems, the people who until a few days ago led the company's efforts in that regard.

"These problems are quite hard to get right, and I am concerned we aren't on a trajectory to get there," Leike said.
