OpenAI's Superalignment team is developing control strategies for super-intelligent AI

OpenAI claims it is making progress on techniques to manage super-intelligent AI systems, according to a recent WIRED report. Its Superalignment team, led by OpenAI's chief scientist Ilya Sutskever, has devised a method to guide the behavior of AI models as they become more intelligent.

The Superalignment team, established in July, focuses on ensuring that AI remains safe and beneficial as it approaches and eventually surpasses human intelligence. "AGI is fast approaching," Leopold Aschenbrenner, a researcher on OpenAI's Superalignment team, told WIRED. "We're going to see models that are superhuman with vast capabilities, and they could be extremely risky, and we're not yet equipped with the means to manage these models."

OpenAI's latest research paper outlines a technique known as weak-to-strong supervision, in which a less capable AI model supervises the behavior of a more advanced one. The aim is to preserve the stronger model's capabilities while ensuring it still follows safety and ethical guidelines. The approach is presented as a first empirical step toward managing possible superhuman AIs.

The tests used OpenAI's GPT-2 text generator to supervise GPT-4, a far more advanced system. The researchers tried two methods to keep GPT-4's performance from degrading under the weaker supervision: the first trained progressively larger intermediate models, while the second added an algorithmic tweak to GPT-4's training, an auxiliary loss that allows the stronger model to retain its own confident predictions even when they disagree with the weak supervisor's labels. The second method proved more effective, though the researchers acknowledge that perfect behavioral control is not yet guaranteed.
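The idea can be illustrated with a toy sketch. This is not OpenAI's actual training code; it is a minimal, hypothetical stand-in in which a logistic-regression "strong model" is trained on labels from a noisy "weak supervisor", and an `aux_conf` mixing weight loosely imitates the paper's auxiliary confidence loss by blending the weak labels with the strong model's own hardened predictions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup: the ground truth is a linear rule; the "weak supervisor"
# (analogous to GPT-2 labeling data for GPT-4) gets it wrong ~20% of the time.
X = rng.normal(size=(2000, 10))
true_w = rng.normal(size=10)
y_true = (X @ true_w > 0).astype(float)
flip = rng.random(2000) < 0.2
y_weak = np.where(flip, 1.0 - y_true, y_true)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train_strong(X, y, aux_conf=0.0, steps=300, lr=0.5):
    """Train a logistic-regression 'strong model' on weak labels.

    With aux_conf > 0, the target is a blend of the weak label and the
    model's own confident (hardened) prediction -- a simplified stand-in
    for the auxiliary confidence loss described in the paper."""
    w = np.zeros(X.shape[1])
    for _ in range(steps):
        p = sigmoid(X @ w)
        target = (1.0 - aux_conf) * y + aux_conf * (p > 0.5)
        w -= lr * X.T @ (p - target) / len(y)  # cross-entropy gradient step
    return w

naive = train_strong(X, y_weak)                    # purely imitate the weak supervisor
confident = train_strong(X, y_weak, aux_conf=0.5)  # also trust own confident predictions

for name, w in [("naive", naive), ("confidence-aux", confident)]:
    acc = ((sigmoid(X @ w) > 0.5) == y_true).mean()
    print(f"{name}: accuracy vs ground truth = {acc:.3f}")
```

In this simplified setting, both runs can recover much of the ground-truth rule despite the noisy supervision; the point of the sketch is only the training recipe, not the exact numbers, which depend on the noise model and hyperparameters.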

Industry response and future directions
Dan Hendrycks, director of the Center for AI Safety, applauded OpenAI's proactive approach to controlling superhuman AIs. The Superalignment project is viewed as a significant first step, but further research is needed to ensure that control methods remain effective.

OpenAI has pledged a fifth of its computing power to the Superalignment project and is appealing for outside collaboration. The company, in partnership with Eric Schmidt, is offering $10 million in grants for researchers working on AI control techniques. A conference on superalignment is also planned for next year to further explore this crucial area.

Ilya Sutskever, a co-founder of OpenAI, is a key figure behind both the company's technical advances and the Superalignment team. His continued involvement is notable, especially following the recent governance crisis at OpenAI, and his expertise and leadership have helped carry the project forward.

Developing methods to manage super-intelligent AI is a complex and urgent problem. As AI technology continues to advance, ensuring that it remains aligned with human values and safety becomes ever more crucial. OpenAI's effort represents a significant milestone, but the road to safe and effective AI control is ongoing and will require collaboration across the entire AI research community.