Who’s the boss? OpenAI board members can now overrule Sam Altman on safety of new AI releases

By ProductSellerMarket

OpenAI has published guidelines stating that its board has the authority to block the release of an AI model, even if the company’s leadership deems it safe. The move gives directors an explicit check on the deployment of advanced AI technology.

The guidelines, released on Monday, describe how OpenAI plans to address extreme risks associated with its most powerful AI systems.

Following recent leadership turmoil that saw CEO Sam Altman briefly ousted by the board, OpenAI is emphasizing a balance of power between directors and the company’s executive team.

The guidelines outline the responsibilities of OpenAI’s “preparedness” team, which evaluates AI systems across four risk categories, including cybersecurity and chemical, nuclear, and biological threats.

The company is particularly vigilant about “catastrophic” risks, defined as those with the potential for hundreds of billions of dollars in economic damage or severe harm or death to many individuals.

The preparedness team, led by Aleksander Madry from the Massachusetts Institute of Technology, will submit monthly reports to a new internal safety advisory group.

This advisory group will analyze the team’s findings and provide recommendations to Altman and the board.

While Altman and his leadership team can decide whether to release a new AI system based on these reports, the board has the authority to override that decision, according to the guidelines.

OpenAI established the “preparedness” team in October, forming one of three groups overseeing AI safety at the company. The other groups include “safety systems,” focusing on current products like GPT-4, and “superalignment,” concentrating on potential future AI systems with exceptional power.

Madry’s team will continuously evaluate OpenAI’s most advanced, unreleased AI models, categorizing them as “low,” “medium,” “high,” or “critical” for various types of perceived risks.

Models rated “medium” or “low” are the only ones OpenAI will release, according to the guidelines.
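
In effect, the release rule is a gate over the model’s per-category risk ratings: if any category is rated above “medium,” the model cannot be released. The sketch below is purely illustrative, not OpenAI’s actual tooling; the category names and the `release_allowed` helper are hypothetical, and only the tier names and the “medium or below” threshold come from the guidelines as described in this article.

```python
from enum import IntEnum


class RiskLevel(IntEnum):
    """Risk tiers described in the guidelines, from least to most severe."""
    LOW = 0
    MEDIUM = 1
    HIGH = 2
    CRITICAL = 3


def release_allowed(scorecard: dict[str, RiskLevel]) -> bool:
    """A model may be released only if every category is rated 'medium' or lower."""
    return max(scorecard.values()) <= RiskLevel.MEDIUM


# Hypothetical scorecard for an unreleased model; category names are illustrative
# (the article mentions cybersecurity and chemical/nuclear/biological threats
# among the four tracked categories).
scorecard = {
    "cybersecurity": RiskLevel.MEDIUM,
    "cbrn": RiskLevel.LOW,
}

print(release_allowed(scorecard))  # True: no category exceeds 'medium'
```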

Madry hopes that other companies will adopt OpenAI’s guidelines to assess potential risks associated with their AI models. The guidelines formalize processes that OpenAI has previously followed when evaluating AI technology, with input and feedback gathered from within the organization over the past few months.

(With inputs from agencies)

