OpenAI may ‘adjust’ its safeguards if rivals release ‘high-risk’ AI

  • staff
  • AI
  • April 15, 2025

In an update to its Preparedness Framework, the internal policy OpenAI uses to decide whether its AI models are safe and what safeguards, if any, are needed during development and release, OpenAI said it may “adjust” its requirements if a rival AI lab releases a “high-risk” system without comparable safeguards. The change reflects the increasing […]
