OpenAI’s latest AI models have a new safeguard to prevent biorisks

  • staff
  • AI
  • April 16, 2025

OpenAI says it has deployed a new system to monitor its latest AI reasoning models, o3 and o4-mini, for prompts related to biological and chemical threats. According to OpenAI’s safety report, the system aims to prevent the models from offering advice that could help someone carry out potentially harmful attacks. O3 and o4-mini represent […]
