OpenAI pledges to publish AI safety test results more often

OpenAI is moving to publish the results of its internal AI model safety evaluations more regularly, in what the company says is an effort to increase transparency. On Wednesday, OpenAI launched the Safety evaluations hub, a web page showing how the company’s models score on various tests for harmful content generation, jailbreaks, and hallucinations. […]
