Many safety evaluations for AI models have significant limitations

  • AI
  • August 4, 2024

Despite increasing demand for AI safety and accountability, today’s tests and benchmarks may fall short, according to a new report. Generative AI models — models that can analyze and output text, images, music, videos and so on — are coming under increased scrutiny for their tendency to make mistakes and generally behave unpredictably. Now, organizations […]

© 2024 TechCrunch. All rights reserved. For personal use only.

