NIST releases a tool for testing AI model risk

  • AI
  • July 27, 2024

The National Institute of Standards and Technology (NIST), the U.S. Commerce Department agency that develops and tests tech for the U.S. government, companies and the broader public, has re-released a testbed designed to measure how malicious attacks — particularly attacks that “poison” AI model training data — might degrade the performance of an AI system. […]
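To make the idea of training-data "poisoning" concrete, here is a minimal, illustrative sketch (not NIST's testbed, whose methodology is far more extensive) of how such an attack can degrade a model: it flips a fraction of training labels and compares accuracy on clean test data before and after. It assumes scikit-learn and NumPy are available; all names are hypothetical.

```python
# Toy label-flipping "poisoning" experiment -- illustrative only, not NIST's tool.
# Assumes scikit-learn and numpy are installed.
import numpy as np
from sklearn.datasets import load_digits
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
X, y = load_digits(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=0)

def train_and_score(labels):
    """Fit a simple classifier on the training set and score it on the clean test set."""
    model = LogisticRegression(max_iter=2000)
    model.fit(X_train, labels)
    return accuracy_score(y_test, model.predict(X_test))

baseline = train_and_score(y_train)

# "Poison" the training data: flip 30% of labels to random wrong classes.
poison_frac = 0.3
n_poison = int(poison_frac * len(y_train))
idx = rng.choice(len(y_train), size=n_poison, replace=False)
y_poisoned = y_train.copy()
y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, 10, size=n_poison)) % 10

poisoned = train_and_score(y_poisoned)
print(f"clean-data accuracy:    {baseline:.3f}")
print(f"poisoned-data accuracy: {poisoned:.3f}")
```

A tool like the one NIST describes systematizes this kind of comparison, measuring how much an attack of a given type and strength degrades a model's performance.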

© 2024 TechCrunch. All rights reserved. For personal use only.
