Making AI models ‘forget’ undesirable data hurts their performance

  • AI
  • July 29, 2024

So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]
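
The excerpt breaks off mid-sentence, but the trade-off it describes shows up even in a toy version of one simple unlearning recipe: gradient ascent on the examples to be forgotten, balanced against ordinary training on data the model should retain. The sketch below is hypothetical and illustrative only; the tiny model, the forget/retain batches, and the hyperparameters are stand-ins, not the technique any lab applies to GPT-4o or Llama 3.1 405B.

```python
# Minimal sketch of gradient-ascent unlearning, under illustrative assumptions.
# The model, data, and hyperparameters here are all hypothetical stand-ins.
import torch
import torch.nn as nn

torch.manual_seed(0)

# Stand-in for a large model: a tiny classifier over token IDs.
model = nn.Sequential(nn.Embedding(100, 16), nn.Flatten(), nn.Linear(16 * 8, 100))
loss_fn = nn.CrossEntropyLoss()
opt = torch.optim.SGD(model.parameters(), lr=1e-2)

forget_x = torch.randint(0, 100, (4, 8))   # sequences the model should "forget"
forget_y = torch.randint(0, 100, (4,))
retain_x = torch.randint(0, 100, (4, 8))   # sequences it should keep handling well
retain_y = torch.randint(0, 100, (4,))

for step in range(10):
    opt.zero_grad()
    # Ascend the loss on the forget set (the negation turns descent into ascent)...
    forget_loss = -loss_fn(model(forget_x), forget_y)
    # ...while descending it on the retain set to limit collateral damage.
    retain_loss = loss_fn(model(retain_x), retain_y)
    (forget_loss + retain_loss).backward()
    opt.step()
```

Because both batches flow through the same shared weights, every ascent step that erases the forget set also perturbs behavior on everything else, which is the capability loss the article describes.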

© 2024 TechCrunch. All rights reserved. For personal use only.
