Making AI models ‘forget’ undesirable data hurts their performance

  • AI
  • July 29, 2024

So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from its training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: they could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]

© 2024 TechCrunch. All rights reserved. For personal use only.
