Making AI models ‘forget’ undesirable data hurts their performance

  • AI
  • July 29, 2024

So-called “unlearning” techniques are used to make a generative AI model forget specific, undesirable information it picked up from training data, such as sensitive private data or copyrighted material. But current unlearning techniques are a double-edged sword: They could make a model like OpenAI’s GPT-4o or Meta’s Llama 3.1 405B much less capable of answering […]

© 2024 TechCrunch. All rights reserved. For personal use only.


