
How Adversarial Examples Build Resilient Machine Learning Models

What's this blog post about?

Adversarial examples are slight alterations to input data that cause AI models to produce incorrect outputs; the changes are often imperceptible to humans but significant to ML models. These adversarial attacks can be problematic for safety-critical applications like self-driving vehicles and cancer screening. Researchers are investigating ways to make AI less vulnerable to these attacks by detecting, understanding, and defending against them. Some of these approaches include data poisoning and creating physical objects that evade object-detection algorithms or facial recognition systems.
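The core idea above, nudging an input slightly in the direction that increases a model's loss, can be sketched with the Fast Gradient Sign Method (FGSM), a common way to craft adversarial examples. This is a minimal illustration on a toy logistic-regression model with made-up weights and data; it is not code from the article.

```python
import numpy as np


def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))


def fgsm_perturb(x, w, b, y_true, epsilon=0.25):
    """Perturb input x by epsilon (per feature) in the direction that
    increases the binary cross-entropy loss of a logistic-regression
    model with weights w and bias b -- the FGSM attack."""
    p = sigmoid(w @ x + b)
    # Analytic gradient of the BCE loss with respect to the input x
    grad_x = (p - y_true) * w
    # Step in the sign of the gradient: small per-feature change,
    # but aimed exactly where the model is most sensitive
    return x + epsilon * np.sign(grad_x)


# Toy model and input (assumed for illustration only)
rng = np.random.default_rng(0)
w = rng.normal(size=4)
b = 0.0
x = rng.normal(size=4)
y = 1.0  # true label

x_adv = fgsm_perturb(x, w, b, y)
print("clean prediction:      ", sigmoid(w @ x + b))
print("adversarial prediction:", sigmoid(w @ x_adv + b))
```

Even though each feature moves by at most `epsilon`, the model's confidence in the true class drops, which is the "unnoticeable to humans, significant to the model" effect the post describes.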

Company
Deepgram

Date published
March 30, 2023

Author(s)
Brad Nikkel

Word count
1716

Language
English

Hacker News points
None found.


By Matt Makai. 2021-2024.