
Adversarial Attacks and Defenses in Deep Learning Systems: Threats, Mechanisms, and Countermeasures
After listening to the presentation on adversarial attacks, I feel I understand AI systems much better than before. Even though AI looks very powerful and is used in many areas such as self-driving cars, medical image analysis, and computer vision systems, it has some important weaknesses.

The main issue is that AI does not truly “understand” what it sees the way humans do. Instead, it processes everything as numbers, such as the pixel values of an image, which are treated as vectors. The model learns patterns from this numerical data, not meaning. Because of this, even a very small change in the input, one that humans cannot even notice, can shift the data enough to change the model’s decision completely. This is what we call an adversarial perturbation.

A good example is the panda experiment, where adding a tiny amount of carefully chosen noise to an image causes the model to classify a panda as a gibbon, and even with higher confidence than before. This clearly shows that the model is not really understanding the image; it is only matching numerical patterns.
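The panda example comes from the fast gradient sign method (FGSM), where each input value is nudged by a small step in the direction of the sign of the loss gradient. Here is a minimal sketch of that idea on a toy linear classifier; the weights, input, and epsilon below are made-up illustrative values, not numbers from the presentation:

```python
# FGSM on a toy linear classifier: score = sum(w_i * x_i), class 1 if score > 0.
# All numbers here are made-up illustrative values.
def sign(v):
    return (v > 0) - (v < 0)

w = [0.5, -0.3, 0.8, 0.1]   # fixed, already "trained" model weights
x = [0.2, 0.4, 0.1, 0.3]    # clean input, correctly classified as class 1

score = sum(wi * xi for wi, xi in zip(w, x))   # positive -> class 1

# For a linear model, the gradient of the score w.r.t. x is just w.
# The untargeted FGSM step moves each feature a tiny amount *against*
# sign(w), which lowers the score: x_adv = x - eps * sign(gradient).
eps = 0.25
x_adv = [xi - eps * sign(wi) for wi, xi in zip(w, x)]

adv_score = sum(wi * xi for wi, xi in zip(w, x_adv))   # now negative -> class 0
```

Even though no single feature moved by more than 0.25, the classifier's decision flips, which mirrors how an imperceptible pixel-level change can turn a “panda” into a “gibbon”.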