
How-To · Machine Learning
Model Poisoning Turns Helpful AI Into a Trojan Horse
By Felix Koole, via Hackernoon
Model poisoning is the malicious manipulation of a machine learning model's training data or parameters to embed hidden "backdoor" behaviors. The attack unfolds in four steps: poisoning the model's weights, activating the embedded trigger, exfiltrating data, and hiding the stolen data.
Continue reading on Hackernoon