New attack vector: poisoning training datasets for machine learning

  • 22 February 2023

  • Anonymous

From the abstract:

Deep learning models are often trained on distributed, web-scale datasets crawled from the internet. In this paper, we introduce two new dataset poisoning attacks that intentionally introduce malicious examples to degrade a model's performance. Our attacks are immediately practical and could, today, poison 10 popular datasets. Our first attack, split-view poisoning, exploits the mutable nature of internet content to ensure a dataset annotator's initial view of the dataset differs from the view downloaded by subsequent clients. By exploiting specific invalid trust assumptions, we show how we could have poisoned 0.01% of the LAION-400M or COYO-700M datasets for just $60 USD. Our second attack, frontrunning poisoning, targets web-scale datasets that periodically snapshot crowd-sourced content -- such as Wikipedia -- where an attacker only needs a time-limited window to inject malicious examples. In light of both attacks, we notify the maintainers of each affected dataset and recommend several low-overhead defenses.

https://arxiv.org/abs/2302.10149
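The split-view attack works because many of these datasets are distributed as lists of URLs plus annotations rather than the content itself, so later downloaders implicitly trust that each URL still serves whatever the annotator originally saw. A low-overhead defense in the spirit of what the authors recommend is to record a cryptographic hash of every item at annotation time and verify it at download time. Here is a minimal sketch of that check in Python; the index format, field names, URL, and hash value are hypothetical, not taken from the paper:

import hashlib
import urllib.request

# Hypothetical index entry: the dataset ships a URL plus the SHA-256 of the
# content as it looked when the annotator first crawled it.
index = [
    {"url": "https://example.com/images/cat_0001.jpg",
     "sha256": "9f86d081884c7d659a2feaa0c55ad015a3bf4f1b2b0b822cd15d6c15b0f00a08"},
]

def fetch_verified(entry, timeout=10):
    """Download one dataset item and keep it only if its hash still matches."""
    with urllib.request.urlopen(entry["url"], timeout=timeout) as resp:
        data = resp.read()
    if hashlib.sha256(data).hexdigest() != entry["sha256"]:
        # The content behind the URL changed after annotation time, which is
        # exactly the window a split-view poisoner exploits, so drop it.
        return None
    return data

clean = [blob for blob in (fetch_verified(e) for e in index) if blob is not None]

Hashing the raw bytes costs almost nothing compared to the download itself, which is what makes this kind of integrity check genuinely low-overhead.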

I found it here: https://news.ycombinator.com/item?id=34889336, which had a fun SMBC comic on the topic:

https://www.smbc-comics.com/comic/artificial-incompetence

 

 


5 replies

Recent studies have shown vulnerabilities in machine learning models that can compromise their integrity and accuracy, which is especially concerning in the context of cyber-security. Label-flipping is a specific type of data poisoning attack that occurs during the training phase of a machine learning model: the attacker degrades the model's accuracy by introducing mislabeled data into the training set, leading to misclassifications and reduced performance.

This paper proposes two new label-flipping attacks that are model-agnostic and can compromise a wide range of machine learning classifiers used for malware detection on mobile exfiltration data. The first attack flips labels at random, while the second targets only one particular class.

The paper analyzes the effects of each attack, focusing on the accuracy drop and misclassification rate of the targeted models. It is crucial to understand the impact of these attacks to develop effective defense techniques.
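As a concrete illustration of how that accuracy drop can be measured, the sketch below simulates both flavors of label-flipping on a synthetic binary classification task and compares them with a clean baseline. The dataset, classifier, and 20% flip rate are illustrative assumptions, not the setup from the paper:

import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
X, y = make_classification(n_samples=4000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.25, random_state=0)

def flip_random(labels, rate):
    """Random label-flipping: flip a fraction `rate` of all training labels."""
    labels = labels.copy()
    idx = rng.choice(len(labels), size=int(rate * len(labels)), replace=False)
    labels[idx] = 1 - labels[idx]  # binary labels, so flipping is just 1 - y
    return labels

def flip_targeted(labels, rate, target=1):
    """Targeted label-flipping: flip labels only for one victim class."""
    labels = labels.copy()
    victims = np.flatnonzero(labels == target)
    idx = rng.choice(victims, size=int(rate * len(victims)), replace=False)
    labels[idx] = 1 - target
    return labels

def test_accuracy(train_labels):
    model = RandomForestClassifier(n_estimators=100, random_state=0)
    model.fit(X_tr, train_labels)
    return accuracy_score(y_te, model.predict(X_te))

print("clean baseline   :", test_accuracy(y_tr))
print("random flip 20%  :", test_accuracy(flip_random(y_tr, 0.20)))
print("targeted flip 20%:", test_accuracy(flip_targeted(y_tr, 0.20)))

Looking at per-class recall rather than overall accuracy makes the targeted variant's damage clearer, since its misclassifications are concentrated in the attacked class.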

Finally, the paper suggests developing defense techniques that give machine learning models robustness against such attackers. This is an important area of research that can help mitigate the impact of data poisoning attacks and preserve the integrity of machine learning models used in various applications.

 

 

And speaking of deep learning, here’s someone who used an AI voice simulation of himself to get into his phone banking system as a test.  A good reminder that standalone biometrics are not a good security solution!

https://www.vice.com/en/article/dy7axa/how-i-broke-into-a-bank-account-with-an-ai-generated-voice


To mitigate the risk of poisoning attacks, it's important to implement proper security controls throughout the entire machine learning lifecycle. This includes data validation and verification, maintaining the integrity of the training data, and using techniques such as outlier detection and data filtering to detect and remove malicious data. It's also important to enforce proper access controls to prevent unauthorized access to the training dataset and other machine learning resources.
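For example, the outlier-detection step can be a simple screen in front of training that drops points an anomaly detector flags before the model ever sees them. The sketch below uses scikit-learn's IsolationForest on a toy feature matrix; the contamination rate and the synthetic data are assumptions made for illustration, not a prescribed configuration:

import numpy as np
from sklearn.ensemble import IsolationForest

def filter_suspicious(X_train, y_train, contamination=0.05, random_state=0):
    """Drop training points that the anomaly detector flags as outliers.

    `contamination` is a guess at the fraction of poisoned data; in practice
    it would be informed by monitoring or a trusted held-out sample.
    """
    detector = IsolationForest(contamination=contamination, random_state=random_state)
    keep = detector.fit_predict(X_train) == 1  # +1 means inlier, -1 means outlier
    return X_train[keep], y_train[keep]

# Toy usage: a clean cluster plus a handful of out-of-distribution points.
rng = np.random.default_rng(0)
X_clean = rng.normal(0.0, 1.0, size=(500, 8))
X_poison = rng.normal(6.0, 1.0, size=(25, 8))
X = np.vstack([X_clean, X_poison])
y = np.concatenate([np.zeros(500, dtype=int), np.ones(25, dtype=int)])

X_kept, y_kept = filter_suspicious(X, y)
print(f"kept {len(X_kept)} of {len(X)} samples")

Filtering like this is only one layer; it catches feature-space outliers, not carefully crafted poison that stays close to the clean distribution, which is why the data-integrity and access-control measures described above still matter.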

Ultimately, the key to preventing poisoning attacks is to maintain a strong focus on data security and to implement robust security measures at every stage of the machine learning process. This includes not only the training data, but also the algorithms, models, and other resources used to develop and deploy machine learning applications.

Another good article on the attack surface of AI and ML:

https://www.csoonline.com/article/3690416/why-red-team-for-ai-should-be-on-cisos-radars.html

Also this chart is scary!

 
