A New Approach in Mitigating Adversarial Attacks on Machine Learning


Bibliographic Details
Published in:IEEE Symposium on Wireless Technology and Applications, ISWTA
Main Authors: Ahmad A.A.I.; Jalil K.A.
Format: Conference paper
Language:English
Published: IEEE Computer Society 2024
Online Access:https://www.scopus.com/inward/record.uri?eid=2-s2.0-85203826522&doi=10.1109%2fISWTA62130.2024.10652080&partnerID=40&md5=1b1cfc089228f09a09ddab27c2cf44a6
DOI: 10.1109/ISWTA62130.2024.10652080
Abstract: Machine learning is a powerful tool with the potential to transform many industries, and that very reach makes it a target for security attacks. Attacks on machine learning algorithms are known as adversarial attacks. Adversarial attacks are designed to deceive or mislead machine learning models by introducing malicious input data, modifying existing data, or exploiting weaknesses in the algorithms used to train the models. These attacks can be targeted, deliberate, and sophisticated, leading to serious consequences such as incorrect decision-making, data breaches, and loss of intellectual property. Poisoning attacks, evasion attacks, model stealing, and model inversion attacks are some examples of adversarial attacks. Most current research focuses on a defense approach to mitigating these attacks, which aims to build a defense system that can detect and respond to attacks in real time, prevent unauthorized access to systems and data, and limit the impact of security breaches. Unfortunately, this approach has disadvantages, one of which is limited effectiveness: despite the use of multiple defense measures, determined attackers can still find ways to breach systems and access sensitive data. This is because the defense approach never addresses the root of the problem, so such attacks can recur. This paper proposes a new, forensic approach. The proposed approach investigates attacks against machine learning, identifies the root cause of an attack, determines the extent of the damage, and gathers information that can be used to prevent similar incidents in the future. © 2024 IEEE.
ISSN: 2324-7843