Summary: | Machine learning is a powerful tool with the potential to transform many industries, which also makes it an attractive target for security attacks. Attacks on machine learning algorithms are known as adversarial attacks. Adversarial attacks are designed to deceive or mislead machine learning models by introducing malicious input data, modifying existing data, or exploiting weaknesses in the algorithms used to train the models. These attacks can be targeted, deliberate, and sophisticated, leading to serious consequences such as incorrect decision-making, data breaches, and loss of intellectual property. Poisoning attacks, evasion attacks, model stealing, and model inversion attacks are some examples of adversarial attacks. Most current research focuses on a defense approach to mitigating these attacks, which aims to build a strong defense system that can detect and respond to attacks in real time, prevent unauthorized access to systems and data, and limit the impact of security breaches. Unfortunately, this approach has some disadvantages, one of which is limited effectiveness: despite the use of multiple defense measures, determined attackers can still find ways to breach systems and access sensitive data. This is because the defense approach never addresses the root of the problem, so the same attacks can recur. In this paper, a new approach is proposed: a forensic approach. The proposed approach investigates attacks against machine learning, identifies the root cause of an attack, determines the extent of the damage, and gathers information that can be used to prevent similar incidents in the future. © 2024 IEEE.
|
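As an illustration of one of the attack classes named in the summary (not taken from the paper itself), an evasion attack perturbs an input at inference time so that a trained model misclassifies it. The fast gradient sign method (FGSM) of Goodfellow et al. is a standard instance; the minimal Python sketch below applies it to a toy logistic-regression classifier. The weights, bias, and input values are hypothetical and chosen only for demonstration.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Hypothetical "trained" binary classifier: p(class 1 | x) = sigmoid(w.x + b).
w = np.array([1.5, -2.0, 0.5])
b = 0.1

def predict_proba(x):
    return sigmoid(np.dot(w, x) + b)

def fgsm_perturb(x, y_true, epsilon):
    # Gradient of the binary cross-entropy loss w.r.t. the input x:
    # dL/dz = (p - y) through the sigmoid, and dz/dx = w, so dL/dx = (p - y) * w.
    p = predict_proba(x)
    grad_x = (p - y_true) * w
    # FGSM step: move each feature by epsilon in the direction that
    # increases the loss, i.e. along the sign of the gradient.
    return x + epsilon * np.sign(grad_x)

x = np.array([0.5, 0.1, 0.2])  # clean input, correctly classified as class 1
y = 1.0
x_adv = fgsm_perturb(x, y, epsilon=0.5)

print(f"clean input       p(class 1) = {predict_proba(x):.3f}")      # ~0.68
print(f"adversarial input p(class 1) = {predict_proba(x_adv):.3f}")  # ~0.22, label flips

Under the forensic approach the paper proposes, an artifact such as x_adv would be treated as evidence to analyze after an incident, to trace how the attack succeeded, rather than something a deployed defense is assumed to catch in real time.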