An Anti-Poisoning Attack Method for Distributed AI System

Xin, Xuezhu and Bai, Yang and Wang, Haixin and Mou, Yunzhen and Tan, Jian (2021) An Anti-Poisoning Attack Method for Distributed AI System. Journal of Computer and Communications, 09 (12). pp. 99-105. ISSN 2327-5219

jcc_2021123115320790.pdf - Accepted Version

Abstract

In a distributed AI system, models trained on data from potentially unreliable sources can be attacked by manipulating the training data distribution, inserting carefully crafted samples into the training set, a technique known as data poisoning. Poisoning changes model behavior and degrades model performance. This paper proposes an algorithm that improves both the efficiency and the security of defending against data poisoning in a distributed AI system. Past active-defense methods often perform a large number of invalid checks, which slows the operation of the whole system, while passive defense suffers from missing data and slow localization of the error source. The proposed algorithm establishes a suspicion-level hypothesis to test and extend the verification of data packets, and estimates the risk of each terminal's data. It enhances the health of a distributed AI system by preventing poisoning attacks while ensuring efficient and safe system operation.
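The abstract's idea of risk-estimated, selective verification can be sketched as follows. This is a minimal illustrative sketch, not the paper's exact algorithm: the class name, score-update constants, and thresholds are all assumptions chosen to show the general pattern of checking high-risk terminals always and low-risk terminals only occasionally.

```python
import random

class RiskTracker:
    """Hypothetical sketch: per-terminal suspicion scores drive how often
    incoming data packets are verified, so checks concentrate on risky
    sources instead of being spent uniformly (the "invalid checks" problem)."""

    def __init__(self, base_rate=0.1, threshold=0.5):
        self.risk = {}              # terminal id -> suspicion score in [0, 1]
        self.base_rate = base_rate  # spot-check fraction for low-risk terminals
        self.threshold = threshold  # scores at or above this force verification

    def should_verify(self, terminal):
        score = self.risk.get(terminal, 0.0)
        if score >= self.threshold:
            return True                          # high-risk: always check
        return random.random() < self.base_rate  # low-risk: occasional spot-check

    def record(self, terminal, poisoned):
        """Update a terminal's suspicion after a packet has been verified."""
        score = self.risk.get(terminal, 0.0)
        if poisoned:
            score = min(1.0, score + 0.3)   # a bad packet raises suspicion sharply
        else:
            score = max(0.0, score - 0.05)  # clean packets decay suspicion slowly
        self.risk[terminal] = score
```

For example, after two poisoned packets a terminal's score reaches 0.6, crossing the 0.5 threshold, so every subsequent packet from it is verified until enough clean packets decay the score back below the threshold.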

Item Type: Article
Subjects: Academic Digital Library > Computer Science
Depositing User: Unnamed user with email info@academicdigitallibrary.org
Date Deposited: 09 May 2023 05:26
Last Modified: 01 Feb 2024 04:16
URI: http://publications.article4sub.com/id/eprint/1465
