[FL]Adversarial Machine Learning (1)

Reading Alleviates Anxiety [Simba's Dyslexia Treatment Plan: #1]

Reading Notes for [NIST AI 100-2e2023](https://csrc.nist.gov/pubs/ai/100/2/e2023/final).

Adversarial Machine Learning

A Taxonomy and Terminology of Attacks and Mitigations

Section 1: Introduction

  • There are two broad classes of AI systems, based on their capabilities: Predictive AI (PredAI) and Generative AI (GenAI).

  • However, despite the significant progress that AI and machine learning (ML) have made in a number of different application domains, these technologies are also vulnerable to attacks that can cause spectacular failures with dire consequences.

  • Unlike cryptography, there are no information-theoretic security proofs for the widely used machine learning algorithms. Moreover, information-theoretic impossibility results have started to appear in the literature [102, 116] that set limits on the effectiveness of widely-used mitigation techniques. As a result, many of the advances in developing mitigations against different classes of attacks tend to be empirical and limited in nature.

Section 2: Predictive AI Taxonomy

  • The attacker's objectives are shown as disjoint circles with the attacker's goal at the center of each circle: Availability breakdown, Integrity violations, and Privacy compromise.
  • Machine learning involves a TRAINING STAGE, in which a model is learned, and a DEPLOYMENT STAGE, in which the model is deployed on new, unlabeled data samples to generate predictions.

  • Adversarial machine learning literature predominantly considers adversarial attacks against AI systems that could occur at either the training stage or the ML deployment stage. During the ML training stage, the attacker might control part of the training data, their labels, the model parameters, or the code of ML algorithms, resulting in different types of poisoning attacks. During the ML deployment stage, the ML model is already trained, and the adversary could mount evasion attacks to create integrity violations and change the ML model's predictions, as well as privacy attacks to infer sensitive information about the training data or the ML model.

  • Training-time attacks. Poisoning Attack. Data poisoning attacks are applicable to all learning paradigms, while model poisoning attacks are most prevalent in federated learning, where clients send local model updates to the aggregating server, and in supply-chain attacks where malicious code may be added to the model by suppliers of model technology.
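To make the data-poisoning idea above concrete, here is a minimal, self-contained sketch (my own, not from the NIST report) of a label-flipping attack, assuming the attacker controls a fraction of the training labels; the function name `flip_labels` and all parameter choices are hypothetical.

```python
import numpy as np

def flip_labels(y_train, flip_fraction=0.1, num_classes=2, rng=None):
    """Simulate a simple data-poisoning (label-flipping) attack: the
    attacker controls a fraction of the training labels and reassigns
    each chosen label to a different class."""
    rng = np.random.default_rng(rng)
    y_poisoned = y_train.copy()
    n_flip = int(flip_fraction * len(y_train))
    idx = rng.choice(len(y_train), size=n_flip, replace=False)
    # Shift each selected label by 1..num_classes-1 so it always changes.
    y_poisoned[idx] = (y_poisoned[idx] + rng.integers(1, num_classes, size=n_flip)) % num_classes
    return y_poisoned

# Example: poison 20% of a toy set of binary labels before training on them.
y = np.array([0, 1, 0, 1, 1, 0, 0, 1, 1, 0])
print(flip_labels(y, flip_fraction=0.2, num_classes=2, rng=0))
```

Training on the poisoned labels instead of the clean ones is what degrades the learned model, which is the effect data-poisoning attacks aim for.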

  • Deployment-time attack. Adversarial Example.

  • Attacker Goals and Objectives. Availability breakdown, Integrity violations, and Privacy compromise.

  • Privacy Compromise. Attackers might be interested in learning information about the training data (resulting in DATA PRIVACY attacks) or about the ML model (resulting in MODEL PRIVACY attacks). The attacker could have different objectives for compromising the privacy of training data, such as DATA RECONSTRUCTION (inferring content or features of training data), MEMBERSHIP-INFERENCE ATTACKS (inferring the presence of data in the training set), data EXTRACTION (ability to extract training data from generative models), and PROPERTY INFERENCE (inferring properties about the training data distribution). MODEL EXTRACTION is a model privacy attack in which attackers aim to extract information about the model.
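As an illustration of membership inference specifically, the following is a minimal sketch (not taken from the report) of the common loss-thresholding heuristic: records on which the model has unusually low loss are guessed to be training members. Here `model_loss`, `toy_loss`, and the threshold value are hypothetical stand-ins for a real trained model and a calibrated threshold.

```python
import numpy as np

def membership_inference_by_loss(model_loss, samples, labels, threshold):
    """Loss-threshold membership-inference heuristic: models tend to fit
    their training data more closely, so low per-example loss is treated
    as evidence that the example was in the training set."""
    losses = np.array([model_loss(x, y) for x, y in zip(samples, labels)])
    return losses < threshold  # True -> guessed "member of the training set"

# Toy usage with a hypothetical per-example loss function.
def toy_loss(x, y):
    # Stand-in for -log p(y | x) from a trained classifier.
    return float(abs(x - y))

guesses = membership_inference_by_loss(toy_loss, [0.1, 0.9, 0.4], [0, 1, 1], threshold=0.3)
print(guesses)  # [ True  True False]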

  • Attacker Capabilities. Training Data Control. Model Control. Testing Data Control. Label Limit. Source Code Control. Query Access.

  • Attacker Knowledge. White-box attacks. These assume that the attacker operates with full knowledge about the ML system, including the training data, model architecture, and model hyper-parameters. Black-box attacks. These attacks assume minimal knowledge about the ML system. An adversary might get query access to the model, but they have no other information about how the model is trained. Gray-box attacks. There are a range of gray-box attacks that capture adversarial knowledge between black-box and white-box attacks. Suciu et al. introduced a framework to classify gray-box attacks. An attacker might know the model architecture but not its parameters, or the attacker might know the model and its parameters but not the training data.

  • Data Modality: Image. Text. Audio. Video. Cybersecurity. Tabular Data.

  • Recently, the use of ML models trained on multimodal data has gained traction, particularly the combination of image and text data modalities. Several papers have shown that multimodal models may provide some resilience against attacks, but other papers show that multimodal models themselves could be vulnerable to attacks mounted on all modalities at the same time.

  • An interesting open challenge is to test and characterize the resilience of a variety of multimodal ML models against evasion, poisoning, and privacy attacks.

  • Evasion Attacks and Mitigations. Methods for creating adversarial examples in black-box settings include zeroth-order optimization, discrete optimization, and Bayesian optimization, as well as transferability, which involves the white-box generation of adversarial examples on a different model architecture before transferring them to the target model.
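The zeroth-order optimization idea can be sketched as follows, assuming the attacker has only query access to a loss oracle and estimates gradients with finite differences over random directions. This is a simplified illustration of my own, not the report's algorithm; `f`, `step`, `sigma`, `iters`, and `eps` are all hypothetical choices.

```python
import numpy as np

def zeroth_order_attack(f, x, y_true, step=0.01, sigma=1e-3, iters=50, eps=0.1, rng=None):
    """Black-box evasion sketch via zeroth-order optimization: estimate the
    gradient of the model's loss from queries only (no internal access),
    then take small ascent steps while staying inside an L-infinity ball
    of radius eps around the original input."""
    rng = np.random.default_rng(rng)
    x_adv = x.astype(float).copy()
    for _ in range(iters):
        u = rng.standard_normal(x.shape)                   # random probe direction
        g = (f(x_adv + sigma * u, y_true) - f(x_adv - sigma * u, y_true)) / (2 * sigma)
        x_adv += step * np.sign(g * u)                     # ascend the estimated gradient
        x_adv = np.clip(x_adv, x - eps, x + eps)           # respect the perturbation budget
    return x_adv

# Toy usage with a hypothetical loss oracle standing in for the target model.
f = lambda x, y: float(np.sum((x - y) ** 2))
x0 = np.zeros(4)
print(zeroth_order_attack(f, x0, y_true=np.ones(4), rng=0))
```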

  • The most promising directions for mitigating the critical threat of evasion attacks are adversarial training (iteratively generating and inserting adversarial examples with their correct labels at training time); certified techniques, such as randomized smoothing (evaluating ML prediction under noise); and formal verification techniques [112, 154] (applying formal method techniques to verify the model's output). Nevertheless, these methods come with different limitations, such as decreased accuracy for adversarial training and randomized smoothing, and computational complexity for formal methods. There is an inherent trade-off between robustness and accuracy [297, 302, 343]. Similarly, there are trade-offs between a model's robustness and fairness guarantees.
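As a concrete picture of "evaluating ML prediction under noise", here is a minimal randomized-smoothing prediction sketch of my own: it only performs the majority vote over Gaussian-noised copies of the input and omits the certified-radius computation from the literature. The names `smoothed_predict`, `classify`, `sigma`, and `n_samples` are hypothetical.

```python
import numpy as np

def smoothed_predict(classify, x, sigma=0.25, n_samples=1000, rng=None):
    """Randomized smoothing at prediction time: query the base classifier
    on many Gaussian-noised copies of the input and return the
    majority-vote class (the quantity that certification arguments for
    smoothing are built around)."""
    rng = np.random.default_rng(rng)
    noisy = x + sigma * rng.standard_normal((n_samples,) + x.shape)
    votes = np.array([classify(xi) for xi in noisy])
    classes, counts = np.unique(votes, return_counts=True)
    return classes[np.argmax(counts)]

# Toy usage with a hypothetical base classifier (threshold on the mean feature).
base_classifier = lambda x: int(x.mean() > 0.5)
print(smoothed_predict(base_classifier, np.array([0.6, 0.7, 0.4]), rng=0))
```

The accuracy cost mentioned above shows up here directly: the base classifier must already be accurate under the added noise for the smoothed vote to be useful.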
