[Paper Reading] DaST: Data-free Substitute Training for Adversarial Attacks (2020)

Abstract

Machine learning models are vulnerable to adversarial examples. In the black-box setting, current substitute attacks need pre-trained models to generate adversarial examples. However, pre-trained models are hard to obtain in real-world tasks. In this paper, we propose a data-free substitute training method (DaST) to obtain substitute models for adversarial black-box attacks without requiring any real data. To achieve this, DaST utilizes specially designed generative adversarial networks (GANs) to train the substitute models. In particular, we design a multi-branch architecture and a label-control loss for the generative model to deal with the uneven distribution of synthetic samples. The substitute model is then trained on the synthetic samples produced by the generative model, which are subsequently labeled by the attacked model. The experiments demonstrate that the substitute models produced by DaST achieve competitive performance compared with baseline models trained on the same training set as the attacked models. Additionally, to evaluate the practicability of the proposed method on a real-world task, we attack an online machine learning model on the Microsoft Azure platform. The remote model misclassifies 98.35% of the adversarial examples crafted by our method. To the best of our knowledge, we are the first to train a substitute model for adversarial attacks without any real data.
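To make the pipeline concrete, below is a minimal PyTorch-style sketch of one DaST training step. This is an illustrative reconstruction, not the authors' code: the names `generator`, `substitute`, `black_box`, the optimizers, and the exact forms of the disagreement and label-control losses are all assumptions for the hard-label setting described in the abstract.

```python
import torch
import torch.nn.functional as F

def dast_step(generator, substitute, black_box, opt_g, opt_s,
              batch_size=64, z_dim=100, n_classes=10, device="cpu"):
    """One DaST-style training step (illustrative sketch, hard-label setting)."""
    # 1) Sample noise plus the class we ask the generator to produce; this label
    #    input is what the multi-branch generator and label-control loss act on.
    z = torch.randn(batch_size, z_dim, device=device)
    target = torch.randint(0, n_classes, (batch_size,), device=device)
    x_syn = generator(z, target)

    # 2) Query the attacked (black-box) model to label the synthetic samples.
    with torch.no_grad():
        y_bb = black_box(x_syn)          # assumed to return hard labels

    # 3) Substitute update: imitate the black-box labels on the synthetic data.
    opt_s.zero_grad()
    loss_s = F.cross_entropy(substitute(x_syn.detach()), y_bb)
    loss_s.backward()
    opt_s.step()

    # 4) Generator update (assumed form): reward samples on which the substitute
    #    still disagrees with the black box, plus a label-control term pushing
    #    samples toward the requested classes to keep them evenly distributed.
    opt_g.zero_grad()
    logits_s = substitute(x_syn)
    loss_g = -F.cross_entropy(logits_s, y_bb) + F.cross_entropy(logits_s, target)
    loss_g.backward()
    opt_g.step()
    return loss_s.item(), loss_g.item()
```

In this sketch the substitute only imitates the black-box labels, while the generator is steered toward samples the substitute still gets wrong, so the synthetic data keeps probing the decision boundary of the attacked model.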

Method

Conclusion

We have presented a data-free method, DaST, to train substitute models for adversarial attacks. DaST reduces the prerequisites of adversarial substitute attacks by utilizing GANs to generate synthetic samples. This is the first method that can train substitute models without requiring any real data. The experiments showed the effectiveness of our method. This indicates that machine learning systems face significant risks: attackers can train substitute models even when the real input data is hard to collect.

The proposed DaST cannot generate adversarial examples on its own; it must be used together with other gradient-based attack methods. In future work, we will design a new substitute training method that can generate attacks directly. Furthermore, we will explore defenses against DaST.
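As a usage example, here is a hedged sketch of pairing the trained substitute with FGSM, one standard gradient-based attack; `substitute` and `black_box` are the placeholder names from the sketch above, and the epsilon value is arbitrary.

```python
import torch
import torch.nn.functional as F

def fgsm_on_substitute(substitute, x, y, eps=0.03):
    """Craft adversarial examples on the substitute with a single FGSM step."""
    x_adv = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(substitute(x_adv), y)
    loss.backward()
    # One signed-gradient step, clipped back to the valid image range [0, 1].
    return (x_adv + eps * x_adv.grad.sign()).clamp(0, 1).detach()

# Transfer attack: the crafted examples are then sent to the remote model.
# x_adv = fgsm_on_substitute(substitute, x, y)
# pred  = black_box(x_adv)
```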

Paper Link

DaST: Data-free Substitute Training for Adversarial Attacks
