Papers Related to Large Language Model Security

Advantages of LLMs for Security

"Generating secure hardware using chatgpt resistant to cwes," Cryptology ePrint Archive, Paper 2023/212, 2023评估了ChatGPT平台上代码生成过程的安全性,特别是在硬件领域。探索了设计者可以采用的策略,使ChatGPT能够提供安全的硬件代码生成

"Fixing hardware security bugs with large language models," arXiv preprint arXiv:2302.01215, 2023. 将关注点转移到硬件安全上。研究了LLMs,特别是OpenAI的Codex,在自动识别和修复硬件设计中与安全相关的bug方面的使用。

"Novel approach to cryptography implementation using chatgpt," 使用ChatGPT实现密码学,最终保护数据机密性。尽管缺乏广泛的编码技巧或编程知识,但作者能够通过ChatGPT成功地实现密码算法。这凸显了个体利用ChatGPT进行密码学任务的潜力。

"Agentsca: Advanced physical side channel analysis agent with llms." 2023.探索了应用LLM技术来开发侧信道分析方法。该研究包括3种不同的方法:提示工程、微调LLM和基于人类反馈强化学习的微调LLM

Privacy Protection for LLMs

Enhancing LLMs with state-of-the-art privacy-enhancing technologies, such as zero-knowledge proofs, differential privacy [233, 175, 159], and federated learning [140, 117, 77] (a minimal federated-averaging sketch follows the list below):

  • "Privacy and data protection in chatgpt and other ai chatbots: Strategies for securing user information,"
  • "Differentially private decoding in large language models,"
  • "Privacy-preserving prompt tuning for large language model services,"
  • "Federatedscope-llm: A comprehensive package for fine-tuning large language models in federated learning,"
  • "Chatgpt passing usmle shines a spotlight on the flaws of medical education,"
  • "Fate-llm: A industrial grade federated learning framework for large language models,"

Attacks on LLMs

Side-Channel Attacks

"Privacy side channels in machine learning systems,"引入了隐私侧信道攻击,这是一种利用系统级组件(例如,数据过滤、输出监控等)以远高于单机模型所能实现的速度提取隐私信息的攻击。提出了覆盖整个ML生命周期的4类侧信道,实现了增强型成员推断攻击和新型威胁(例如,提取用户的测试查询)

Data Poisoning Attacks

  • "Universal jailbreak backdoors from poisoned human feedback,"
  • "On the exploitability of instruction tuning,"
  • "Promptspecific poisoning attacks on text-to-image generative models,"
  • "Poisoning language models during instruction tuning,"

Backdoor Attacks

  • "Chatgpt as an attack tool: Stealthy textual backdoor attack via blackbox generative model trigger,"
  • "Large language models are better adversaries: Exploring generative clean-label backdoor attacks against text classifiers,"
  • "Poisonprompt: Backdoor attack on prompt-based large language models,"

Attribute Inference Attacks

  • "Beyond memorization: Violating privacy via inference with large language models,"首次全面考察了预训练的LLMs从文本中推断个人信息的能力

Training Data Extraction

  • "Ethicist: Targeted training data extraction through loss smoothed soft prompting and calibrated confidence estimation,"
  • "Canary extraction in natural language understanding models,"
  • "What do code models memorize? an empirical study on large language models of code,"
  • "Are large pre-trained language models leaking your personal information?"
  • "Text revealer: Private text reconstruction via model inversion attacks against transformers,"

Model Extraction

  • "Data-free model extraction,"

Defenses for LLMs

Model Architecture Defenses

  • "Large language models can be strong differentially private learners,"具有较大参数规模的语言模型可以更有效地以差分隐私的方式进行训练。
  • "Promptbench: Towards evaluating the robustness of large language models on adversarial prompts,"
  • "Evaluating the instructionfollowing robustness of large language models to prompt injection,"更广泛的参数规模的LLMs,通常表现出对对抗攻击更高的鲁棒性。
  • "Revisiting out-of-distribution robustness in nlp: Benchmark, analysis, and llms evaluations,"在Out - of - distribution ( OOD )鲁棒性场景中也验证了这一点
  • "Synergistic integration of large language models and cognitive architectures for robust ai: An exploratory analysis,"通过将多种认知架构融入LLM来提高人工智能的鲁棒性。
  • "Building trust in conversational ai: A comprehensive review and solution architecture for explainable, privacy-aware systems using llms and knowledge graph,"与外部模块(知识图谱)相结合来提高LLM的安全性

Defenses in LLM Training: Adversarial Training

  • "Adversarial training for large neural language models,"
  • "Improving neural language modeling via adversarial training,"
  • "Freelb: Enhanced adversarial training for natural language understanding,"
  • "Towards improving adversarial training of nlp models,"
  • "Token-aware virtual adversarial training in natural language understanding,"
  • "Towards deep learning models resistant to adversarial attacks,"
  • "Achieving model robustness through discrete adversarial training,"
  • "Towards improving adversarial training of nlp models,"
  • "Improving neural language modeling via adversarial training,"
  • "Adversarial training for large neural language models,"
  • "Freelb: Enhanced adversarial training for natural language understanding,"
  • "Token-aware virtual adversarial training in natural language understanding,"

Defenses in LLM Training: Robust Fine-Tuning

  • "How should pretrained language models be fine-tuned towards adversarial robustness?"
  • "Smart: Robust and efficient fine-tuning for pre-trained natural language models through principled regularized optimization,"
  • "Safety-tuned llamas: Lessons from improving the safety of large language models that follow instructions,"

Defenses in LLM Inference: Instruction Preprocessing

  • "Baseline defenses for adversarial attacks against aligned language models,"评估了多种针对越狱攻击的基线预处理方法,包括重令牌化和复述。
  • "On the reliability of watermarks for large language models,"评估了多种针对越狱攻击的基线预处理方法,包括重令牌化和复述
  • "Text adversarial purification as defense against adversarial attacks,"通过先对输入令牌进行掩码,然后与其他LLMs一起预测被掩码的令牌来净化指令。
  • "Jailbreak and guard aligned language models with only few in-context demonstrations,"证明了在指令中插入预定义的防御性证明可以有效地防御LLMs的越狱攻击。
  • "Testtime backdoor mitigation for black-box large language models with defensive demonstrations,"证明了在指令中插入预定义的防御性证明可以有效地防御LLMs的越狱攻击。

Defenses in LLM Inference: Malicious Detection

These defenses provide deep inspection of the LLM's intermediate results, such as neuron activations.

  • "Defending against backdoor attacks in natural language generation,"提出用后向概率检测后门指令。
  • "A survey on evaluation of large language models,"从掩蔽敏感性的角度区分了正常指令和中毒指令。
  • "Bddr: An effective defense against textual backdoor attacks,"根据可疑词的文本相关性来识别可疑词。
  • "Rmlm: A flexible defense framework for proactively mitigating word-level adversarial attacks,"根据多代之间的语义一致性来检测对抗样本
  • "Shifting attention to relevance: Towards the uncertainty estimation of large language models,"在LLMs的不确定性量化中对此进行了探索
  • "Onion: A simple and effective defense against textual backdoor attacks,"利用了语言统计特性,例如检测孤立词。

Defenses in LLM Inference: Post-Generation Processing

  • "Jailbreaker in jail: Moving target defense for large language models,"通过与多个模型候选物比较来减轻生成的毒性。
  • "Llm self defense: By self examination, llms know they are being tricked,"