[Datawhale LLM Fundamentals] Chapter 3: The Harms of Large Language Models

As illustrated previously, LLMs exhibit unique abilities that emerge only when the model has a huge number of parameters. However, LLMs also bring harms.

When considering any technology, we must carefully weigh its benefits and harms. This is a complex task for three reasons:

  1. Benefits and harms are difficult to quantify;
  2. Even if they could be quantified, the distribution of these benefits and harms among the population is not uniform (marginalized groups often bear more harm), making the balancing act between them a thorny ethical issue;
  3. Even if meaningful trade-offs can be made, what authority do decision-makers have to impose them?

Preventing the harms of LLMs is still a very new research direction. The current content focuses mainly on the following two points:

  1. Harm related to performance differences: For specific tasks (such as question answering), performance differences mean that the model performs better in some groups and worse in others.
  2. Harm related to social biases and stereotypes: Social bias is the systematic association of a concept (such as science) with certain groups (such as men) over others (such as women). Stereotypes are a specific, prevalent form of social bias in which the associations are widely held, oversimplified, and generally fixed.
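To make the first kind of harm concrete, below is a minimal sketch of measuring a performance gap across groups on a task such as question answering. The group labels, predictions, and data are all hypothetical, and real audits would use a proper benchmark rather than a handful of examples:

```python
# Sketch: measuring per-group performance disparity (hypothetical data).
from collections import defaultdict

def per_group_accuracy(examples):
    """examples: list of (group, prediction, label) tuples."""
    correct = defaultdict(int)
    total = defaultdict(int)
    for group, pred, label in examples:
        total[group] += 1
        correct[group] += int(pred == label)
    return {g: correct[g] / total[g] for g in total}

# Hypothetical QA results tagged with a (made-up) demographic group.
examples = [
    ("group_a", "yes", "yes"), ("group_a", "no", "no"),
    ("group_a", "yes", "yes"), ("group_a", "no", "yes"),
    ("group_b", "yes", "no"), ("group_b", "no", "no"),
]
acc = per_group_accuracy(examples)
gap = max(acc.values()) - min(acc.values())
print(acc)  # {'group_a': 0.75, 'group_b': 0.5}
print(gap)  # 0.25
```

The single "gap" number is only a starting point; a disparity can stem from data coverage, annotation quality, or the model itself, so it flags a problem rather than diagnosing its cause.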

Due to the opacity of pre-training datasets for LLMs and their inclusion of web-crawled data, it is likely that they contain online discussions encompassing political topics (e.g., climate change, abortion, gun control), hate speech, discrimination, and other forms of media bias. Some researchers have identified misogyny, pornography, and other harmful stereotypes within these pre-training datasets. Similarly, researchers have observed that LLMs exhibit political biases that exacerbate the existing polarization in the pre-training corpora, thereby perpetuating societal biases in the prediction of hate speech and the detection of misinformation.

Recent studies have delved into the potential sources of biases in LLMs (such as training data or model specifications), the ethical concerns associated with deploying biased LLMs in diverse applications, and current methods for mitigating these biases. An interesting finding is that all models exhibit systematic preferences for stereotypical data, showing an urgent need to establish high-quality pre-training datasets.
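One hedged sketch of how such a "systematic preference for stereotypical data" could be measured follows the style of paired-sentence bias benchmarks: the model scores a stereotypical and an anti-stereotypical variant of the same sentence, and we count how often the stereotypical one receives the higher score. The log-probabilities below are made up; a real evaluation would obtain them from an actual model and a curated pair set:

```python
# Sketch: paired-sentence stereotype preference rate (mock model scores).
def stereotype_preference_rate(pairs):
    """pairs: list of (stereo_logprob, antistereo_logprob) tuples.
    Returns the fraction of pairs where the model scores the
    stereotypical sentence higher; 0.5 would indicate no preference."""
    prefer = sum(1 for s, a in pairs if s > a)
    return prefer / len(pairs)

# Hypothetical log-probabilities from some scoring model.
mock_pairs = [(-12.1, -13.4), (-9.8, -9.9), (-15.0, -14.2), (-11.3, -12.0)]
print(stereotype_preference_rate(mock_pairs))  # 0.75
```

A rate well above 0.5 on a large, well-constructed pair set would be evidence of the systematic preference described above.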

Toxicity and disinformation are two key harms of widespread research concern. In the context of toxicity and disinformation, LLMs can serve two purposes:

  1. They can be used to generate toxic content, which malicious actors can exploit to amplify their information dissemination;
  2. They can be used to detect disinformation, thereby aiding in content moderation.

The challenge of identifying toxicity lies in the ambiguity of labeling: an output may be toxic in one context but not in another, and different individuals may have varying perceptions of toxicity. Jigsaw, a division of Google, focuses on using technology to address societal issues such as extremism. In 2017, it released a widely used proprietary service called Perspective, a machine learning model that assigns a toxicity score between 0 and 1 to each input. The model was trained on Wikipedia discussion pages (where volunteer moderators discuss editing decisions) labeled by crowdworkers. The website is: https://perspectiveapi.com/.
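To show how a Perspective-style score (a value between 0 and 1) might feed a moderation decision, here is a runnable sketch. The real Perspective service is an HTTP API that requires a key; the keyword-based scorer below is a crude hypothetical stand-in, not the real model, so that the thresholding logic can run on its own:

```python
# Sketch: gating content on a Perspective-style toxicity score in [0, 1].
def toxicity_score(text):
    # Placeholder heuristic, NOT the real Perspective model: counts
    # hits against a tiny made-up word list and clips to [0, 1].
    bad_words = {"idiot", "stupid"}
    words = text.lower().split()
    return min(1.0, sum(w.strip(".,!?") in bad_words for w in words) / 2)

def moderate(text, threshold=0.5):
    """Flag text whose score meets the threshold. The threshold is a
    policy choice, since perceptions of toxicity vary by context."""
    return "flag" if toxicity_score(text) >= threshold else "allow"

print(moderate("You are an idiot."))     # flag
print(moderate("Thanks for the edit."))  # allow
```

This also illustrates why the labeling ambiguity matters: wherever the threshold is set, borderline content near it will be judged differently by different readers.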

Disinformation is the deliberate presentation of false or misleading information to deceive a specific audience, often with adversarial intent. A related term is misinformation (which can be considered analogous to "hallucinations"), referring to information that is misleadingly presented as true. It is important to note that misleading and false information is not always verifiable; at times, it may merely raise doubts or shift the burden of proof onto the audience.

A recent research hotspot is hallucinations. To differentiate between various types of hallucinations, we can analyze the source content given to the model, such as the prompt, which may contain examples or retrieved context. There are two types of hallucinations: intrinsic and extrinsic. In the former, the generated text logically contradicts the source content. In the latter, users are unable to verify the accuracy of the output based on the provided source; the source content lacks sufficient information to evaluate the output, making it undetermined. Extrinsic hallucination is not necessarily erroneous, as it simply means the model produced an output that cannot be supported or refuted by the source content. However, this is still somewhat undesirable, as the provided information cannot be verified.
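The intrinsic/extrinsic distinction can be sketched as a three-way triage of a generated claim against its source. Real systems use natural-language-inference models for entailment and contradiction; the simple substring checks and the contradiction-pair list below are toy stand-ins so the three-way logic is runnable:

```python
# Sketch: toy triage of a generated claim against its source content.
def classify_claim(source, claim, contradiction_pairs=()):
    """Return 'supported', 'intrinsic' (contradicts the source), or
    'extrinsic' (the source cannot confirm or refute the claim)."""
    if claim in source:
        return "supported"
    for a, b in contradiction_pairs:
        if a in source and b in claim:
            return "intrinsic"  # logically contradicts the source
    return "extrinsic"          # unverifiable from the source alone

source = "The meeting was moved to Tuesday."
pairs = [("Tuesday", "Friday")]
print(classify_claim(source, "The meeting was moved to Tuesday.", pairs))  # supported
print(classify_claim(source, "The meeting is on Friday.", pairs))          # intrinsic
print(classify_claim(source, "The meeting will be in Room 4.", pairs))     # extrinsic
```

Note how the extrinsic case falls through both checks: the claim about the room may well be true, but nothing in the source lets us decide, which is exactly the "undetermined" situation described above.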

To better illustrate the difference between them, I cite a figure from a survey:

P.S. Recently I found some interesting papers that discuss the abilities of LLMs; maybe I will make notes in Chinese after finishing the Datawhale study.

END
