【Datawhale LLM Fundamentals】Chapter 3: The Harms of Large Language Models

As illustrated in the previous chapters, LLMs have unique abilities that emerge only when the model has a huge number of parameters. However, LLMs also bring harms.

When considering any technology, we must carefully weigh its benefits and harms. This is a complex task for three reasons:

  1. Benefits and harms are difficult to quantify;
  2. Even if they could be quantified, the distribution of these benefits and harms among the population is not uniform (marginalized groups often bear more harm), making the balancing act between them a thorny ethical issue;
  3. Even if meaningful trade-offs can be made, what authority do decision-makers have to make these decisions?

Preventing the harms of LLMs is still a very new research direction. The current content focuses mainly on the following two points:

  1. Harm related to performance differences: for a specific task (such as question answering), performance differences mean that the model performs better for some groups and worse for others (a minimal sketch of measuring such a gap follows this list).
  2. Harm related to social biases and stereotypes: social bias is the systematic association of a concept (such as science) with certain groups (such as men) over others (such as women). Stereotypes are a specific, prevalent form of social bias in which the association is widely held, oversimplified, and generally fixed.
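To make the first kind of harm concrete, below is a minimal sketch of how a performance gap can be quantified: compute the task metric per group and look at the spread. The groups, predictions, and gold answers are hypothetical.

```python
# A minimal sketch of quantifying a performance-difference harm:
# compute per-group accuracy on a QA task and report the gap between groups.
# The records below (group labels, predictions, gold answers) are hypothetical.
from collections import defaultdict

# Each record: (demographic group, model prediction, gold answer)
eval_records = [
    ("group_a", "paris", "paris"),
    ("group_a", "berlin", "berlin"),
    ("group_b", "madrid", "rome"),
    ("group_b", "oslo", "oslo"),
]

correct, total = defaultdict(int), defaultdict(int)
for group, pred, gold in eval_records:
    total[group] += 1
    correct[group] += int(pred == gold)

accuracy = {g: correct[g] / total[g] for g in total}
gap = max(accuracy.values()) - min(accuracy.values())
print(accuracy)                    # per-group accuracy
print(f"accuracy gap: {gap:.2f}")  # a large gap signals a performance-difference harm
```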

Due to the opacity of LLM pre-training datasets and their reliance on web-crawled data, they are likely to contain online discussions of political topics (e.g., climate change, abortion, gun control), hate speech, discrimination, and other forms of media bias. Some researchers have identified misogyny, pornography, and other harmful stereotypes within these pre-training datasets. Similarly, researchers have observed that LLMs exhibit political biases that exacerbate the polarization already present in the pre-training corpora, thereby perpetuating societal biases in hate speech prediction and misinformation detection.

Recent studies have delved into the potential sources of bias in LLMs (such as the training data or model specifications), the ethical concerns associated with deploying biased LLMs in diverse applications, and current methods for mitigating these biases. An interesting finding is that all models exhibit systematic preferences for stereotyped data, which points to an urgent need to build high-quality pre-training datasets. A minimal probe of such stereotypical associations is sketched below.
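As a rough illustration of how such an association can be probed, here is a minimal CrowS-Pairs-style sketch; the model name (`bert-base-uncased`) and the sentence pair are my own illustrative assumptions, not part of the course material. It compares the pseudo-log-likelihood a masked language model assigns to a stereotypical sentence versus an anti-stereotypical one.

```python
# A minimal, illustrative bias probe: compare the pseudo-log-likelihood a masked
# language model assigns to a stereotypical vs. an anti-stereotypical sentence.
# The model and sentence pair are illustrative assumptions, not from the course.
import torch
from transformers import AutoModelForMaskedLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModelForMaskedLM.from_pretrained("bert-base-uncased")
model.eval()

def pseudo_log_likelihood(sentence: str) -> float:
    """Sum the log-probability of each token when it is masked in turn."""
    ids = tokenizer(sentence, return_tensors="pt")["input_ids"][0]
    total = 0.0
    for i in range(1, len(ids) - 1):  # skip [CLS] and [SEP]
        masked = ids.clone()
        masked[i] = tokenizer.mask_token_id
        with torch.no_grad():
            logits = model(masked.unsqueeze(0)).logits[0, i]
        total += torch.log_softmax(logits, dim=-1)[ids[i]].item()
    return total

stereo = "The scientist finished his experiment."
anti = "The scientist finished her experiment."
print(pseudo_log_likelihood(stereo), pseudo_log_likelihood(anti))
# A model that systematically scores the stereotypical sentence higher exhibits
# exactly the kind of association bias described above.
```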

Toxicity and disinformation are two key harms that researchers are concerned about. In the context of toxicity and disinformation, LLMs can serve two purposes:

  1. They can be used to generate toxic content, which malicious actors can exploit to amplify their information dissemination;
  2. They can be used to detect disinformation, thereby aiding in content moderation.

The challenge of identifying toxicity lies in the ambiguity of labeling: an output may be toxic in one context but not in another, and different individuals may have varying perceptions of toxicity. Jigsaw, a division of Google, focuses on using technology to address societal issues such as extremism. In 2017, it launched a widely used proprietary service called Perspective, a machine learning model that assigns a toxicity score between 0 and 1 to each input. The model was trained on Wikipedia discussion pages (where volunteer moderators discuss editing decisions) labeled by crowdworkers. The service is available at https://perspectiveapi.com/.
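For reference, here is a minimal sketch of scoring a comment with Perspective, following the client usage shown in its public documentation; `YOUR_API_KEY` is a placeholder for your own key, and the example comment text is arbitrary.

```python
# A minimal sketch of requesting a TOXICITY score from the Perspective API.
# Requires `pip install google-api-python-client` and a Perspective API key.
from googleapiclient import discovery

API_KEY = "YOUR_API_KEY"  # placeholder

client = discovery.build(
    "commentanalyzer",
    "v1alpha1",
    developerKey=API_KEY,
    discoveryServiceUrl="https://commentanalyzer.googleapis.com/$discovery/rest?version=v1alpha1",
    static_discovery=False,
)

analyze_request = {
    "comment": {"text": "You are a wonderful person."},
    "requestedAttributes": {"TOXICITY": {}},
}

response = client.comments().analyze(body=analyze_request).execute()
# The summary score is between 0 and 1, as described above.
score = response["attributeScores"]["TOXICITY"]["summaryScore"]["value"]
print(f"toxicity score: {score:.3f}")
```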

Disinformation is the deliberate presentation of false or misleading information to deceive a specific audience, often with adversarial intent. A related term is misinformation (which can be thought of as analogous to "hallucinations"), referring to false or misleading information that is presented as true, regardless of intent to deceive. It is important to note that misleading and false information is not always verifiable; at times, it may merely raise doubts or shift the burden of proof onto the audience.

A recent research hotspot is hallucination. To differentiate between types of hallucination, we can analyze the source content given to the model, such as the prompt, which may contain examples or retrieved context. There are two types: intrinsic and extrinsic hallucinations. In the former, the generated text logically contradicts the source content. In the latter, users cannot verify the correctness of the output from the provided source; the source content lacks sufficient information to evaluate the output, so its truth remains undetermined. An extrinsic hallucination is not necessarily erroneous, as it simply means the model produced an output that can be neither supported nor refuted by the source content. However, this is still somewhat undesirable, as the provided information cannot be verified.
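To make the distinction concrete, here is a small made-up example (the source text and outputs are invented for illustration):

```python
# Made-up example contrasting the two hallucination types relative to a source text.
source = "The report was published in 2021 and covers electricity usage in Norway."

intrinsic_hallucination = "The report was published in 2019."
# Intrinsic: logically contradicts the source, which explicitly states 2021.

extrinsic_hallucination = "The report was written by a team of twelve researchers."
# Extrinsic: the source says nothing about the authors, so the claim can be
# neither supported nor refuted from the given content.
```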

To better compare the two types of hallucination, I cite a figure from a survey.

P.S. Recently I found some interesting papers that discuss the abilities of LLMs; I may write up notes on them in Chinese after finishing the Datawhale study.

END
