Prompt Engineering

https://lilianweng.github.io/posts/2023-03-15-prompt-engineering/

Few-shot: providing examples improves quality, but it lengthens the context and slows down inference.

Possible issues: 1. an imbalanced label distribution among the examples biases the generated labels; 2. the label of the last example is more likely to be repeated; 3. common tokens are more likely to be output than rare ones.

Few-shot example selection:

  1. For the current question, retrieve semantically similar examples from an example pool and use them as the few-shot demonstrations (see the sketch after this list);

  2. Use an algorithm to pick a maximally diverse set of examples;
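A minimal sketch of both selection strategies. The `embed()` placeholder is a throwaway hashed bag-of-words so the code runs; in practice you would swap in a real sentence encoder. All names and fields here are illustrative, not from the original post.

```python
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    # Placeholder embedding (hashed bag-of-words) so the sketch runs;
    # replace with a real sentence encoder in practice.
    v = np.zeros(dim)
    for tok in text.lower().split():
        v[hash(tok) % dim] += 1.0
    return v

def cosine(a: np.ndarray, b: np.ndarray) -> float:
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))

def select_similar(question: str, pool: list[dict], k: int = 4) -> list[dict]:
    """Strategy 1: pick the k pool examples most semantically similar to the question."""
    q = embed(question)
    ranked = sorted(pool, key=lambda ex: cosine(q, embed(ex["question"])), reverse=True)
    return ranked[:k]

def select_diverse(pool: list[dict], k: int = 4) -> list[dict]:
    """Strategy 2: greedy max-min selection of mutually diverse examples."""
    vecs = [embed(ex["question"]) for ex in pool]
    chosen = [0]
    while len(chosen) < min(k, len(pool)):
        rest = [i for i in range(len(pool)) if i not in chosen]
        # pick the example whose closest already-chosen neighbour is farthest away
        best = max(rest, key=lambda i: min(1 - cosine(vecs[i], vecs[j]) for j in chosen))
        chosen.append(best)
    return [pool[i] for i in chosen]
```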

Few-shot example ordering:

  1. Keep the order sufficiently random (a different order can also be used on each inference call);

Instruct LM:

Instructed LM (e.g. InstructGPT, natural instruction) finetunes a pretrained model with high-quality tuples of (task instruction, input, ground truth output) to make LM better understand user intention and follow instruction.
RLHF (Reinforcement Learning from Human Feedback) is a common method to do so.
When writing instructions, be specific and precise; rather than saying what not to do, specify what should be done.

State the target audience in the prompt, e.g. "explain it to a 6-year-old ..." or "for use at work ...".

Sampling:

Use a somewhat higher temperature and sample several times, then take a majority vote to pick the answer (works when the output is a class label; two sampled sentences rarely match exactly, so it is less applicable to free-form generation). For code output with existing test cases, you can retry several times and keep the attempt that passes.
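A minimal self-consistency sketch. `sample_answer` is a hypothetical model call stubbed with canned outputs so the example runs; `generate_code` and `run_tests` are placeholder callables supplied by the caller.

```python
import random
from collections import Counter

def sample_answer(question: str, temperature: float = 0.8) -> str:
    # Hypothetical model call, stubbed with noisy canned outputs so the sketch runs;
    # a real version would sample one completion at the given temperature.
    return random.choice(["71", "71", "71", "17"])

def majority_vote(question: str, n_samples: int = 10) -> str:
    """Sample several times and keep the most common final answer (label-like outputs)."""
    votes = Counter(sample_answer(question) for _ in range(n_samples))
    return votes.most_common(1)[0][0]

def retry_until_tests_pass(generate_code, run_tests, max_retries: int = 5):
    """Code-generation variant: retry and keep the first attempt that passes the tests."""
    for _ in range(max_retries):
        code = generate_code()
        if run_tests(code):
            return code
    return None

print(majority_vote("How much more money does Jack need?"))
```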

CoT (Chain of Thought) ("Thought"-style prompting also falls into this category):

Works well for complex tasks; works best with large models (roughly 50B+ parameters).

Few-shot CoT:

Question: Tom and Elizabeth have a competition to climb a hill. Elizabeth takes 30 minutes to climb the hill. Tom takes four times as long as Elizabeth does to climb the hill. How many hours does it take Tom to climb up the hill?

Answer: It takes Tom 30*4 = <<30*4=120>>120 minutes to climb the hill. It takes Tom 120/60 = <<120/60=2>>2 hours to climb the hill. So the answer is 2.

===

Question: Jack is a soccer player. He needs to buy two pairs of socks and a pair of soccer shoes. Each pair of socks cost 9.50, and the shoes cost 92. Jack has $40. How much more money does Jack need?

Answer: The total cost of two pairs of socks is 9.50 x 2 = <<9.5*2=19>>19. The total cost of the socks and the shoes is 19 + 92 = <<19+92=111>>111. Jack needs 111 - 40 = <<111-40=71>>71 more. So the answer is 71.

===

Question: Marty has 100 centimeters of ribbon that he must cut into 4 equal parts. Each of the cut parts must be divided into 5 equal parts. How long will each final cut be?

Answer:

Zero-shot CoT:

Question: Marty has 100 centimeters of ribbon that he must cut into 4 equal parts. Each of the cut parts must be divided into 5 equal parts. How long will each final cut be?

Answer: Let's think step by step.

Takeaways:

  1. Sampling several times and taking a majority vote improves results;

  2. Changing the order of the few-shot examples, or changing which examples are selected, adds randomness; combined with multiple samples, this also improves results;

  3. If the training data contains only answers without reasoning steps, use an LLM to generate rationales automatically and keep the ones that also lead to the correct answer; training on these <question, rationale, answer> triples improves results (if the training data has no answers at all, a workaround is to approximate them with multiple sampling + majority vote); see the sketch after this list;

  4. Even the separator between reasoning steps is a trick worth tuning:

When separating reasoning steps, the newline symbol \n works better than "step i", a period "." or a semicolon ";".
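A minimal sketch of takeaway 3 (rationale bootstrapping), assuming hypothetical `generate_rationale` and `extract_answer` helpers that wrap an LLM call and an answer parser; the filtering logic is the point, not the stubs.

```python
def generate_rationale(question: str) -> str:
    # Hypothetical LLM call, e.g. zero-shot CoT ("Let's think step by step.").
    raise NotImplementedError

def extract_answer(rationale: str) -> str:
    # Hypothetical parser that pulls the final answer out of the rationale text.
    raise NotImplementedError

def build_rationale_training_set(dataset: list[dict]) -> list[dict]:
    """Generate rationales for answer-only data and keep only those whose final
    answer matches the known label, yielding <question, rationale, answer> triples."""
    kept = []
    for ex in dataset:  # each ex: {"question": ..., "answer": ...}
        rationale = generate_rationale(ex["question"])
        if extract_answer(rationale) == ex["answer"]:
            kept.append({"question": ex["question"],
                         "rationale": rationale,
                         "answer": ex["answer"]})
    return kept
```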

Further CoT findings:

  1. Taking the majority vote only over the top-k most complex sampled outputs (the ones with the most reasoning steps) works well;

  2. Complex few-shot reasoning examples improve results on complex questions but hurt results on simple ones;

  3. "Question:" works better than "Q:";

  4. Self-ask: the prompt leads the model to generate question 1 --> answer 1 --> question 2 --> ... until it produces a final answer instead of another question (the generated sub-questions can be answered by the LLM itself, or by tools such as a knowledge base or a search engine); see the sketch after this list.
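A minimal sketch of the self-ask loop from item 4, assuming a hypothetical `llm()` completion helper; the exact prompt wording is illustrative, not the format from the paper.

```python
def llm(prompt: str) -> str:
    # Hypothetical completion call; replace with a real model API.
    raise NotImplementedError

def self_ask(question: str, max_steps: int = 5) -> str:
    """Generate follow-up questions and intermediate answers until the model
    emits a final answer instead of another question."""
    transcript = f"Question: {question}\n"
    for _ in range(max_steps):
        # Ask for either the next sub-question or the final answer.
        step = llm(transcript +
                   "If you can answer now, reply 'Final answer: ...'; "
                   "otherwise reply 'Follow up: ...' with the next sub-question.")
        if step.strip().lower().startswith("final answer:"):
            return step.split(":", 1)[1].strip()
        # The sub-question could instead be answered by a search engine or knowledge base.
        intermediate = llm(f"Answer concisely: {step}")
        transcript += f"{step}\nIntermediate answer: {intermediate}\n"
    return llm(transcript + "Final answer:")
```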

Tree of Thoughts (ToT):

Put all the nodes at the current level into the input prompt and have the LLM produce a CoT plus a choice of which node to pick. Sample several times and expand the most frequently chosen node first.
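A greedy, vote-based sketch of that expansion rule, assuming hypothetical `propose()` and `vote()` LLM helpers; a full ToT search would additionally keep a beam of nodes and allow backtracking.

```python
from collections import Counter

def propose(state: str, k: int = 3) -> list[str]:
    # Hypothetical LLM call: propose k candidate next thoughts for a partial solution.
    raise NotImplementedError

def vote(candidates: list[str]) -> int:
    # Hypothetical LLM call: all sibling nodes go into one prompt and the model
    # reasons (CoT) about which one to pick, returning its index.
    raise NotImplementedError

def tree_of_thoughts(root: str, depth: int = 3, n_votes: int = 5) -> str:
    """At each level, sample the vote several times and expand the node that was
    chosen most often."""
    state = root
    for _ in range(depth):
        candidates = propose(state)
        tally = Counter(vote(candidates) for _ in range(n_votes))
        state = candidates[tally.most_common(1)[0][0]]
    return state
```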

Automatic Prompt Engineer:

  1. Put a small number of examples into the prompt and have the LLM output an instruction (the full loop is sketched after this list):

{{Given desired input-output pairs}}\n\nThe instruction is

  2. Iteratively refine these seed instructions:

Generate a variation of the following instruction while keeping the semantic meaning.\n\nInput: ...\n\nOutput:...

  3. Score each automatically generated instruction by the sum of the scores of its outputs on a validation set.
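A minimal sketch of the whole APE loop, assuming a hypothetical `llm()` helper and a validation set of `{"input": ..., "output": ...}` dicts; exact-match scoring is just one possible choice.

```python
def llm(prompt: str) -> str:
    # Hypothetical completion call.
    raise NotImplementedError

def score(instruction: str, validation_set: list[dict]) -> float:
    """Sum of per-example scores of the instruction's outputs on a validation set
    (exact match used here as a simple stand-in)."""
    total = 0.0
    for ex in validation_set:
        output = llm(f"{instruction}\n\nInput: {ex['input']}\n\nOutput:")
        total += float(output.strip() == ex["output"])
    return total

def ape(demo_pairs: list[tuple], validation_set: list[dict],
        n_seeds: int = 5, n_rounds: int = 3) -> str:
    demos = "\n".join(f"Input: {x}\nOutput: {y}" for x, y in demo_pairs)
    # Step 1: induce seed instructions from the input-output pairs.
    seeds = [llm(f"{demos}\n\nThe instruction is") for _ in range(n_seeds)]
    for _ in range(n_rounds):
        # Step 2: propose a semantically equivalent variation of the current best.
        best = max(seeds, key=lambda ins: score(ins, validation_set))
        seeds.append(llm("Generate a variation of the following instruction while "
                         f"keeping the semantic meaning.\n\nInput: {best}\n\nOutput:"))
    # Step 3: keep the instruction with the highest validation score.
    return max(seeds, key=lambda ins: score(ins, validation_set))
```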

Generating few-shot demonstrations with CoT rationales (personally, I think hand-picking the most representative CoT examples works best):

  1. Cluster the available examples and, for each cluster, pick the one closest to the centroid (the most representative);

  2. Generate a rationale for each selected sample with zero-shot CoT (see the sketch after this list);
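A minimal Auto-CoT-style sketch of these two steps, assuming hypothetical `embed()` and `zero_shot_cot()` helpers; k-means comes from scikit-learn.

```python
import numpy as np
from sklearn.cluster import KMeans

def embed(question: str) -> np.ndarray:
    # Hypothetical sentence encoder; any embedding model works here.
    raise NotImplementedError

def zero_shot_cot(question: str) -> str:
    # Hypothetical LLM call with the "Let's think step by step." prompt.
    raise NotImplementedError

def build_cot_demos(questions: list[str], k: int = 8) -> list[dict]:
    """Cluster the question pool, take the question nearest each centroid, and
    generate its rationale with zero-shot CoT."""
    X = np.stack([embed(q) for q in questions])
    km = KMeans(n_clusters=k, n_init=10).fit(X)
    demos = []
    for c in range(k):
        members = [i for i, lbl in enumerate(km.labels_) if lbl == c]
        nearest = min(members, key=lambda i: np.linalg.norm(X[i] - km.cluster_centers_[c]))
        demos.append({"question": questions[nearest],
                      "rationale": zero_shot_cot(questions[nearest])})
    return demos
```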

Open-book QA:

Use a search engine; split each document into chunks of 6 sentences and match them against the question with TF-IDF (sketched below).
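A minimal retrieval sketch of that setup using scikit-learn's TF-IDF; the 6-sentence chunking and top-k values mirror the description above, everything else is illustrative.

```python
import re
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

def split_into_chunks(doc: str, sentences_per_chunk: int = 6) -> list[str]:
    """Split a document into chunks of roughly six sentences each."""
    sents = re.split(r"(?<=[.!?])\s+", doc.strip())
    return [" ".join(sents[i:i + sentences_per_chunk])
            for i in range(0, len(sents), sentences_per_chunk)]

def retrieve(question: str, docs: list[str], top_k: int = 3) -> list[str]:
    """Rank all chunks against the question with TF-IDF and return the best ones
    to prepend to the prompt."""
    chunks = [c for d in docs for c in split_into_chunks(d)]
    vec = TfidfVectorizer().fit(chunks + [question])
    sims = cosine_similarity(vec.transform([question]), vec.transform(chunks))[0]
    return [chunks[i] for i in sims.argsort()[::-1][:top_k]]
```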

Even when newly published information is retrieved and added to the prompt, questions about it are still answered worse than questions about older facts; the reason is that the new information conflicts with the model's memorized knowledge.

Closed-book QA:

Before answering a question, first have the model ask itself what knowledge is needed to answer it, then add that generated knowledge to the prompt; this reportedly improves results (sketched below).
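A minimal sketch of this generated-knowledge step, assuming a hypothetical `llm()` helper; the prompt wording is illustrative.

```python
def llm(prompt: str) -> str:
    # Hypothetical completion call.
    raise NotImplementedError

def answer_with_generated_knowledge(question: str, n_facts: int = 3) -> str:
    """First have the model write down the knowledge needed for the question,
    then answer with that knowledge prepended to the prompt."""
    facts = [llm(f"Generate a fact that helps answer the question.\n"
                 f"Question: {question}\nFact:") for _ in range(n_facts)]
    context = "\n".join(facts)
    return llm(f"{context}\n\nQuestion: {question}\nAnswer:")
```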

Have the LLM generate a program --> execute the program --> take the execution result as the answer (sketched below).
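A minimal program-aided sketch, assuming a hypothetical `llm()` helper that returns Python code; executing model output with `exec` is shown only for illustration and should be sandboxed in practice.

```python
def llm(prompt: str) -> str:
    # Hypothetical completion call that returns Python code.
    raise NotImplementedError

def solve_by_program(question: str):
    """Have the model write a program, execute it, and take the execution result
    as the answer."""
    code = llm(f"Write Python code that computes the answer to:\n{question}\n"
               "Store the result in a variable named `answer`.")
    namespace: dict = {}
    exec(code, namespace)  # for illustration only -- sandbox untrusted code in practice
    return namespace.get("answer")
```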
