Large Language Model (LLM) Tokenizers - bos_token - eos_token - unk_token


  • [1. NVIDIA NeMo Framework](#1-nvidia-nemo-framework)
    • [1.1. Tokenizers](#11-tokenizers)
  • [2. PyTorch Module code](#2-pytorch-module-code)
    • [2.1. `torchtune.modules.tokenizers._tiktoken`](#21-torchtunemodulestokenizers_tiktoken)

1. NVIDIA NeMo Framework

https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html

NVIDIA NeMo Framework is a scalable and cloud-native generative AI framework built for researchers and developers working on Large Language Models, multimodal models, and Speech AI (e.g., Automatic Speech Recognition and Text-to-Speech).

It enables users to efficiently create, customize, and deploy new generative AI models by leveraging existing code and pre-trained model checkpoints.

NeMo Framework provides end-to-end support for developing Large Language Models (LLMs) and Multimodal Models (MMs).

1.1. Tokenizers

class nemo.collections.common.tokenizers.AutoTokenizer(
    pretrained_model_name: str,
    vocab_file: str | None = None,
    merges_file: str | None = None,
    mask_token: str | None = None,
    bos_token: str | None = None,
    eos_token: str | None = None,
    pad_token: str | None = None,
    sep_token: str | None = None,
    cls_token: str | None = None,
    unk_token: str | None = None,
    additional_special_tokens: List | None = [],
    use_fast: bool | None = False,
    trust_remote_code: bool | None = False,
)

pretrained_model_name - corresponds to the HuggingFace AutoTokenizer's 'pretrained_model_name_or_path' argument.

vocab_file - path to a vocabulary file whose entries are separated by newlines.

mask_token - mask token.

bos_token - beginning-of-sequence token.

eos_token - end-of-sequence token; usually equal to sep_token.

pad_token - token used for padding.

sep_token - token used for separating sequences.

cls_token - classification token; usually equal to bos_token.

unk_token - token used for unknown (out-of-vocabulary) input.

additional_special_tokens - list of tokens beyond the standard special tokens (bos, eos, pad, etc.), e.g. sentinel tokens for T5 (<extra_id_0>, <extra_id_1>, etc.).

use_fast - whether to use the fast HuggingFace tokenizer.
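
To make the special-token arguments concrete, here is a minimal sketch of constructing the tokenizer and inspecting the resulting IDs. It assumes NeMo is installed and the HuggingFace Hub is reachable; "gpt2" and the custom token strings are illustrative choices, and the round-trip methods (`text_to_ids`, `ids_to_text`) come from NeMo's `TokenizerSpec` interface.

```python
# A minimal sketch, assuming NeMo is installed and the HuggingFace Hub is
# reachable; "gpt2" and the custom token strings are illustrative choices.
from nemo.collections.common.tokenizers import AutoTokenizer

tokenizer = AutoTokenizer(
    pretrained_model_name="gpt2",
    bos_token="<|bos|>",  # beginning-of-sequence token
    eos_token="<|eos|>",  # end-of-sequence token
    unk_token="<|unk|>",  # fallback for out-of-vocabulary input
    use_fast=True,
)

# Round trip via NeMo's TokenizerSpec interface: text -> ids -> text.
ids = tokenizer.text_to_ids("Hello, tokenizers!")
print(ids)
print(tokenizer.ids_to_text(ids))

# The special tokens are exposed as IDs for building model inputs,
# e.g. [bos_id] + ids + [eos_id].
print(tokenizer.bos_id, tokenizer.eos_id, tokenizer.unk_id)
```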

2. PyTorch Module code

https://pytorch.org/torchtune/0.1/_modules/index.html

2.1. torchtune.modules.tokenizers._tiktoken

https://pytorch.org/torchtune/0.1/_modules/torchtune/modules/tokenizers/_tiktoken.html

        path (str): Path to pretrained tokenizer checkpoint file.
        name (str): Name of the tokenizer (used by tiktoken for identification).
        pattern (str): Regex pattern used for string parsing.
        all_special_tokens (Optional[List[str]]): List of all special tokens. 
            First element must be bos token, second element must be eos token, final element must be python tag. 
            All elements must be unique. Length must be at most 256. Default: None (will use ALL_SPECIAL_TOKENS)
        bos_token (str): Beginning of sequence token. Defaults to BEGIN_OF_TEXT.
        eos_token (str): End of sequence token. Defaults to END_OF_TEXT.
        start_header_id (str): Start header token. Defaults to START_HEADER_ID.
        end_header_id (str): End header token. Defaults to END_HEADER_ID.
        step_id (str): Step token. Defaults to STEP_ID.
        eom_id (str): End of message token. Defaults to EOM_ID.
        eot_id (str): End of turn token. Defaults to EOT_ID.
        python_tag (str): Python tag token. Defaults to PYTHON_TAG.
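
For comparison, a hedged usage sketch against the torchtune 0.1 API documented above: the checkpoint path is a placeholder, and the `encode`/`decode` calls and `bos_id`/`eos_id` attributes follow torchtune's tiktoken tokenizer conventions.

```python
# A sketch against the torchtune 0.1 API; the tokenizer.model path is a
# placeholder for a real Llama-3-style tiktoken checkpoint.
from torchtune.modules.tokenizers import TikTokenTokenizer

tokenizer = TikTokenTokenizer(
    path="/path/to/llama3/tokenizer.model",  # pretrained checkpoint file
)

# encode() can prepend the bos token and append the eos token.
ids = tokenizer.encode("Hello, tokenizers!", add_bos=True, add_eos=True)
print(ids[0] == tokenizer.bos_id)   # True: first element is the bos token
print(ids[-1] == tokenizer.eos_id)  # True: last element is the eos token

print(tokenizer.decode(ids))
```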

