Large Language Model (LLM) Tokenizers - bos_token - eos_token - unk_token

  • 1. NVIDIA NeMo Framework
    • 1.1. Tokenizers
  • 2. PyTorch Module code
    • 2.1. `torchtune.modules.tokenizers._tiktoken`

1. NVIDIA NeMo Framework

https://docs.nvidia.com/nemo-framework/user-guide/latest/overview.html

NVIDIA NeMo Framework is a scalable and cloud-native generative AI framework built for researchers and developers working on Large Language Models, Multimodal Models, and Speech AI (e.g., Automatic Speech Recognition and Text-to-Speech).

It enables users to efficiently create, customize, and deploy new generative AI models by leveraging existing code and pre-trained model checkpoints.

NeMo Framework provides end-to-end support for developing Large Language Models (LLMs) and Multimodal Models (MMs).

1.1. Tokenizers

class nemo.collections.common.tokenizers.AutoTokenizer(
    pretrained_model_name: str,
    vocab_file: str | None = None,
    merges_file: str | None = None,
    mask_token: str | None = None,
    bos_token: str | None = None,
    eos_token: str | None = None,
    pad_token: str | None = None,
    sep_token: str | None = None,
    cls_token: str | None = None,
    unk_token: str | None = None,
    additional_special_tokens: List | None = [],
    use_fast: bool | None = False,
    trust_remote_code: bool | None = False,
)

pretrained_model_name - corresponds to the `pretrained_model_name_or_path` input argument of HuggingFace's AutoTokenizer.

vocab_file - path to a file with the vocabulary, which consists of characters separated by newlines.

mask_token - mask token

bos_token - the beginning of sequence token

eos_token - the end of sequence token. Usually equal to sep_token

pad_token - token to use for padding

sep_token - token used for separating sequences

cls_token - class token. Usually equal to bos_token

unk_token - token to use for unknown tokens

additional_special_tokens - list of other tokens beside standard special tokens (bos, eos, pad, etc.). For example, sentinel tokens for T5 (<extra_id_0>, <extra_id_1>, etc.)

use_fast - whether to use the fast HuggingFace tokenizer.
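
The constructor above simply forwards these settings to the wrapped Hugging Face tokenizer. As a hedged illustration (not an official NeMo recipe), the sketch below wraps a Hugging Face checkpoint and declares the special tokens explicitly; the model name "gpt2" is only an example, and the `text_to_ids` / `ids_to_text` helpers and the `bos_id` / `eos_id` / `unk_id` / `pad_id` properties are assumed from NeMo's common tokenizer interface rather than spelled out in the listing above.

```python
# Minimal sketch, assuming the class shown above is importable from
# nemo.collections.common.tokenizers and follows NeMo's common tokenizer
# interface (text_to_ids / ids_to_text plus *_id properties).
from nemo.collections.common.tokenizers import AutoTokenizer

tokenizer = AutoTokenizer(
    pretrained_model_name="gpt2",  # forwarded to HF pretrained_model_name_or_path; "gpt2" is just an example
    bos_token="<|bos|>",           # beginning-of-sequence token
    eos_token="<|eos|>",           # end-of-sequence token (often the same as sep_token)
    unk_token="<|unk|>",           # fallback token for out-of-vocabulary input
    pad_token="<|pad|>",           # padding token
    use_fast=True,                 # use the fast (Rust-backed) HuggingFace tokenizer
)

# Assumed helpers: text -> token ids and back.
ids = tokenizer.text_to_ids("Tokenizers turn text into ids.")
print(ids)
print(tokenizer.ids_to_text(ids))

# Assumed integer ids for the special tokens declared above.
print(tokenizer.bos_id, tokenizer.eos_id, tokenizer.unk_id, tokenizer.pad_id)
```

In practice, such overrides are typically only needed when the wrapped checkpoint does not already define the corresponding special tokens.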

2. PyTorch Module code

https://pytorch.org/torchtune/0.1/_modules/index.html

2.1. torchtune.modules.tokenizers._tiktoken

https://pytorch.org/torchtune/0.1/_modules/torchtune/modules/tokenizers/_tiktoken.html

        path (str): Path to pretrained tokenizer checkpoint file.
        name (str): Name of the tokenizer (used by tiktoken for identification).
        pattern (str): Regex pattern used for string parsing.
        all_special_tokens (Optional[List[str]]): List of all special tokens. 
            First element must be bos token, second element must be eos token, final element must be python tag. 
            All elements must be unique. Length must be at most 256. Default: None (will use ALL_SPECIAL_TOKENS)
        bos_token (str): Beginning of sequence token. Defaults to BEGIN_OF_TEXT.
        eos_token (str): End of sequence token. Defaults to END_OF_TEXT.
        start_header_id (str): Start header token. Defaults to START_HEADER_ID.
        end_header_id (str): End header token. Defaults to END_HEADER_ID.
        step_id (str): Step token. Defaults to STEP_ID.
        eom_id (str): End of message token. Defaults to EOM_ID.
        eot_id (str): End of turn token. Defaults to EOT_ID.
        python_tag (str): Python tag token. Defaults to PYTHON_TAG.
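
For orientation, here is a hedged usage sketch of the tokenizer documented above. It assumes the module exposes a `TikTokenTokenizer` class whose constructor takes the arguments listed (with `path` required and the special tokens defaulting as described), plus `encode` / `decode` methods with `add_bos` / `add_eos` flags; the checkpoint path is a placeholder.

```python
# Minimal sketch, not copied from the torchtune docs. Assumptions: the
# module exports TikTokenTokenizer, the constructor matches the argument
# list above, and encode/decode behave as described in the comments.
from torchtune.modules.tokenizers import TikTokenTokenizer

# The checkpoint path is a placeholder; point it at a real tiktoken
# tokenizer.model file (e.g. the one shipped with a Llama 3 checkpoint).
tokenizer = TikTokenTokenizer(path="/path/to/tokenizer.model")
# bos_token / eos_token default to BEGIN_OF_TEXT / END_OF_TEXT, so they
# usually do not need to be passed explicitly.

# Encoding can prepend the bos token and append the eos token.
token_ids = tokenizer.encode("Hello, tokenizer!", add_bos=True, add_eos=True)
print(token_ids)

# Decoding maps the ids back to text.
print(tokenizer.decode(token_ids))
```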
