
whisper-cli usually refers to the command-line tool shipped with whisper.cpp (the executable formerly named main). The notes below cover its core usage on Windows, aimed at lightweight, CPU-first, fully local speech recognition.
1. Basic Command Format

```bash
whisper-cli -m models/ggml-base.en.bin -f audio.wav
```

Core parameters:

- `-m <model-path>`: path to the model file (required; download a quantized model such as ggml-base.en.bin beforehand)
- `-f <audio-path>`: path to the audio file to transcribe (flac/mp3/ogg/wav are supported; 16 kHz mono WAV is recommended — a conversion example follows below)
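If your source audio is not already 16 kHz mono WAV, ffmpeg is a common way to convert it (ffmpeg is a separate tool, not part of whisper.cpp; file names here are placeholders):

```bash
# Convert any input to 16 kHz, mono, 16-bit PCM WAV for whisper-cli
ffmpeg -i input.mp3 -ar 16000 -ac 1 -c:a pcm_s16le audio.wav
```

Recent whisper-cli builds decode mp3/flac/ogg directly (see the supported-formats line in the reference at the end), so this step is optional.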
2. Key Optimization Parameters (CPU-First + Real-Time)

Quantization and low-resource configuration

```bash
whisper-cli -m models/ggml-tiny.en.bin -f audio.wav -ac 0 -t 4
```

- `-ac <size>` / `--audio-ctx <size>`: audio context size (0 = full context; smaller values cut compute at some cost in accuracy)
- `-t <num>` / `--threads <num>`: number of CPU threads (roughly half the physical core count is a sensible starting point, balancing speed against resource use; a quick way to measure is sketched below)
- Model choice: prefer tiny/base English models (e.g. ggml-tiny.en.bin); they are small and fast, which suits narrow-domain voice commands
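The best thread count varies by machine, so it is worth measuring rather than guessing. A minimal benchmark loop (bash syntax, e.g. under Git Bash or WSL; the model and audio paths are placeholders):

```bash
# Time a short transcription at several thread counts; -np suppresses
# everything except the results so the timing output stays readable.
for t in 2 4 8; do
  echo "--- threads: $t ---"
  time ./whisper-cli -m models/ggml-tiny.en.bin -f audio.wav -t "$t" -np > /dev/null
done
```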
Real-time speech recognition (microphone input)

whisper-cli itself only processes audio files. Real-time microphone transcription is handled by the separate stream example that ships with whisper.cpp (built as whisper-stream in recent builds, formerly stream; it requires SDL2 — see the build sketch below):

```bash
whisper-stream -m models/ggml-base.en.bin -c 1 --step 500 --length 5000
```

- `-c <device-id>` / `--capture <device-id>`: capture (microphone) device ID; whisper-stream prints the available capture devices when it starts
- `--step <ms>`: step size of the sliding recognition window in milliseconds (smaller = closer to real time; 500 is a reasonable value)
- `--length <ms>`: audio length per recognition pass (e.g. 5000, i.e. 5 seconds)
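The stream tool is not built by default. Assuming a standard whisper.cpp checkout with SDL2 available, enabling the WHISPER_SDL2 CMake option builds it:

```bash
# Build whisper.cpp with SDL2 support so the stream example is compiled
cmake -B build -DWHISPER_SDL2=ON
cmake --build build --config Release
# The binary then lands under build/bin/ (whisper-stream)
```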
Domain-specific optimization (business/service voice commands)

```bash
whisper-cli -m models/ggml-base.bin -f audio.wav -l zh -ml 16 --prompt "收款 配镜 验光 取镜"
```

- `--prompt <text>`: initial prompt with domain keywords, nudging the decoder toward the expected vocabulary; for Chinese keywords like the ones above, use a multilingual model (ggml-base.bin rather than the English-only ggml-base.en.bin) together with `-l zh`
- `-ml <n>` / `--max-len <n>`: maximum segment length in characters, useful for short command-style utterances

For a strictly closed command set, decoding can also be constrained with a grammar, as sketched after this list.
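whisper-cli exposes `--grammar` / `--grammar-rule` (see the reference below) to constrain decoding with a GBNF grammar. A minimal sketch, assuming the four commands above are the only valid outputs and that your build accepts a grammar file path (some versions take the grammar inline instead; the file name commands.gbnf is arbitrary):

```bash
# commands.gbnf -- every utterance must decode to one of four commands
cat > commands.gbnf <<'EOF'
root ::= "收款" | "配镜" | "验光" | "取镜"
EOF

whisper-cli -m models/ggml-base.bin -f audio.wav -l zh \
  --grammar commands.gbnf --grammar-rule root
```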
3. Output and Format Control

- `-otxt` / `--output-txt`: save the transcript to a .txt file
- `-oj` / `--output-json`: write the result as JSON, convenient for downstream programs (`-ojf` includes more detail)

```bash
whisper-cli -m models/ggml-base.en.bin -f audio.wav -oj
```
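The JSON is easy to consume from scripts. A minimal sketch with jq, assuming the default output name audio.wav.json and the top-level transcription array that whisper.cpp's JSON output uses (verify against your build's output):

```bash
# Extract just the recognized text segments from the JSON output
jq -r '.transcription[].text' audio.wav.json
```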
Full Command-Line Reference

Output of `whisper-cli --help`:

```text
usage: D:\ai\asr\whisper64\whisper-cli.exe [options] file0 file1 ...
supported audio formats: flac, mp3, ogg, wav
options:
-h, --help [default] show this help message and exit
-t N, --threads N [4 ] number of threads to use during computation
-p N, --processors N [1 ] number of processors to use during computation
-ot N, --offset-t N [0 ] time offset in milliseconds
-on N, --offset-n N [0 ] segment index offset
-d N, --duration N [0 ] duration of audio to process in milliseconds
-mc N, --max-context N [-1 ] maximum number of text context tokens to store
-ml N, --max-len N [0 ] maximum segment length in characters
-sow, --split-on-word [false ] split on word rather than on token
-bo N, --best-of N [5 ] number of best candidates to keep
-bs N, --beam-size N [5 ] beam size for beam search
-ac N, --audio-ctx N [0 ] audio context size (0 - all)
-wt N, --word-thold N [0.01 ] word timestamp probability threshold
-et N, --entropy-thold N [2.40 ] entropy threshold for decoder fail
-lpt N, --logprob-thold N [-1.00 ] log probability threshold for decoder fail
-nth N, --no-speech-thold N [0.60 ] no speech threshold
-tp, --temperature N [0.00 ] The sampling temperature, between 0 and 1
-tpi, --temperature-inc N [0.20 ] The increment of temperature, between 0 and 1
-debug, --debug-mode [false ] enable debug mode (eg. dump log_mel)
-tr, --translate [false ] translate from source language to english
-di, --diarize [false ] stereo audio diarization
-tdrz, --tinydiarize [false ] enable tinydiarize (requires a tdrz model)
-nf, --no-fallback [false ] do not use temperature fallback while decoding
-otxt, --output-txt [false ] output result in a text file
-ovtt, --output-vtt [false ] output result in a vtt file
-osrt, --output-srt [false ] output result in a srt file
-olrc, --output-lrc [false ] output result in a lrc file
-owts, --output-words [false ] output script for generating karaoke video
-fp, --font-path [/System/Library/Fonts/Supplemental/Courier New Bold.ttf] path to a monospace font for karaoke video
-ocsv, --output-csv [false ] output result in a CSV file
-oj, --output-json [false ] output result in a JSON file
-ojf, --output-json-full [false ] include more information in the JSON file
-of FNAME, --output-file FNAME [ ] output file path (without file extension)
-np, --no-prints [false ] do not print anything other than the results
-ps, --print-special [false ] print special tokens
-pc, --print-colors [false ] print colors
--print-confidence [false ] print confidence
-pp, --print-progress [false ] print progress
-nt, --no-timestamps [false ] do not print timestamps
-l LANG, --language LANG [en ] spoken language ('auto' for auto-detect)
-dl, --detect-language [false ] exit after automatically detecting language
--prompt PROMPT [ ] initial prompt (max n_text_ctx/2 tokens)
--carry-initial-prompt [false ] always prepend initial prompt
-m FNAME, --model FNAME [models/ggml-base.en.bin] model path
-f FNAME, --file FNAME [ ] input audio file path
-oved D, --ov-e-device DNAME [CPU ] the OpenVINO device used for encode inference
-dtw MODEL --dtw MODEL [ ] compute token-level timestamps
-ls, --log-score [false ] log best decoder scores of tokens
-ng, --no-gpu [false ] disable GPU
-fa, --flash-attn [true ] enable flash attention
-nfa, --no-flash-attn [false ] disable flash attention
-sns, --suppress-nst [false ] suppress non-speech tokens
--suppress-regex REGEX [ ] regular expression matching tokens to suppress
--grammar GRAMMAR [ ] GBNF grammar to guide decoding
--grammar-rule RULE [ ] top-level GBNF grammar rule name
--grammar-penalty N [100.0 ] scales down logits of nongrammar tokens
Voice Activity Detection (VAD) options:
--vad [false ] enable Voice Activity Detection (VAD)
-vm FNAME, --vad-model FNAME [ ] VAD model path
-vt N, --vad-threshold N [0.50 ] VAD threshold for speech recognition
-vspd N, --vad-min-speech-duration-ms N [250 ] VAD min speech duration (0.0-1.0)
-vsd N, --vad-min-silence-duration-ms N [100 ] VAD min silence duration (to split segments)
-vmsd N, --vad-max-speech-duration-s N [FLT_MAX] VAD max speech duration (auto-split longer)
-vp N, --vad-speech-pad-ms N [30 ] VAD speech padding (extend segments)
-vo N, --vad-samples-overlap N [0.10 ] VAD samples overlap (seconds between segments)
```
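The VAD options above pair naturally with file transcription, since pre-segmenting speech skips long silences. A minimal sketch, assuming a Silero VAD model has been downloaded (ggml-silero-v5.1.2.bin is the file whisper.cpp's VAD download script currently fetches; adjust to your copy):

```bash
# Transcribe with voice-activity detection to skip non-speech regions
whisper-cli -m models/ggml-base.en.bin -f audio.wav \
  --vad --vad-model models/ggml-silero-v5.1.2.bin -vt 0.5
```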
阿雪's Technical Outlook

Embrace open source and sharing, witness the miracle of technological progress, and enjoy the happy times of humanity! Let's actively join the wave of technology sharing, not only as beneficiaries but also as contributors. Whether sharing our own code, writing technical blogs, or participating in the maintenance and improvement of open source projects, every small action may become a huge force driving technological progress.