Speech Recognition with the whisper Module in FFmpeg 8.0

FFmpeg 8.0 was released in September 2025. This release integrates whisper.cpp as a built-in audio filter, so the latest FFmpeg builds support the whisper module.

The filter accepts the options listed below. Options are separated with ':' and values are assigned with '=', e.g. :vad_threshold=0.3
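As an illustration of this colon-separated key=value syntax, the filter argument can be assembled programmatically (a minimal sketch; the helper function is hypothetical, but the option names match the list below):

```python
def build_whisper_filter(options):
    """Join key=value pairs with ':' into an FFmpeg whisper filter string."""
    pairs = ":".join(f"{k}={v}" for k, v in options.items())
    return f"whisper={pairs}"

opts = {
    "model": "../whisper.cpp/models/ggml-base.en.bin",
    "language": "en",
    "vad_threshold": 0.3,
}
print(build_whisper_filter(opts))
# whisper=model=../whisper.cpp/models/ggml-base.en.bin:language=en:vad_threshold=0.3
```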

model: The file path of the downloaded whisper.cpp model (mandatory).
language: The language to use for transcription ('auto' for auto-detect). Default value: "auto"
queue: The maximum size that will be queued into the filter before processing the audio with whisper. With a small value, the audio stream is processed more often, but the transcription quality is lower and more processing power is required. A large value (e.g. 10-20 s) produces more accurate results using less CPU (similar to the whisper-cli tool), but the transcription latency is higher, so it is not suitable for processing real-time streams. Consider combining the vad_model option with a large queue value. Default value: "3"
use_gpu: If the GPU support should be enabled. Default value: "true"
gpu_device: The GPU device index to use. Default value: "0"
destination: If set, the transcription output will be sent to the specified file or URL (use one of the FFmpeg AVIO protocols); otherwise, the output will be logged as info messages. The output will also be set in the "lavfi.whisper.text" frame metadata. If the destination is a file and it already exists, it will be overwritten.
format: The destination format string; it could be "text" (only the transcribed text will be sent to the destination), "srt" (subtitle format) or "json". Default value: "text"
vad_model: Path to the VAD model file. If set, the filter will load an additional voice activity detection module (https://github.com/snakers4/silero-vad) that will be used to fragment the audio queue; use this option setting a valid path obtained from the whisper.cpp repository (e.g. "../whisper.cpp/models/ggml-silero-v5.1.2.bin") and increase the queue parameter to a higher value (e.g. 20).
vad_threshold: The VAD threshold to use. Default value: "0.5"
vad_min_speech_duration: The minimum VAD speaking duration. Default value: "0.1"
vad_min_silence_duration: The minimum VAD silence duration. Default value: "0.5"
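Combining vad_model with a large queue value, as the option descriptions above suggest, might look like the following sketch (the model paths are assumptions; adjust them to your local whisper.cpp checkout):

```python
import subprocess

# Hypothetical paths -- adjust to where your whisper.cpp models live.
whisper_model = "../whisper.cpp/models/ggml-base.en.bin"
vad_model = "../whisper.cpp/models/ggml-silero-v5.1.2.bin"

# Large queue (20 s) plus VAD segmentation: more accurate, higher latency.
af = (f"whisper=model={whisper_model}"
      f":vad_model={vad_model}"
      f":queue=20"
      f":destination=output.srt"
      f":format=srt")

cmd = ["ffmpeg", "-i", "input.mp4", "-vn", "-af", af, "-f", "null", "-"]
# subprocess.run(cmd, check=True)  # uncomment to actually run ffmpeg
print(" ".join(cmd))
```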

Example usage:
ffmpeg -i input.mp4 -vn -af "whisper=model=../whisper.cpp/models/ggml-base.en.bin\
:language=en\
:queue=3\
:destination=output.srt\
:format=srt" -f null -

FFmpeg whisper filter documentation: https://ayosec.github.io/ffmpeg-filters-docs/8.0/Filters/Audio/whisper.html

Another example:

ffmpeg -i H:\a.mp4 -vn -af "whisper=model=./models/ggml-medium.bin:language=auto:queue=3:destination=./output.srt:format=srt:vad_model=./models/ggml-silero-v5.1.2.bin:vad_threshold=0.3" -f null -

However, in testing (using the ggml-medium.bin model in both cases), the recognition quality was worse than first extracting the audio with ffmpeg into an MP3 file and then running whisper.cpp's whisper-cli.exe to generate the subtitle file. The method is as follows:

ffmpeg -i /path/to/video.mp4 -af aresample=async=1 -ar 16000 -ac 1 -c:a libmp3lame -loglevel fatal /path/to/audio.mp3

./whisper-cli.exe -l auto -osrt --vad --vad-threshold 0.3 --vad-model .\models\ggml-silero-v5.1.2.bin -m .\models\ggml-medium.bin H:\a.mp3
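The two-step flow above (extract audio with ffmpeg, then transcribe with whisper-cli) can be scripted. A minimal sketch, assuming ffmpeg and whisper-cli are on PATH and using placeholder file names; the model paths mirror the commands above:

```python
import subprocess

def extract_audio(video, audio):
    """Step 1: resample to 16 kHz mono MP3 for whisper-cli."""
    return ["ffmpeg", "-y", "-i", str(video),
            "-af", "aresample=async=1", "-ar", "16000", "-ac", "1",
            "-c:a", "libmp3lame", str(audio)]

def transcribe(audio, model, vad_model):
    """Step 2: run whisper-cli with VAD, writing an .srt next to the audio."""
    return ["whisper-cli", "-l", "auto", "-osrt",
            "--vad", "--vad-threshold", "0.3",
            "--vad-model", str(vad_model),
            "-m", str(model), str(audio)]

# Usage (uncomment to run; requires ffmpeg and whisper-cli on PATH):
# subprocess.run(extract_audio("a.mp4", "a.mp3"), check=True)
# subprocess.run(transcribe("a.mp3", "models/ggml-medium.bin",
#                           "models/ggml-silero-v5.1.2.bin"), check=True)
```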

MP3 format is recommended: transcripts generated from MP3 include punctuation, while those generated from WAV do not.
