Georgi Gerganov - ggml - llama.cpp - whisper.cpp

  • 1. Georgi Gerganov
    • 1.1. Projects
  • 2. `ggml`
  • 3. `llama.cpp`
  • 4. `whisper.cpp`

1. Georgi Gerganov

https://github.com/ggerganov
https://ggerganov.com/

ggml-org
https://github.com/ggml-org

GGML - AI at the edge
https://ggml.ai/

ggml.ai is a company founded by Georgi Gerganov to support the development of ggml.

ggml is a tensor library for machine learning that enables large models and high performance on commodity hardware. It is used by llama.cpp and whisper.cpp.

1.1. Projects

  • whisper.cpp

https://github.com/ggerganov/whisper.cpp

High-performance inference of OpenAI's Whisper automatic speech recognition model

The project provides a high-quality speech-to-text solution that runs on Mac, Windows, Linux, iOS, Android, Raspberry Pi, and the Web.

  • llama.cpp

https://github.com/ggerganov/llama.cpp

Inference of Meta's LLaMA model (and others) in pure C/C++

The project provides efficient inference across a wide range of hardware and serves as the foundation for numerous LLM-based applications.

2. ggml

https://github.com/ggerganov/ggml

Tensor library for machine learning

Some of the development is currently happening in the llama.cpp and whisper.cpp repos.

Those changes are periodically merged back into the ggml repo, appearing in its commit history under titles such as:

sync : llama.cpp
sync : whisper.cpp

3. llama.cpp

https://github.com/ggerganov/llama.cpp

Inference of Meta's LLaMA model (and others) in pure C/C++

The llama.cpp project is the main playground for developing new features for the ggml library.
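As a sketch of how the project is typically used, the quick start follows the CMake flow from the repository README (binary names and flags have changed across versions, so treat the exact commands as illustrative):

```shell
# clone and build llama.cpp (CPU build; backend-specific flags vary)
git clone https://github.com/ggerganov/llama.cpp
cd llama.cpp
cmake -B build
cmake --build build --config Release

# run inference with a model in GGUF format (model path is a placeholder)
./build/bin/llama-cli -m ./models/model.gguf -p "Hello"
```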

New ggml features developed here are then synced to the other repos, appearing in their commit histories as:

sync : ggml

4. whisper.cpp

https://github.com/ggerganov/whisper.cpp

High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model.

The entire high-level implementation of the model is contained in whisper.h and whisper.cpp. The rest of the code is part of the ggml machine learning library.
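A sketch of the typical quick start from the repository README (the model-download script and the CLI binary name have varied across versions, so the exact names are illustrative):

```shell
# clone and build whisper.cpp
git clone https://github.com/ggerganov/whisper.cpp
cd whisper.cpp
cmake -B build
cmake --build build --config Release

# download a Whisper model converted to ggml format, then transcribe a sample
sh ./models/download-ggml-model.sh base.en
./build/bin/whisper-cli -m models/ggml-base.en.bin -f samples/jfk.wav
```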

Updates to ggml are pulled in via sync commits, which appear in the history under titles such as:

sync : ggml
sync : ggml + llama.cpp
sync : ggml and llama.cpp

