Georgi Gerganov - ggml - llama.cpp - whisper.cpp

  • 1. Georgi Gerganov
    • 1.1. Projects
  • 2. ggml
  • 3. llama.cpp
  • 4. whisper.cpp

1. Georgi Gerganov

https://github.com/ggerganov
https://ggerganov.com/

ggml-org
https://github.com/ggml-org

GGML - AI at the edge
https://ggml.ai/

ggml.ai is a company founded by Georgi Gerganov to support the development of ggml.

ggml is a tensor library for machine learning that enables large models and high performance on commodity hardware. It is used by llama.cpp and whisper.cpp.

1.1. Projects

  • whisper.cpp

https://github.com/ggerganov/whisper.cpp

High-performance inference of OpenAI's Whisper automatic speech recognition model

The project provides a high-quality speech-to-text solution that runs on Mac, Windows, Linux, iOS, Android, Raspberry Pi, and the Web.

  • llama.cpp

https://github.com/ggerganov/llama.cpp

Inference of Meta's LLaMA model (and others) in pure C/C++

The project provides efficient inference across a wide range of hardware and serves as the foundation for numerous LLM-based applications.

2. ggml

https://github.com/ggerganov/ggml

Tensor library for machine learning

Some of the development is currently happening in the llama.cpp and whisper.cpp repos.

Changes are periodically synced back to the ggml repo with commits such as:

sync : llama.cpp
sync : whisper.cpp
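
To get a feel for what the library provides, here is a minimal, hedged sketch of the ggml C API: it defines two small tensors, records an element-wise expression into a compute graph, and evaluates it on the CPU. It assumes a recent ggml.h; helper names such as ggml_graph_compute_with_ctx have shifted between revisions, so treat it as illustrative rather than definitive.

```c
// Minimal ggml sketch: build and evaluate a small compute graph on the CPU.
// Assumes a recent ggml.h; exact helper names may differ between revisions.
#include "ggml.h"
#include <stdio.h>

int main(void) {
    // Allocate a single working buffer for tensor data and graph metadata
    // (16 MB is an arbitrary, generous size for this toy example).
    struct ggml_init_params params = {
        /*.mem_size   =*/ 16 * 1024 * 1024,
        /*.mem_buffer =*/ NULL,
        /*.no_alloc   =*/ false,
    };
    struct ggml_context * ctx = ggml_init(params);

    // x and y are 1-D float tensors; z = x * y + y is only recorded here,
    // not computed yet (ggml ops are lazy until the graph is evaluated).
    struct ggml_tensor * x = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    struct ggml_tensor * y = ggml_new_tensor_1d(ctx, GGML_TYPE_F32, 4);
    ggml_set_f32(x, 2.0f);
    ggml_set_f32(y, 3.0f);

    struct ggml_tensor * z = ggml_add(ctx, ggml_mul(ctx, x, y), y);

    // Build the forward graph ending at z and compute it with 4 threads.
    struct ggml_cgraph * gf = ggml_new_graph(ctx);
    ggml_build_forward_expand(gf, z);
    ggml_graph_compute_with_ctx(ctx, gf, 4);

    printf("z[0] = %f\n", ggml_get_f32_1d(z, 0)); // expect 9.0

    ggml_free(ctx);
    return 0;
}
```

The deferred-graph style shown here (describe the graph first, then compute it) is the same pattern llama.cpp and whisper.cpp use to evaluate their transformer graphs.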

3. llama.cpp

https://github.com/ggerganov/llama.cpp

Inference of Meta's LLaMA model (and others) in pure C/C++

The llama.cpp project is the main playground for developing new features for the ggml library.

Upstream ggml changes are pulled into llama.cpp with commits such as:

sync : ggml
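
As a rough illustration of how applications build on llama.cpp, the hedged sketch below loads a GGUF model through the llama.h C API and prints the context size. The function names are assumptions based on recent releases (the loading and context functions have been renamed more than once, e.g. llama_load_model_from_file became llama_model_load_from_file), so check llama.h for the exact spelling in the version you build against.

```c
// Hedged sketch: load a GGUF model with the llama.h C API (names per recent releases).
#include "llama.h"
#include <stdio.h>

int main(int argc, char ** argv) {
    if (argc < 2) {
        fprintf(stderr, "usage: %s <model.gguf>\n", argv[0]);
        return 1;
    }

    llama_backend_init(); // initialize the ggml backends

    // Load model weights from a GGUF file with default parameters.
    struct llama_model_params mparams = llama_model_default_params();
    struct llama_model * model = llama_model_load_from_file(argv[1], mparams);
    if (model == NULL) {
        fprintf(stderr, "failed to load %s\n", argv[1]);
        return 1;
    }

    // Create an inference context (KV cache, compute buffers, ...).
    struct llama_context_params cparams = llama_context_default_params();
    struct llama_context * ctx = llama_init_from_model(model, cparams);

    printf("model loaded, n_ctx = %u\n", llama_n_ctx(ctx));

    // Tokenization, decoding, and sampling would follow here
    // (see the examples/ directory in the repo for complete programs).
    llama_free(ctx);
    llama_model_free(model);
    llama_backend_free();
    return 0;
}
```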

4. whisper.cpp

https://github.com/ggerganov/whisper.cpp

High-performance inference of OpenAI's Whisper automatic speech recognition (ASR) model.

The entire high-level implementation of the model is contained in whisper.h and whisper.cpp. The rest of the code is part of the ggml machine learning library.

Upstream changes are pulled into whisper.cpp with commits such as:

sync : ggml
sync : ggml + llama.cpp
sync : ggml and llama.cpp
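
Since the whole high-level model sits behind whisper.h, a typical integration is only a few calls. The sketch below is a hedged outline of that flow, assuming a recent whisper.h (the model path and the silent PCM buffer are placeholders): load a model, run whisper_full() on 16 kHz mono float samples, then read back the decoded segments.

```c
// Hedged sketch: transcribe 16 kHz mono float PCM with the whisper.h C API.
// Assumes a recent whisper.h; parameter names may differ between releases.
#include "whisper.h"
#include <stdio.h>

int main(void) {
    // Load a ggml Whisper model (the path is illustrative).
    struct whisper_context_params cparams = whisper_context_default_params();
    struct whisper_context * ctx =
        whisper_init_from_file_with_params("ggml-base.en.bin", cparams);
    if (ctx == NULL) {
        fprintf(stderr, "failed to load model\n");
        return 1;
    }

    // The PCM would normally come from a WAV file or microphone;
    // here it is a dummy buffer: 1 second of silence at 16 kHz.
    static float pcm[16000];

    // Run the full encoder/decoder pipeline with greedy sampling.
    struct whisper_full_params wparams = whisper_full_default_params(WHISPER_SAMPLING_GREEDY);
    if (whisper_full(ctx, wparams, pcm, 16000) != 0) {
        fprintf(stderr, "whisper_full() failed\n");
        return 1;
    }

    // Print the decoded text segments.
    const int n = whisper_full_n_segments(ctx);
    for (int i = 0; i < n; ++i) {
        printf("%s\n", whisper_full_get_segment_text(ctx, i));
    }

    whisper_free(ctx);
    return 0;
}
```

The examples shipped in the repository follow the same pattern, reading real audio from WAV files or the microphone instead of a dummy buffer.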

