Building llama.cpp from source for Android

I already have prebuilt binaries; you can download and use them directly:

https://github.com/turingevo/llama.cpp-build/releases/tag/b4331

Prepare the Android NDK

Already downloaded to:

/media/wmx/ws1/software/qtAndroid/Sdk/ndk/23.1.7779620
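
If the NDK is not installed yet, it can be fetched with the Android SDK's sdkmanager. A minimal sketch, assuming the SDK command-line tools are already on PATH (the resulting install path will differ from the one above depending on your SDK root):

bash
# Install the same NDK version used in this article via sdkmanager
sdkmanager --install "ndk;23.1.7779620"

# The NDK then lives under <sdk-root>/ndk/23.1.7779620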

Version: llama.cpp-b4331

Download the source code

Change into the llama.cpp/ directory.
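
If you do not have the source yet, a minimal sketch of cloning it, entering the directory, and pinning the b4331 tag used in this article (the official ggerganov/llama.cpp repository is assumed):

bash
# Clone llama.cpp and check out the b4331 release tag
git clone https://github.com/ggerganov/llama.cpp.git
cd llama.cpp
git checkout b4331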

Build script: llama.cpp/build-android.sh

bash
#!/bin/bash

# Path to the Android NDK and the build/output locations
ANDROID_NDK_PATH=/media/wmx/ws1/software/qtAndroid/Sdk/ndk/23.1.7779620
build_dir=build-android
src_dir=.
install_dir=bin/android

# Configure with the NDK's CMake toolchain file, targeting arm64-v8a / API 28.
# OpenMP and llamafile are disabled for the Android build.
cmake \
  -DCMAKE_TOOLCHAIN_FILE=${ANDROID_NDK_PATH}/build/cmake/android.toolchain.cmake \
  -DANDROID_ABI=arm64-v8a \
  -DANDROID_PLATFORM=android-28 \
  -DCMAKE_C_FLAGS="-march=armv8.7a" \
  -DCMAKE_CXX_FLAGS="-march=armv8.7a" \
  -DGGML_OPENMP=OFF \
  -DGGML_LLAMAFILE=OFF \
  -B ${build_dir} \
  -S ${src_dir}

# Build in Release mode
cmake --build ${build_dir} --config Release -j48

# Install the binaries and shared libraries into bin/android
cmake --install ${build_dir} --prefix ${install_dir} --config Release
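
After the install step, the executables and shared libraries land under bin/android (the install_dir set above). A quick sanity check from the host; the exact file list depends on the build options, so the names in the comments are indicative only:

bash
# Quick sanity check of the install tree
ls bin/android/bin    # llama-simple, llama-cli, llama-bench, ...
ls bin/android/lib    # libllama.so, libggml*.so, ...

# Confirm the binaries are aarch64 ELF files built for Android
file bin/android/bin/llama-simple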

Push to an Android device and test

Below are the test results on a Huawei Mate 40 Pro.

The build output is under llama.cpp/bin/android:

shell
adb shell "mkdir /data/local/tmp/llama.cpp"
adb push bin/android /data/local/tmp/llama.cpp/
adb push qwen2.5-0.5b-instruct-q4_k_m.gguf /data/local/tmp/llama.cpp/


adb shell
cd /data/local/tmp/llama.cpp/android

# Create a small wrapper script; the model was pushed one directory up,
# so it is referenced as ../qwen2.5-0.5b-instruct-q4_k_m.gguf
# (the prompt "你是谁?" means "Who are you?")
touch test.sh
chmod a+x test.sh
echo "LD_LIBRARY_PATH=lib ./bin/llama-simple -m ../qwen2.5-0.5b-instruct-q4_k_m.gguf -p \"你是谁?\"" > test.sh

./test.sh
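
test.sh drives the minimal llama-simple example. The full-featured llama-cli binary is installed alongside it, so an equivalent run could look like the sketch below (flag values are illustrative); the captured log that follows is from the llama-simple run via test.sh:

shell
# Same model, run through llama-cli with an explicit generation limit
LD_LIBRARY_PATH=lib ./bin/llama-cli -m ../qwen2.5-0.5b-instruct-q4_k_m.gguf -p "你是谁?" -n 64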


HWNOH:/data/local/tmp/llama.cpp/android $ ./test.sh                                                                                           
llama_model_loader: loaded meta data with 26 key-value pairs and 291 tensors from /sdcard/a-wmx/models/qwen2.5-0.5b-instruct-q4_k_m.gguf (version GGUF V3 (latest))
llama_model_loader: Dumping metadata keys/values. Note: KV overrides do not apply in this output.
llama_model_loader: - kv   0:                       general.architecture str              = qwen2
llama_model_loader: - kv   1:                               general.type str              = model
llama_model_loader: - kv   2:                               general.name str              = qwen2.5-0.5b-instruct
llama_model_loader: - kv   3:                            general.version str              = v0.1
llama_model_loader: - kv   4:                           general.finetune str              = qwen2.5-0.5b-instruct
llama_model_loader: - kv   5:                         general.size_label str              = 630M
llama_model_loader: - kv   6:                          qwen2.block_count u32              = 24
llama_model_loader: - kv   7:                       qwen2.context_length u32              = 32768
llama_model_loader: - kv   8:                     qwen2.embedding_length u32              = 896
llama_model_loader: - kv   9:                  qwen2.feed_forward_length u32              = 4864
llama_model_loader: - kv  10:                 qwen2.attention.head_count u32              = 14
llama_model_loader: - kv  11:              qwen2.attention.head_count_kv u32              = 2
llama_model_loader: - kv  12:                       qwen2.rope.freq_base f32              = 1000000.000000
llama_model_loader: - kv  13:     qwen2.attention.layer_norm_rms_epsilon f32              = 0.000001
llama_model_loader: - kv  14:                          general.file_type u32              = 15
llama_model_loader: - kv  15:                       tokenizer.ggml.model str              = gpt2
llama_model_loader: - kv  16:                         tokenizer.ggml.pre str              = qwen2
llama_model_loader: - kv  17:                      tokenizer.ggml.tokens arr[str,151936]  = ["!", "\"", "#", "$", "%", "&", "'", ...
llama_model_loader: - kv  18:                  tokenizer.ggml.token_type arr[i32,151936]  = [1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, 1, ...
llama_model_loader: - kv  19:                      tokenizer.ggml.merges arr[str,151387]  = ["Ġ Ġ", "ĠĠ ĠĠ", "i n", "Ġ t",...
llama_model_loader: - kv  20:                tokenizer.ggml.eos_token_id u32              = 151645
llama_model_loader: - kv  21:            tokenizer.ggml.padding_token_id u32              = 151643
llama_model_loader: - kv  22:                tokenizer.ggml.bos_token_id u32              = 151643
llama_model_loader: - kv  23:               tokenizer.ggml.add_bos_token bool             = false
llama_model_loader: - kv  24:                    tokenizer.chat_template str              = {%- if tools %}\n    {{- '<|im_start|>...
llama_model_loader: - kv  25:               general.quantization_version u32              = 2
llama_model_loader: - type  f32:  121 tensors
llama_model_loader: - type q5_0:  133 tensors
llama_model_loader: - type q8_0:   13 tensors
llama_model_loader: - type q4_K:   12 tensors
llama_model_loader: - type q6_K:   12 tensors
llm_load_vocab: control token: 151659 '<|fim_prefix|>' is not marked as EOG
llm_load_vocab: control token: 151656 '<|video_pad|>' is not marked as EOG
llm_load_vocab: control token: 151655 '<|image_pad|>' is not marked as EOG
llm_load_vocab: control token: 151653 '<|vision_end|>' is not marked as EOG
llm_load_vocab: control token: 151652 '<|vision_start|>' is not marked as EOG
llm_load_vocab: control token: 151651 '<|quad_end|>' is not marked as EOG
llm_load_vocab: control token: 151649 '<|box_end|>' is not marked as EOG
llm_load_vocab: control token: 151648 '<|box_start|>' is not marked as EOG
llm_load_vocab: control token: 151646 '<|object_ref_start|>' is not marked as EOG
llm_load_vocab: control token: 151644 '<|im_start|>' is not marked as EOG
llm_load_vocab: control token: 151661 '<|fim_suffix|>' is not marked as EOG
llm_load_vocab: control token: 151647 '<|object_ref_end|>' is not marked as EOG
llm_load_vocab: control token: 151660 '<|fim_middle|>' is not marked as EOG
llm_load_vocab: control token: 151654 '<|vision_pad|>' is not marked as EOG
llm_load_vocab: control token: 151650 '<|quad_start|>' is not marked as EOG
llm_load_vocab: special tokens cache size = 22
llm_load_vocab: token to piece cache size = 0.9310 MB
llm_load_print_meta: format           = GGUF V3 (latest)
llm_load_print_meta: arch             = qwen2
llm_load_print_meta: vocab type       = BPE
llm_load_print_meta: n_vocab          = 151936
llm_load_print_meta: n_merges         = 151387
llm_load_print_meta: vocab_only       = 0
llm_load_print_meta: n_ctx_train      = 32768
llm_load_print_meta: n_embd           = 896
llm_load_print_meta: n_layer          = 24
llm_load_print_meta: n_head           = 14
llm_load_print_meta: n_head_kv        = 2
llm_load_print_meta: n_rot            = 64
llm_load_print_meta: n_swa            = 0
llm_load_print_meta: n_embd_head_k    = 64
llm_load_print_meta: n_embd_head_v    = 64
llm_load_print_meta: n_gqa            = 7
llm_load_print_meta: n_embd_k_gqa     = 128
llm_load_print_meta: n_embd_v_gqa     = 128
llm_load_print_meta: f_norm_eps       = 0.0e+00
llm_load_print_meta: f_norm_rms_eps   = 1.0e-06
llm_load_print_meta: f_clamp_kqv      = 0.0e+00
llm_load_print_meta: f_max_alibi_bias = 0.0e+00
llm_load_print_meta: f_logit_scale    = 0.0e+00
llm_load_print_meta: n_ff             = 4864
llm_load_print_meta: n_expert         = 0
llm_load_print_meta: n_expert_used    = 0
llm_load_print_meta: causal attn      = 1
llm_load_print_meta: pooling type     = 0
llm_load_print_meta: rope type        = 2
llm_load_print_meta: rope scaling     = linear
llm_load_print_meta: freq_base_train  = 1000000.0
llm_load_print_meta: freq_scale_train = 1
llm_load_print_meta: n_ctx_orig_yarn  = 32768
llm_load_print_meta: rope_finetuned   = unknown
llm_load_print_meta: ssm_d_conv       = 0
llm_load_print_meta: ssm_d_inner      = 0
llm_load_print_meta: ssm_d_state      = 0
llm_load_print_meta: ssm_dt_rank      = 0
llm_load_print_meta: ssm_dt_b_c_rms   = 0
llm_load_print_meta: model type       = 1B
llm_load_print_meta: model ftype      = Q4_K - Medium
llm_load_print_meta: model params     = 630.17 M
llm_load_print_meta: model size       = 462.96 MiB (6.16 BPW) 
llm_load_print_meta: general.name     = qwen2.5-0.5b-instruct
llm_load_print_meta: BOS token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOS token        = 151645 '<|im_end|>'
llm_load_print_meta: EOT token        = 151645 '<|im_end|>'
llm_load_print_meta: PAD token        = 151643 '<|endoftext|>'
llm_load_print_meta: LF token         = 148848 'ÄĬ'
llm_load_print_meta: FIM PRE token    = 151659 '<|fim_prefix|>'
llm_load_print_meta: FIM SUF token    = 151661 '<|fim_suffix|>'
llm_load_print_meta: FIM MID token    = 151660 '<|fim_middle|>'
llm_load_print_meta: FIM PAD token    = 151662 '<|fim_pad|>'
llm_load_print_meta: FIM REP token    = 151663 '<|repo_name|>'
llm_load_print_meta: FIM SEP token    = 151664 '<|file_sep|>'
llm_load_print_meta: EOG token        = 151643 '<|endoftext|>'
llm_load_print_meta: EOG token        = 151645 '<|im_end|>'
llm_load_print_meta: EOG token        = 151662 '<|fim_pad|>'
llm_load_print_meta: EOG token        = 151663 '<|repo_name|>'
llm_load_print_meta: EOG token        = 151664 '<|file_sep|>'
llm_load_print_meta: max token length = 256
llm_load_tensors: tensor 'token_embd.weight' (q5_0) (and 290 others) cannot be used with preferred buffer type CPU_AARCH64, using CPU instead
llm_load_tensors:   CPU_Mapped model buffer size =   462.96 MiB
.....................................................
llama_new_context_with_model: n_batch is less than GGML_KQ_MASK_PAD - increasing to 32
llama_new_context_with_model: n_seq_max     = 1
llama_new_context_with_model: n_ctx         = 64
llama_new_context_with_model: n_ctx_per_seq = 64
llama_new_context_with_model: n_batch       = 32
llama_new_context_with_model: n_ubatch      = 32
llama_new_context_with_model: flash_attn    = 0
llama_new_context_with_model: freq_base     = 1000000.0
llama_new_context_with_model: freq_scale    = 1
llama_new_context_with_model: n_ctx_per_seq (64) < n_ctx_train (32768) -- the full capacity of the model will not be utilized
llama_kv_cache_init:        CPU KV buffer size =     0.75 MiB
llama_new_context_with_model: KV self size  =    0.75 MiB, K (f16):    0.38 MiB, V (f16):    0.38 MiB
llama_new_context_with_model:        CPU  output buffer size =     0.58 MiB
llama_new_context_with_model:        CPU compute buffer size =    18.66 MiB
llama_new_context_with_model: graph nodes  = 846
llama_new_context_with_model: graph splits = 1
-p 你是谁?我是阿里云开发的超大规模语言模型,我叫通义千问。通义是"通义天下",千问是"千问天下
main: decoded 32 tokens in 2.21 s, speed: 14.49 t/s

llama_perf_sampler_print:    sampling time =       5.69 ms /    32 runs   (    0.18 ms per token,  5622.91 tokens per second)
llama_perf_context_print:        load time =    1907.15 ms
llama_perf_context_print: prompt eval time =     165.11 ms /     5 tokens (   33.02 ms per token,    30.28 tokens per second)
llama_perf_context_print:        eval time =    2000.08 ms /    31 runs   (   64.52 ms per token,    15.50 tokens per second)
llama_perf_context_print:       total time =    3950.19 ms /    36 tokens
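
llama-simple reports roughly 30 t/s prompt processing and 15 t/s generation on the Mate 40 Pro's CPU. For more systematic numbers, the installed llama-bench tool can be run the same way; a sketch, assuming the same directory layout on the device:

shell
# Measure prompt-processing and generation throughput on the device
LD_LIBRARY_PATH=lib ./bin/llama-bench -m ../qwen2.5-0.5b-instruct-q4_k_m.gguf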