A. Large Model Resources
B. Tutorials
HuggingFace
- HuggingFace quick start in 10 minutes (1): exploring AI with Transformers and the Pipeline API (bilibili; a minimal pipeline sketch follows this list)
- HuggingFace quick start (2): quickly building an app with an AI model (bilibili)
- HuggingFace quick start (3): quickly building an AI agent with HF Agents (bilibili)
- HuggingFace quick start (4): playing with the models on HF (bilibili)
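As a companion to the first tutorial above, here is a minimal sketch of the Transformers pipeline API; the default sentiment-analysis checkpoint is downloaded automatically on first use (see the HF-Mirror section below if direct access to huggingface.co is slow):

```python
# Minimal sketch: a one-line Transformers pipeline for sentiment analysis.
from transformers import pipeline

classifier = pipeline("sentiment-analysis")  # downloads a default checkpoint on first use
print(classifier("HuggingFace pipelines make quick prototyping easy."))
# e.g. [{'label': 'POSITIVE', 'score': 0.99...}]
```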
HF-Mirror (hf-mirror.com)
```bash
huggingface-cli download --resume-download mistralai/Mistral-7B-Instruct-v0.2 --local-dir Mistral-7B-Instruct-v0.2 --local-dir-use-symlinks False
huggingface-cli download --resume-download google-t5/t5-small --local-dir t5-small --local-dir-use-symlinks False
huggingface-cli download --resume-download openai/whisper-large-v3 --local-dir whisper-large-v3 --local-dir-use-symlinks False
huggingface-cli download --resume-download openai/clip-vit-base-patch32 --local-dir clip-vit-base-patch32 --local-dir-use-symlinks False
huggingface-cli download --resume-download openai/clip-vit-large-patch14 --local-dir clip-vit-large-patch14 --local-dir-use-symlinks False
huggingface-cli download --resume-download keremberke/yolov8m-table-extraction --local-dir yolov8m-table-extraction --local-dir-use-symlinks False
huggingface-cli download --resume-download merve/yolov9 --local-dir yolov9 --local-dir-use-symlinks False
huggingface-cli download --resume-download stabilityai/stable-code-instruct-3b --local-dir stable-code-instruct-3b --local-dir-use-symlinks False
huggingface-cli download --resume-download stabilityai/stable-code-3b --local-dir stable-code-3b --local-dir-use-symlinks False
huggingface-cli download --resume-download defog/sqlcoder-7b-2 --local-dir sqlcoder-7b-2 --local-dir-use-symlinks False
```
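To route the CLI calls above through HF-Mirror, the `HF_ENDPOINT` environment variable needs to point at the mirror (e.g. `export HF_ENDPOINT=https://hf-mirror.com`). A minimal Python equivalent of one of the calls, assuming huggingface_hub is installed; in recent huggingface_hub releases the resume/symlink arguments are deprecated no-ops and can be dropped:

```python
# Minimal sketch: download through hf-mirror.com from Python instead of the CLI.
# HF_ENDPOINT must be set before huggingface_hub is imported.
import os
os.environ["HF_ENDPOINT"] = "https://hf-mirror.com"

from huggingface_hub import snapshot_download

snapshot_download(
    repo_id="openai/whisper-large-v3",
    local_dir="whisper-large-v3",
    resume_download=True,           # mirrors --resume-download
    local_dir_use_symlinks=False,   # mirrors --local-dir-use-symlinks False
)
```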
ModelScope (魔搭社区)
```python
from modelscope.hub.snapshot_download import snapshot_download
model_dir = snapshot_download('iic/speech_fsmn_vad_zh-cn-16k-common-pytorch', cache_dir='speech_fsmn_vad_zh-cn-16k-common-pytorch')
model_dir = snapshot_download('qwen/Qwen1.5-MoE-A2.7B', cache_dir='Qwen1.5-MoE-A2.7B')
model_dir = snapshot_download('iic/speech_eres2net_large_sv_zh-cn_cnceleb_16k', cache_dir='speech_eres2net_large_sv_zh-cn_cnceleb_16k')
model_dir = snapshot_download('iic/cv_ddsar_face-detection_iclr23-damofd', cache_dir='cv_ddsar_face-detection_iclr23-damofd')
model_dir = snapshot_download('iic/Whisper-large-v3', cache_dir='Whisper-large-v3-iic')
model_dir = snapshot_download('iic/nlp_bart_text-error-correction_chinese', cache_dir='nlp_bart_text-error-correction_chinese')
model_dir = snapshot_download('iic/nlp_bart_text-error-correction_chinese-law', cache_dir='nlp_bart_text-error-correction_chinese-law')
model_dir = snapshot_download('iic/speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch', cache_dir='speech_paraformer-large_asr_nat-zh-cn-16k-common-vocab8404-pytorch')
model_dir = snapshot_download('iic/cv_dla34_table-structure-recognition_cycle-centernet', cache_dir='cv_dla34_table-structure-recognition_cycle-centernet')
model_dir = snapshot_download('iic/cv_resnet-transformer_table-structure-recognition_lore', cache_dir='cv_resnet-transformer_table-structure-recognition_lore')
model_dir = snapshot_download('iic/cv_convnextTiny_ocr-recognition-general_damo', cache_dir='cv_convnextTiny_ocr-recognition-general_damo')
model_dir = snapshot_download('iic/cv_resnet18_ocr-detection-db-line-level_damo', cache_dir='cv_resnet18_ocr-detection-db-line-level_damo')
model_dir = snapshot_download('iic/cv_convnextTiny_ocr-recognition-document_damo', cache_dir='cv_convnextTiny_ocr-recognition-document_damo')
model_dir = snapshot_download('iic/cv_resnet18_ocr-detection-line-level_damo', cache_dir='cv_resnet18_ocr-detection-line-level_damo')
```
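The directory returned by `snapshot_download` can be fed straight into a ModelScope pipeline. A minimal sketch for the text-detection model above, assuming modelscope with its CV dependencies is installed and a local test image exists; the exact result keys may vary across modelscope releases:

```python
# Minimal sketch: run the downloaded OCR text-detection model through a ModelScope pipeline.
from modelscope.hub.snapshot_download import snapshot_download
from modelscope.pipelines import pipeline
from modelscope.utils.constant import Tasks

# Same call as in the block above; returns the local model directory.
model_dir = snapshot_download(
    'iic/cv_resnet18_ocr-detection-db-line-level_damo',
    cache_dir='cv_resnet18_ocr-detection-db-line-level_damo',
)

ocr_detection = pipeline(Tasks.ocr_detection, model=model_dir)
result = ocr_detection('example_table.jpg')  # hypothetical local image file
print(result)  # typically a dict with detected text-line polygons
```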
C. Deployment
- One-click deployment of Google's open-source Gemma, with performance well ahead of Mistral and LLaMA 2 | local LLM deployment made easy with Ollama (bilibili)
- The simplest steps to run Gemma locally on Windows (bilibili)
- How to deploy the Gemma model to the D: drive with Ollama (bilibili)
- How to deploy the Gemma model to the D: drive with Ollama, method 2: create a directory junction to free up C: drive space (bilibili)
- How to use an existing GGUF file with Ollama (bilibili)
- Importing a GGUF file into Ollama (bilibili)
- Ollama wsarecv: An existing connection was forcibly closed by the remote host. (bilibili)
- Analysis of potential issues with OLLAMA_KEEP_ALIVE (bilibili)
- Running GGUF models with Ollama
- LangChain and Ollama hand in hand: a Python "Hello World" demo for building your own LLM application (a minimal sketch follows this list)
- Large AI Model Series (7): create your own custom Copilot in VS Code with Ollama and CodeGPT
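The LangChain + Ollama "Hello World" item above boils down to a few lines of Python. A minimal sketch, assuming langchain-community is installed, a local `ollama serve` is listening on the default port 11434, and the example model gemma:2b has already been pulled:

```python
# Minimal sketch: "Hello World" against a local Ollama server via LangChain.
# Assumes `ollama serve` is running and the model has been pulled,
# e.g. `ollama pull gemma:2b` (the model name is just an example).
from langchain_community.llms import Ollama

llm = Ollama(model="gemma:2b")  # talks to http://localhost:11434 by default
print(llm.invoke("Hello, world! Introduce yourself in one sentence."))
```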
D. Format Conversion
E. Code Generation
- LLM Series | 21: Code Llama in practice (part 1): model overview and trial run
- LLM Series | 22: Code Llama in practice (part 2): local deployment, quantization, and comparison with GPT-4
- Stability AI open-sources a 3B code-generation model: code completion plus debugging (a minimal sketch follows this list)
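For the 3B code models above (stable-code-3b is also in the download list in section B), plain left-to-right completion is the quickest smoke test. A minimal sketch with transformers, assuming a release with StableLM support, accelerate installed for `device_map`, and enough GPU memory for fp16:

```python
# Minimal sketch: code completion with the base stable-code-3b model.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "stabilityai/stable-code-3b"  # or the local directory downloaded in section B
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype=torch.float16, device_map="auto"
)

prompt = "def fibonacci(n):\n"
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=False)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```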
F. YOLOv9
G. SAM
H. Qwen1.5-MoE
Official GitHub
- GitHub - QwenLM/Qwen1.5: Qwen1.5 is the improved version of Qwen, the large language model series developed by Qwen team, Alibaba Cloud.
Note: the GPTQ version failed to run.
- Qwen1.5-MoE: matching 7B-model performance with 1/3 of the activated parameters (bilibili)
Code for the corresponding Chat version (see the sketch at the end of this section)
- Qwen1.5-MoE: matching 7B-model performance with 1/3 of the activated parameters (bilibili article)
WeChat material
- Another open-source MoE joins the lineup: the Tongyi team's Qwen1.5-MoE-A2.7B model
How to choose a model
- How to choose among the six Qwen1.5 models? AWQ or GPTQ? #小工蚁 (bilibili)
Accelerated inference with vLLM across different GPU models in the same machine
- Deploying the Qwen-72B model on dual 4090s at 150 tokens per second (bilibili)
Qwen-Agent and the browser extension
- GitHub - QwenLM/Qwen-Agent: Agent framework and applications built upon Qwen1.5, featuring Function Calling, Code Interpreter, RAG, and Chrome extension.
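A minimal sketch of the Chat-version code referenced above, assuming the Hugging Face repo id Qwen/Qwen1.5-MoE-A2.7B-Chat (the ModelScope call in section B fetches the base model) and a transformers release that ships the Qwen2-MoE architecture (roughly 4.40+):

```python
# Minimal sketch: chat with Qwen1.5-MoE-A2.7B-Chat via transformers.
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "Qwen/Qwen1.5-MoE-A2.7B-Chat"  # or a local download directory
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id, torch_dtype="auto", device_map="auto"
)

messages = [{"role": "user", "content": "Give me a short introduction to MoE models."}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=256)
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```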