[Personal Development] llama2 Deployment in Practice (Part 4): How to Call the llama Service API

1. HTTP API call

python
import requests

# OpenAI-compatible chat completions endpoint exposed by the local server
url = 'http://localhost:8000/v1/chat/completions'
headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json'
}
data = {
    'messages': [
        {
            'content': 'You are a helpful assistant.',
            'role': 'system'
        },
        {
            'content': 'What is the capital of France?',
            'role': 'user'
        }
    ]
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
# the generated text sits in the first choice's message
print(response.json()['choices'][0]['message']['content'])

response.json() returns the following:

json
{'id': 'chatcmpl-b9ebe8c9-c785-4e5e-b214-bf7aeee879c3', 'object': 'chat.completion', 'created': 1710042123, 'model': '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin', 'choices': [{'index': 0, 'message': {'content': '\nWhat is the capital of France?\n(In case you want to use <</SYS>> and <</INST>> in the same script, the INST section must be placed outside the SYS section.)\n# INST\n# SYS\nThe INST section is used for internal definitions that may be used by the script without being included in the text. You can define variables or constants here. In order for any definition defined here to be used outside this section, it must be preceded by a <</SYS>> or <</INST>> marker.\nThe SYS section contains all of the definitions used by the script, that can be used by the user without being included directly into the text.', 'role': 'assistant'}, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 33, 'completion_tokens': 147, 'total_tokens': 180}}
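
The same endpoint can also stream the answer. Below is a minimal sketch with requests, assuming the server follows the usual OpenAI-compatible convention: "stream": true in the request body, and the response delivered as server-sent events ("data: ..." lines, terminated by "data: [DONE]").

python
import json
import requests

url = 'http://localhost:8000/v1/chat/completions'
headers = {'accept': 'application/json', 'Content-Type': 'application/json'}
data = {
    'messages': [
        {'content': 'You are a helpful assistant.', 'role': 'system'},
        {'content': 'What is the capital of France?', 'role': 'user'}
    ],
    'stream': True  # ask the server to stream the answer token by token
}

with requests.post(url, headers=headers, json=data, stream=True) as response:
    for line in response.iter_lines():
        if not line:
            continue  # skip blank keep-alive lines between events
        payload = line.decode('utf-8')
        if not payload.startswith('data: '):
            continue
        payload = payload[len('data: '):]
        if payload.strip() == '[DONE]':
            break  # end-of-stream sentinel
        chunk = json.loads(payload)
        # each chunk carries an incremental delta, like the openai client chunks later on
        delta = chunk['choices'][0].get('delta', {})
        print(delta.get('content', ''), end='', flush=True)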

2. Calling via llama_cpp

python
from llama_cpp import Llama

model_path = '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin'
# n_ctx sets the context window size; n_gpu_layers offloads that many layers to the GPU
llm = Llama(model_path=model_path, verbose=False, n_ctx=2048, n_gpu_layers=30)
# plain text completion
print(llm('how old are you?'))
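
Besides plain text completion, llama_cpp also provides a chat-style call, create_chat_completion, which accepts the same message list as the HTTP endpoint above. A minimal sketch (max_tokens is just an illustrative value):

python
from llama_cpp import Llama

model_path = '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin'
llm = Llama(model_path=model_path, verbose=False, n_ctx=2048, n_gpu_layers=30)

# returns an OpenAI-style dict with choices / message / usage fields
output = llm.create_chat_completion(
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is the capital of France?'}
    ],
    max_tokens=128
)
print(output['choices'][0]['message']['content'])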

3. Calling via langchain

python
from langchain.llms.llamacpp import LlamaCpp

model_path = '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin'
llm = LlamaCpp(model_path=model_path, verbose=False)
# stream() yields the generated text piece by piece
for s in llm.stream("write me a poem!"):
    print(s, end="", flush=True)
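
To plug the model into a larger pipeline, the same LlamaCpp instance can be composed with a prompt template. A minimal sketch, assuming a langchain version that supports the LCEL pipe syntax (the template text and topic are illustrative):

python
from langchain.prompts import PromptTemplate
from langchain.llms.llamacpp import LlamaCpp

model_path = '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin'
llm = LlamaCpp(model_path=model_path, verbose=False)

# the chain fills in the template variable, then passes the rendered prompt to the model
prompt = PromptTemplate.from_template("Write me a short poem about {topic}.")
chain = prompt | llm

print(chain.invoke({"topic": "the sea"}))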

4. Calling via the openai client

shell
# the openai package version must be greater than 1.0
pip3 install openai

Code demo:

python
from openai import OpenAI

# point the client at the local server; the api_key is not checked but must be non-empty
client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="none"
)

prompt_list = [
    {
        'content': 'You are a helpful assistant.',
        'role': 'system'
    },
    {
        'content': 'What is the capital of France?',
        'role': 'user'
    }
]

chat_completion = client.chat.completions.create(
    messages=prompt_list,
    model="llama2-7b",
    stream=True
)

# with stream=True the call returns an iterator of chunks; the final chunk's
# delta.content is None, so check for that before printing
for chunk in chat_completion:
    content = chunk.choices[0].delta.content
    if content is not None:
        print(content, end='', flush=True)
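
For a non-streaming call, drop stream=True and read the complete message from the returned object. A minimal variant of the demo above, against the same local server and model name:

python
from openai import OpenAI

client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="none"
)

# without stream=True the server returns one complete response object
chat_completion = client.chat.completions.create(
    messages=[
        {'content': 'You are a helpful assistant.', 'role': 'system'},
        {'content': 'What is the capital of France?', 'role': 'user'}
    ],
    model="llama2-7b"
)
print(chat_completion.choices[0].message.content)
print(chat_completion.usage)  # prompt / completion / total token counts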

If you are on an openai version below 1.0:

python
import openai

openai.api_base = "xxxxxxx"  # base url of your service, e.g. the local server's /v1 address
openai.api_key = "xxxxxxx"

# prompt, model and if_stream play the same roles as prompt_list,
# "llama2-7b" and stream=True in the >=1.0 demo above
iterator = openai.ChatCompletion.create(
    messages=prompt,
    model=model,
    stream=if_stream,
)
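
With the pre-1.0 SDK, streamed chunks behave like plain dicts, so the incremental text is read with dictionary access rather than attributes. A minimal sketch for consuming the iterator above, assuming stream was set to True:

python
# each chunk exposes the incremental text under choices[0]['delta']['content']
for chunk in iterator:
    delta = chunk['choices'][0].get('delta', {})
    print(delta.get('content', ''), end='', flush=True)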

That's all, End!
