[Personal Dev] llama2 Deployment in Practice (4): Ways to Call the llama Service API

1. API call

python
import requests
# Chat-completions endpoint exposed by the local llama-cpp-python server
url = 'http://localhost:8000/v1/chat/completions'
headers = {
    'accept': 'application/json',
    'Content-Type': 'application/json'
}
data = {
    'messages': [
        {
            'content': 'You are a helpful assistant.',
            'role': 'system'
        },
        {
            'content': 'What is the capital of France?',
            'role': 'user'
        }
    ]
}
response = requests.post(url, headers=headers, json=data)
print(response.json())
# The generated text is in choices[0].message.content
print(response.json()['choices'][0]['message']['content'])

response.json() returns something like:

json
{'id': 'chatcmpl-b9ebe8c9-c785-4e5e-b214-bf7aeee879c3', 'object': 'chat.completion', 'created': 1710042123, 'model': '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin', 'choices': [{'index': 0, 'message': {'content': '\nWhat is the capital of France?\n(In case you want to use <</SYS>> and <</INST>> in the same script, the INST section must be placed outside the SYS section.)\n# INST\n# SYS\nThe INST section is used for internal definitions that may be used by the script without being included in the text. You can define variables or constants here. In order for any definition defined here to be used outside this section, it must be preceded by a <</SYS>> or <</INST>> marker.\nThe SYS section contains all of the definitions used by the script, that can be used by the user without being included directly into the text.', 'role': 'assistant'}, 'finish_reason': 'stop'}], 'usage': {'prompt_tokens': 33, 'completion_tokens': 147, 'total_tokens': 180}}
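
The same endpoint can also stream the reply token by token. The sketch below is only a minimal example under assumptions: it presumes the server follows the OpenAI-style SSE streaming format ("stream": true, chunks prefixed with `data: `, optionally terminated by `data: [DONE]`); the URL and messages are the same as above.

python
import json
import requests

url = 'http://localhost:8000/v1/chat/completions'
headers = {'accept': 'application/json', 'Content-Type': 'application/json'}
data = {
    'messages': [
        {'content': 'You are a helpful assistant.', 'role': 'system'},
        {'content': 'What is the capital of France?', 'role': 'user'}
    ],
    'stream': True  # assumption: the server supports OpenAI-style SSE streaming
}

with requests.post(url, headers=headers, json=data, stream=True) as response:
    for line in response.iter_lines():
        if not line:
            continue
        line = line.decode('utf-8')
        # SSE frames look like "data: {...}"; a final "data: [DONE]" may close the stream
        if line.startswith('data: '):
            payload = line[len('data: '):]
            if payload == '[DONE]':
                break
            chunk = json.loads(payload)
            delta = chunk['choices'][0]['delta']
            print(delta.get('content') or '', end='', flush=True)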

2. Calling via llama_cpp

python
from llama_cpp import Llama
model_path = '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin'
# n_ctx: context window size; n_gpu_layers: number of layers offloaded to the GPU
llm = Llama(model_path=model_path, verbose=False, n_ctx=2048, n_gpu_layers=30)
print(llm('how old are you?'))
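
The same `Llama` object also exposes a chat-style interface that applies the model's chat template to a list of messages. A minimal sketch reusing the `llm` instance created above (`max_tokens` is just an illustrative value):

python
# Chat-style call on the same Llama instance; messages follow the OpenAI schema
output = llm.create_chat_completion(
    messages=[
        {'role': 'system', 'content': 'You are a helpful assistant.'},
        {'role': 'user', 'content': 'What is the capital of France?'}
    ],
    max_tokens=128  # illustrative cap on the generated length
)
print(output['choices'][0]['message']['content'])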

3. Calling via langchain

python
from langchain.llms.llamacpp import LlamaCpp
model_path = '/data/opt/llama2_model/llama-2-7b-bin/ggml-model-f16.bin'
llm = LlamaCpp(model_path=model_path, verbose=False)
for s in llm.stream("write me a poem!"):
    print(s, end="", flush=True)
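
Streaming is not required; the wrapper also supports single-shot calls. A minimal sketch, assuming a LangChain version where `LlamaCpp` exposes the standard `invoke` method (older releases accept a plain `llm("...")` call instead):

python
# Single-shot (non-streaming) call through the LangChain wrapper
answer = llm.invoke("What is the capital of France?")
print(answer)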

4. Calling via openai

shell
# the openai package must be version 1.0 or newer
pip3 install openai

Code demo:

python
import os
from openai import OpenAI
import json 
client = OpenAI(
    base_url="http://127.0.0.1:8000/v1",
    api_key="none"  # placeholder; the local server does not validate the key
)

prompt_list = [
    {
        'content': 'You are a helpful assistant.',
        'role': 'system'
    },
    {
        'content': 'What is the capital of France?',
        'role': 'user'
    }
]


chat_completion = client.chat.completions.create(
    messages=prompt_list,
    model="llama2-7b",
    stream=True
)

for chunk in chat_completion:
    content = chunk.choices[0].delta.content
    # the final chunk carries no content (delta.content is None), so skip it
    if content is not None:
        print(content, end='')
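
For a non-streaming request with the same client, drop `stream=True` and read the whole message from the response object (model name reused from above):

python
# Non-streaming request: the complete answer arrives in one response object
chat_completion = client.chat.completions.create(
    messages=prompt_list,
    model="llama2-7b"
)
print(chat_completion.choices[0].message.content)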

If you are on openai < 1.0:

python
import openai

openai.api_base = "xxxxxxx"
openai.api_key = "xxxxxxx"
# Stream the reply with the pre-1.0 interface (reusing prompt_list from above)
iterator = openai.ChatCompletion.create(
    messages=prompt_list,
    model="llama2-7b",
    stream=True,
)
for chunk in iterator:
    delta = chunk["choices"][0]["delta"]
    print(delta.get("content", ""), end="")
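
With `stream=False` the pre-1.0 API returns a single object instead of an iterator; a minimal sketch of reading it (same `prompt_list` and model name as above):

python
# Non-streaming call with openai<1.0: the answer is in choices[0].message.content
response = openai.ChatCompletion.create(
    messages=prompt_list,
    model="llama2-7b",
    stream=False,
)
print(response["choices"][0]["message"]["content"])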

That's all!
