Preface 😀😀
In yesterday's article we saw how the project's pages are built, so the obvious next step is to connect them to our large language models. There are two main integrations:
- connecting to moonshot
- connecting to the relay station
Note that through the relay station we will actually integrate two services:
- a chatgpt service, which handles the translation task
- the MJ (Midjourney) drawing service
So next we will walk through how to set up these integrations. Once the models are connected, we can start building on top of them. Of course, if you also have a model running locally and can expose it through one_api, that is even cooler; but to make sure as many readers as possible can run this project, we will stick with the relay station.
Let's get started: first how to complete the integrations, and then how to turn them into our chat assistant. In the next chapter we will discuss how to tie all of this to the concrete business services.
Connecting to moonshot
Integrating moonshot is very straightforward: it exposes a standard OpenAI-compatible API, so we can call it directly with the openai Python library. Install the library, grab an API key, pick a model, and the connection is essentially done:
```python
def signChat(self, history):
    history_openai_format = []
    # Add the system message first
    history_openai_format.append(
        {"role": "system",
         "content": Config.settings.get("system_xiaoxi")},
    )
    # Then append the conversation history
    history_openai_format.extend(history)
    completion = client.chat.completions.create(
        model=Config.settings.get("default_model"),
        messages=history_openai_format,
        temperature=Config.settings.get("temperature"),
    )
    result = completion.choices[0].message.content
    return result
```
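The snippet reads everything it needs (key, endpoint, model, temperature) from a small settings file through `Config.settings.get(...)`. Here is a hypothetical sketch of those entries, just to make the assumptions explicit: the key names come from the code in this article, the values are placeholders, and the actual format and loader in `utils.Config` may differ in your project. The persona prompt (`system_xiaoxi`) is shown right after.

```python
# Hypothetical settings sketch: only the keys referenced in this article, values are placeholders.
settings = {
    "openai_api_key": "sk-...",                       # moonshot API key
    "openai_api_base": "https://api.moonshot.cn/v1",  # assumed moonshot endpoint
    "default_model": "moonshot-v1-8k",
    "temperature": 0.7,
    "image_api_key": "hk-...",                        # relay-station key, used later for chat + MJ drawing
}
```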
Here we also preconfigure the assistant's persona:
```python
"system_xiaoxi": "You are an all-round assistant named Xiaoxi (小汐), especially good at writing and story adaptation."
```
With that, moonshot is connected. It is not quite enough on its own, though: to make it convenient to use, we wrap it in a small class, which as you can see below also supports streaming conversation.
```python
import time

from openai import OpenAI

from utils import Config

api_key = Config.settings.get("openai_api_key")
client = OpenAI(api_key=api_key, base_url=Config.settings.get("openai_api_base"))


class ChatBotHandler(object):

    def __init__(self, bot_name="chat"):
        self.bot_name = bot_name
        self.current_message = None

    def user_stream(self, user_message, history):
        self.current_message = user_message
        return "", history + [[user_message, None]]

    def bot_stream(self, history):
        if len(history) == 0:
            history.append([self.current_message, None])
        bot_message = self.getResponse(history[-1][0], history)
        history[-1][1] = ""
        # Emit the reply character by character to simulate streaming output
        for character in bot_message:
            history[-1][1] += character
            time.sleep(0.02)
            yield history

    def signChat(self, history):
        history_openai_format = []
        # Add the system message first
        history_openai_format.append(
            {"role": "system",
             "content": Config.settings.get("system_xiaoxi")},
        )
        # Then append the conversation history
        history_openai_format.extend(history)
        completion = client.chat.completions.create(
            model=Config.settings.get("default_model"),
            messages=history_openai_format,
            temperature=Config.settings.get("temperature"),
        )
        result = completion.choices[0].message.content
        return result

    def getResponse(self, message, history):
        history_openai_format = []
        # System prompt for the basic conversation (added once, before the history)
        history_openai_format.append(
            {"role": "system",
             "content": Config.settings.get("system_xiaoxi")},
        )
        for human, assistant in history:
            if human is not None:
                history_openai_format.append({"role": "user", "content": human})
            if assistant is not None:
                history_openai_format.append({"role": "assistant", "content": assistant})
        completion = client.chat.completions.create(
            model=Config.settings.get("default_model"),
            messages=history_openai_format,
            temperature=Config.settings.get("temperature"),
        )
        result = completion.choices[0].message.content
        return result

    def chat(self, message, history):
        history_openai_format = []
        for human, assistant in history:
            history_openai_format.append({"role": "user", "content": human})
            history_openai_format.append({"role": "assistant", "content": assistant})
        history_openai_format.append({"role": "user", "content": message})
        response = client.chat.completions.create(
            model="moonshot-v1-8k",
            messages=history_openai_format,
            temperature=1.0,
            stream=True,
        )
        partial_message = ""
        for chunk in response:
            if chunk.choices[0].delta.content is not None:
                partial_message = partial_message + chunk.choices[0].delta.content
                yield partial_message
```
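Before wiring this into the UI, a quick sanity check can be run from a script. This is only a hedged usage sketch (it assumes the settings keys above are filled in): `signChat` takes OpenAI-style message dicts, while `chat` streams against a history of `[user, assistant]` pairs.

```python
# Minimal usage sketch for the wrapper above (assumes the settings file is configured).
handler = ChatBotHandler()

# One-shot call through the system-prompted helper
print(handler.signChat([{"role": "user", "content": "Introduce yourself in one sentence."}]))

# Streaming call: history is a list of [user, assistant] pairs
final = ""
for partial in handler.chat("Tell me a short story about a cat.", history=[]):
    final = partial  # each yield is the cumulative reply so far
print(final)
```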
Connecting to the relay station
Next up is the relay station. Here we connect to two services: one is again an openai-style chat service, the other is the drawing service.
Connecting to chat
This integration is slightly different, but broadly similar. Here we use requests and call the HTTP API in a fairly raw way.
```python
import json

import requests

# getConfig() is the project's config loader (same settings file as before)
config = getConfig()


class MyOpenAI():

    def __init__(self):
        self.url = "https://api.openai-hk.com/v1/chat/completions"
        self.headers = {
            "Content-Type": "application/json",
            # The relay station's openai key is used here
            "Authorization": "Bearer " + config.get("image_api_key")
        }

    def chat(self, message, prompt, temperature=0.8):
        data = {
            "max_tokens": 1200,
            "model": "gpt-3.5-turbo",
            "temperature": temperature,
            "top_p": 1,
            "presence_penalty": 1,
            "messages": [
                {
                    "role": "system",
                    "content": prompt
                },
                {
                    "role": "user",
                    "content": message
                }
            ]
        }
        response = requests.post(self.url, headers=self.headers, data=json.dumps(data).encode('utf-8'))
        result = response.content.decode("utf-8")
        result = json.loads(result)
        result = result["choices"][0]["message"]["content"]
        return result
```
All of this is covered in the relay station's own documentation; the class above is just a thin wrapper for convenience.
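As a quick illustration, the translation task mentioned in the preface can be driven through this wrapper. This is a hedged sketch: the prompt wording below is my own, not the project's actual prompt.

```python
# Minimal usage sketch: using MyOpenAI for the translation task.
translator = MyOpenAI()
english = translator.chat(
    message="今天天气不错,我们去散步吧。",
    prompt="You are a translator. Translate the user's text into natural English.",
)
print(english)
```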
Connecting to the drawing API
After that, we connect to the drawing API. The drawing integration takes three steps:
- submit a task
- poll the task progress and get the image URL
- fetch the image from that URL
Again, all of this can be found in the provider's documentation; here we simply wrap it up as well.
```python
import json
import os
import time
import uuid
from datetime import datetime
from io import BytesIO

import requests
from PIL import Image

# Same relay-station key as above
api_key = config.get("image_api_key")


class Text2Image():

    def __init__(self):
        # Build the request headers
        self.headers = {
            'Authorization': f'Bearer {api_key}',
            'Content-Type': 'application/json'
        }
        self.current_file_path = os.path.abspath(__file__)
        self.current_dir = os.path.dirname(self.current_file_path)
        self.resource_dir = self.current_dir + "/../resource"

    def get_taskId(self, prompt):
        def send(prompt, headers):
            data = {
                "base64Array": [],
                "instanceId": "",
                "modes": [],
                "notifyHook": "https://ww.baidu.com/notifyHook/back",
                "prompt": prompt,
                "remix": True,
                "state": ""
            }
            response = requests.post(
                url='https://api.openai-hk.com/fast/mj/submit/imagine',
                headers=headers,
                data=json.dumps(data)
            )
            return response.json()

        try:
            result = send(prompt, self.headers)
        except Exception as e:
            result = {'code': -1}
        if result.get('code') == 1:
            return result.get("result")
        else:
            return None

    # Example task id: 1713283471368561
    def get_Image(self, task_id):
        url = f'https://api.openai-hk.com/fast/mj/task/{task_id}/fetch'
        # Send the GET request
        response = requests.get(url, headers=self.headers)
        return response.json()

    def __create_img_stream(self):
        now = datetime.now()
        year_month_day = now.strftime("%Y%m%d")
        file_uuid = uuid.uuid4()
        audio_stream = self.resource_dir + "/img" + "/" + year_month_day + "/"
        if not os.path.exists(audio_stream):
            os.makedirs(audio_stream)
        audio_stream += file_uuid.hex + ".jpg"
        return audio_stream

    def __getImg(self, url):
        response = requests.get(url)
        response.raise_for_status()
        image = Image.open(BytesIO(response.content))
        width, height = image.size
        quarter_width = width // 2
        quarter_height = height // 2
        # Crop the top-left quarter of the 2x2 grid image
        cropped_image = image.crop((0, 0, quarter_width, quarter_height))
        return cropped_image

    # If the task fails, return None
    def text2image(self, prompt):
        # First get the task_id
        task_id = self.get_taskId(prompt)
        if task_id:
            return self.__text2image(prompt, task_id)
        else:
            return None

    def __text2image(self, prompt, task_id):
        res = self.get_Image(task_id)
        # The task failed
        if res.get("status") == "FAILURE":
            return None
        if res.get('progress') == "100%":
            return self.__getImg(res.get("imageUrl"))
        else:
            # Still generating; wait a moment and retry, image generation via the API is fairly slow
            time.sleep(2)
            return self.__text2image(prompt, task_id)
```
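A minimal usage sketch tying the three steps together (the save path below is illustrative; the project itself saves under `resource/img/<date>/`):

```python
# Submit a prompt, wait for the result, save the cropped image.
t2i = Text2Image()
img = t2i.text2image("a black cat")  # returns a PIL.Image, or None on failure
if img is not None:
    img.save("black_cat.png")        # illustrative path
else:
    print("Image generation failed")
```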
To give you an idea, here is the data we get back when we fetch the task after it has been submitted:
```python
response_data = {
    'id': '1713283927285806',
    'properties': {
        'discordChannelId': '1222483390712774667',
        'botType': 'MID_JOURNEY',
        'notifyHook': 'https://www.open-hk.com/openai/mjapi/16158-567/https%3A%2F%2Fww.baidu.com%2FnotifyHook%2Fback',
        'discordInstanceId': '1500442604632883200',
        'flags': 0,
        'messageId': '1229828231935426650',
        'messageHash': 'b1290620-0d25-4882-a72d-102dc174fc22',
        'nonce': '1501375981829570560',
        'finalPrompt': 'a black cat',
        'progressMessageId': '1229827516609331220',
        'messageContent': '**a black cat** - <@1222482757389910027> (fast)'
    },
    'action': 'IMAGINE',
    'status': 'SUCCESS',
    'prompt': 'a black cat',
    'promptEn': 'a black cat',
    'description': '/imagine a black cat',
    'submitTime': 1713283927285,
    'startTime': 1713284128607,
    'finishTime': 1713284300009,
    'progress': '100%',
    'imageUrl': 'https://proxy.xjai.top:33330/mjcdn/attachments/1222483390712774667/1229828230979129374/xizaizai0902_a_black_cat_b1290620-0d25-4882-a72d-102dc174fc22.png?ex=663119cb&is=661ea4cb&hm=1657fcc1bfd3f971fd2d9349ee8b5442a2b300f95e13ab72cee662d76e09789a&',
    'failReason': None,
    'state': '16158',
    'buttons': [
        {
            'customId': 'MJ::JOB::upsample::1::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'U1',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::upsample::2::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'U2',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::upsample::3::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'U3',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::upsample::4::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'U4',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::reroll::0::b1290620-0d25-4882-a72d-102dc174fc22::SOLO',
            'emoji': '🔄',
            'label': '',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::variation::1::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'V1',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::variation::2::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'V2',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::variation::3::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'V3',
            'type': 2,
            'style': 2
        },
        {
            'customId': 'MJ::JOB::variation::4::b1290620-0d25-4882-a72d-102dc174fc22',
            'emoji': '',
            'label': 'V4',
            'type': 2,
            'style': 2
        }
    ]
}
```
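Of everything in this response, the polling code above only cares about a few fields, as the quick illustration below shows (using the `response_data` sample from above):

```python
# Fields the Text2Image polling logic actually uses
print(response_data["status"])    # 'SUCCESS' / 'FAILURE' / in-progress states
print(response_data["progress"])  # '100%' once the task is finished
print(response_data["imageUrl"])  # the 2x2 grid image that __getImg downloads and crops
```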
Implementing the chat assistant 😊
Now that we can talk to the large language model, the next step is to integrate it into our application. This is not hard, so let's go straight to the chat page code from yesterday's article.
```python
import time

import streamlit as st

# ChatBotHandler is the moonshot wrapper defined earlier; import it from your project's module.


class AssistantNovel(object):

    def __init__(self):
        self.chat = ChatBotHandler()

    def get_response(self, prompt, history):
        return self.chat.signChat(history)

    def clear_chat_history(self):
        st.session_state.messages = [{"role": "assistant", "content": "🍭🍡 Hi! I'm Xiaoxi 🥰, your all-round creative assistant. I can help you refine and enrich your writing. 🧐"}]

    def page(self):
        # Main chat window
        prompt = st.chat_input(placeholder="Type a message")
        if "messages" not in st.session_state.keys():
            st.session_state.messages = [{"role": "assistant", "content": "🍭🍡 Hi! I'm Xiaoxi 🥰, your all-round creative assistant. I can help you refine and enrich your writing. 🧐"}]
        for message in st.session_state.messages:
            with st.chat_message(message["role"]):
                st.write(message["content"])
        if prompt:
            st.session_state.messages.append({"role": "user", "content": prompt})
            with st.chat_message("user"):
                st.write(prompt)
        if st.session_state.messages[-1]["role"] != "assistant":
            with st.chat_message("assistant"):
                with st.spinner("Thinking..."):
                    try:
                        response = self.get_response(prompt, st.session_state.messages)
                    except Exception as e:
                        print(e)
                        response = "Oops ┗|`O′|┛ something went wrong, please try again later! 😥"
                    placeholder = st.empty()
                    full_response = ''
                    # Render the reply character by character to mimic streaming
                    for item in response:
                        full_response += item
                        time.sleep(0.01)
                        placeholder.markdown(full_response)
                    placeholder.markdown(full_response)
            message = {"role": "assistant", "content": full_response}
            st.session_state.messages.append(message)
        st.button('Clear chat history', on_click=self.clear_chat_history)
```
Here we simply replaced
```python
response = self.get_response(prompt, st.session_state.messages)
```
with the interface we just wrote, and the integration is complete.
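To actually serve the page, a minimal entry point could look like the sketch below. The file and module names here are hypothetical; in practice, hook `page()` into whatever page layout the previous article set up.

```python
# assistant_app.py -- hypothetical entry point; run with: streamlit run assistant_app.py
from assistant_novel import AssistantNovel  # hypothetical module path for the class above

AssistantNovel().page()
```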