LLM Invocation Methods and Function Calling: From Basics to Practice

Introduction

As AI technology advances at a rapid pace, large language models (LLMs) have become a core driver of intelligent transformation across industries. Pure text generation, however, often falls short of complex business requirements, and this is where function calling becomes essential. This article examines how mainstream LLMs are invoked, with a focus on the principles and practice of function calling, to help developers build smarter and more useful AI applications.
What Is Function Calling?

Function calling is the bridge between an LLM and the outside world. While generating text, the model can recognize when an external tool or API is needed and emit the call's arguments in a predefined format. This turns the LLM from a mere text generator into an agent capable of carrying out complex tasks.
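For intuition, here is roughly what a tool call looks like on the wire. With the OpenAI chat completions API, for example, the assistant message carries a tool_calls field whose arguments arrive as a JSON string; the values below are illustrative:

```python
# Illustrative shape of an assistant message requesting a tool call
# (field names follow the OpenAI chat completions format; values are made up)
tool_call_message = {
    "role": "assistant",
    "content": None,          # no prose reply; the model chose to call a tool
    "tool_calls": [{
        "id": "call_abc123",  # hypothetical call ID
        "type": "function",
        "function": {
            "name": "get_weather",
            # note: arguments arrive as a JSON *string*, not a dict
            "arguments": '{"location": "Beijing", "unit": "celsius"}'
        }
    }]
}
```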
OpenAI Function Calling: Principles and Practice

Defining Tool Parameters with JSON Schema

JSON Schema plays a central role in OpenAI's function calling: it specifies the structure of each function's parameters, ensuring that the arguments the model emits conform to the expected format.
```python
import openai
import json
from typing import List, Dict, Any

# Configure the OpenAI client
client = openai.OpenAI(api_key="your-api-key")

# Define the parameter structure of each tool
def get_weather_tool():
    """Schema for the weather lookup tool."""
    return {
        "type": "function",
        "function": {
            "name": "get_weather",
            "description": "Get weather information for a given city",
            "parameters": {
                "type": "object",
                "properties": {
                    "location": {
                        "type": "string",
                        "description": "City name, e.g. Beijing, Shanghai"
                    },
                    "unit": {
                        "type": "string",
                        "enum": ["celsius", "fahrenheit"],
                        "description": "Temperature unit: Celsius or Fahrenheit"
                    }
                },
                "required": ["location"],
                "additionalProperties": False
            }
        }
    }

def search_database_tool():
    """Schema for the database query tool."""
    return {
        "type": "function",
        "function": {
            "name": "search_database",
            "description": "Search the database for relevant records",
            "parameters": {
                "type": "object",
                "properties": {
                    "query": {
                        "type": "string",
                        "description": "Search query"
                    },
                    "table": {
                        "type": "string",
                        "description": "Name of the table to query"
                    },
                    "limit": {
                        "type": "integer",
                        "description": "Maximum number of results",
                        "default": 10
                    }
                },
                "required": ["query", "table"],
                "additionalProperties": False
            }
        }
    }
```
The Tool Call + Response Chain Flow

The core flow of a function call boils down to the following steps:

1. The user sends input, which the LLM processes.
2. The LLM decides whether a tool is needed; if not, it replies directly.
3. If a tool is needed, the LLM emits the tool call's name and arguments.
4. The application executes the tool call and collects the result.
5. The result is fed back to the LLM, which generates the final reply and returns it to the user.

Below is a complete implementation:
```python
class FunctionCallingAgent:
    def __init__(self, api_key: str):
        """
        Initialize the function calling agent.

        Args:
            api_key: OpenAI API key
        """
        self.client = openai.OpenAI(api_key=api_key)
        self.conversation_history = []

    def add_message(self, role: str, content: str):
        """
        Append a message to the conversation history.

        Args:
            role: message role (user/assistant)
            content: message content
        """
        self.conversation_history.append({"role": role, "content": content})

    def get_weather(self, location: str, unit: str = "celsius") -> str:
        """
        Mock weather lookup.

        Args:
            location: city name
            unit: temperature unit

        Returns:
            str: weather information
        """
        # A real application would call an actual weather API here;
        # this demo uses mock data instead
        weather_data = {
            "Beijing": {"temperature": 25, "condition": "sunny", "humidity": 40},
            "Shanghai": {"temperature": 28, "condition": "cloudy", "humidity": 65},
            "Guangzhou": {"temperature": 32, "condition": "showers", "humidity": 80}
        }

        if location in weather_data:
            data = weather_data[location]
            temp = data["temperature"]
            if unit == "fahrenheit":
                temp = temp * 9/5 + 32
            return f"Weather in {location}: {temp}°{unit[0].upper()}, {data['condition']}, humidity {data['humidity']}%"
        else:
            return f"No weather data found for {location}"

    def search_database(self, query: str, table: str, limit: int = 10) -> str:
        """
        Mock database query.

        Args:
            query: query string
            table: table name
            limit: maximum number of results

        Returns:
            str: query results
        """
        # Mock query results
        sample_data = {
            "users": [
                {"id": 1, "name": "Zhang San", "email": "zhangsan@example.com"},
                {"id": 2, "name": "Li Si", "email": "lisi@example.com"},
                {"id": 3, "name": "Wang Wu", "email": "wangwu@example.com"}
            ],
            "products": [
                {"id": 1, "name": "Laptop", "price": 5999},
                {"id": 2, "name": "Smartphone", "price": 3999},
                {"id": 3, "name": "Tablet", "price": 2999}
            ]
        }

        if table in sample_data:
            results = sample_data[table][:limit]
            return f"Results for '{query}' in table {table}: {json.dumps(results, ensure_ascii=False)}"
        else:
            return f"Table {table} not found"

    def execute_tool_call(self, tool_name: str, arguments: Dict[str, Any]) -> str:
        """
        Execute a tool call.

        Args:
            tool_name: tool name
            arguments: tool arguments

        Returns:
            str: tool execution result
        """
        try:
            if tool_name == "get_weather":
                return self.get_weather(**arguments)
            elif tool_name == "search_database":
                return self.search_database(**arguments)
            else:
                return f"Unknown tool: {tool_name}"
        except Exception as e:
            return f"Tool execution error: {str(e)}"

    def process_user_input(self, user_input: str) -> str:
        """
        Process user input, possibly via function calls.

        Args:
            user_input: user input text

        Returns:
            str: final reply
        """
        # Append the user message to the history
        self.add_message("user", user_input)

        # Prepare the available tools
        tools = [get_weather_tool(), search_database_tool()]

        # First call: let the model decide whether a tool is needed
        first_response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=self.conversation_history,
            tools=tools,
            tool_choice="auto"
        )

        response_message = first_response.choices[0].message
        tool_calls = response_message.tool_calls

        # Record the assistant's first-round reply; include the tool_calls
        # field only when the model actually requested tools
        assistant_message = {
            "role": "assistant",
            "content": response_message.content
        }
        if tool_calls:
            assistant_message["tool_calls"] = [{
                "id": tool_call.id,
                "type": "function",
                "function": {
                    "name": tool_call.function.name,
                    "arguments": tool_call.function.arguments
                }
            } for tool_call in tool_calls]
        self.conversation_history.append(assistant_message)

        # If there are tool calls, execute them and collect the results
        if tool_calls:
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_args = json.loads(tool_call.function.arguments)

                # Execute the tool call
                tool_result = self.execute_tool_call(function_name, function_args)

                # Append the tool result to the history
                self.conversation_history.append({
                    "role": "tool",
                    "tool_call_id": tool_call.id,
                    "content": tool_result
                })

            # Second call: generate the final reply from the tool results
            second_response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history
            )

            final_message = second_response.choices[0].message.content
            self.add_message("assistant", final_message)
            return final_message
        else:
            # No tool calls; return the reply directly
            final_message = response_message.content
            self.add_message("assistant", final_message)
            return final_message
# Usage example
def main():
    # Initialize the agent (replace with a real API key in actual use)
    agent = FunctionCallingAgent(api_key="your-api-key-here")

    # Test cases
    test_cases = [
        "What's the weather like in Beijing today?",
        "List all users in the users table",
        "Look up the temperature in Shanghai, in Fahrenheit",
        "What products are in the products table?"
    ]

    for case in test_cases:
        print(f"User: {case}")
        response = agent.process_user_input(case)
        print(f"Assistant: {response}")
        print("-" * 50)

if __name__ == "__main__":
    main()
```
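One knob worth knowing here is tool_choice. Besides "auto" (the model decides), the OpenAI API accepts "none" (forbid tool calls) and a forced selection of a specific function. A brief sketch, reusing the client and tool schemas defined above:

```python
import openai

client = openai.OpenAI(api_key="your-api-key")
# Tool schemas as defined earlier in this article
tools = [get_weather_tool(), search_database_tool()]

# Force the model to call get_weather regardless of the prompt
forced = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "Tell me about Beijing."}],
    tools=tools,
    tool_choice={"type": "function", "function": {"name": "get_weather"}},
)

# Forbid tool calls entirely; the model must answer in plain text
no_tools = client.chat.completions.create(
    model="gpt-3.5-turbo",
    messages=[{"role": "user", "content": "What's the weather in Beijing?"}],
    tools=tools,
    tool_choice="none",
)
```

Forcing a specific function is handy for deterministic pipelines, while "none" is useful for a plain summarization pass over results you already have.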
Advanced: Calling Multiple Tools in One Request

In practice, a single user request may require several tools. Below is an enhanced version that supports multiple tool calls:
```python
class AdvancedFunctionCallingAgent(FunctionCallingAgent):
    def __init__(self, api_key: str):
        super().__init__(api_key)
        # Register additional tools
        self.available_tools = {
            "get_weather": {
                "function": self.get_weather,
                "schema": get_weather_tool()
            },
            "search_database": {
                "function": self.search_database,
                "schema": search_database_tool()
            },
            "calculate": {
                "function": self.calculate,
                "schema": self.get_calculate_tool_schema()
            }
        }

    def get_calculate_tool_schema(self):
        """Schema for the calculator tool."""
        return {
            "type": "function",
            "function": {
                "name": "calculate",
                "description": "Perform a mathematical calculation",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "expression": {
                            "type": "string",
                            "description": "Math expression, e.g. 2 + 3 * 4"
                        }
                    },
                    "required": ["expression"],
                    "additionalProperties": False
                }
            }
        }

    def calculate(self, expression: str) -> str:
        """
        Evaluate a math expression.

        Args:
            expression: math expression

        Returns:
            str: calculation result
        """
        try:
            # WARNING: eval() on model-generated input is NOT safe;
            # use a proper expression parser in production
            result = eval(expression)
            return f"Result: {expression} = {result}"
        except Exception as e:
            return f"Calculation error: {str(e)}"

    def process_complex_request(self, user_input: str) -> str:
        """
        Handle a complex request that may involve multiple tool calls.

        Args:
            user_input: user input

        Returns:
            str: final reply
        """
        self.add_message("user", user_input)

        # Collect the schemas of all available tools
        tools = [tool["schema"] for tool in self.available_tools.values()]

        # First call
        first_response = self.client.chat.completions.create(
            model="gpt-3.5-turbo",
            messages=self.conversation_history,
            tools=tools,
            tool_choice="auto"
        )

        response_message = first_response.choices[0].message
        tool_calls = response_message.tool_calls

        # Record the assistant reply
        assistant_message = {
            "role": "assistant",
            "content": response_message.content
        }

        if tool_calls:
            assistant_message["tool_calls"] = []
            for tool_call in tool_calls:
                assistant_message["tool_calls"].append({
                    "id": tool_call.id,
                    "type": "function",
                    "function": {
                        "name": tool_call.function.name,
                        "arguments": tool_call.function.arguments
                    }
                })

        self.conversation_history.append(assistant_message)

        # Execute all tool calls
        if tool_calls:
            tool_results = []
            for tool_call in tool_calls:
                function_name = tool_call.function.name
                function_args = json.loads(tool_call.function.arguments)

                if function_name in self.available_tools:
                    tool_function = self.available_tools[function_name]["function"]
                    try:
                        result = tool_function(**function_args)
                        tool_results.append({
                            "tool_call_id": tool_call.id,
                            "result": result
                        })
                    except Exception as e:
                        tool_results.append({
                            "tool_call_id": tool_call.id,
                            "result": f"Error: {str(e)}"
                        })
                else:
                    tool_results.append({
                        "tool_call_id": tool_call.id,
                        "result": f"Unknown tool: {function_name}"
                    })

            # Append the tool results
            for result in tool_results:
                self.conversation_history.append({
                    "role": "tool",
                    "tool_call_id": result["tool_call_id"],
                    "content": result["result"]
                })

            # Second call to generate the final reply
            second_response = self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history
            )

            final_message = second_response.choices[0].message.content
            self.add_message("assistant", final_message)
            return final_message
        else:
            final_message = response_message.content
            self.add_message("assistant", final_message)
            return final_message

# Test complex requests
def test_advanced_agent():
    agent = AdvancedFunctionCallingAgent(api_key="your-api-key-here")

    complex_requests = [
        "Check the weather in Beijing, then calculate (25 + 15) * 2",
        "Query the first 2 users in the users table, then tell me today's weather in Shanghai"
    ]

    for request in complex_requests:
        print(f"User: {request}")
        response = agent.process_complex_request(request)
        print(f"Assistant: {response}")
        print("=" * 80)

if __name__ == "__main__":
    test_advanced_agent()
```
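One caveat: the loop above still executes the model's tool calls one at a time. When the calls are independent and I/O-bound (API requests, database queries), they can run concurrently. A minimal sketch using asyncio.to_thread (Python 3.9+), assuming the tool functions are thread-safe:

```python
import asyncio
import json

async def run_tool_calls_concurrently(agent, tool_calls):
    """Execute a batch of tool calls in parallel worker threads and
    return (tool_call_id, result) pairs in the original order."""
    async def run_one(tool_call):
        args = json.loads(tool_call.function.arguments)
        # Each blocking tool runs in its own thread
        result = await asyncio.to_thread(
            agent.execute_tool_call, tool_call.function.name, args
        )
        return tool_call.id, result

    return await asyncio.gather(*(run_one(tc) for tc in tool_calls))
```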
Local / Private Deployment Options Compared

Beyond OpenAI's hosted API, enterprise applications often need to deploy LLMs locally or in a private environment. The mainstream options compare as follows:

HuggingFace Transformers
```python
from transformers import AutoTokenizer, AutoModelForCausalLM
import torch

class HuggingFaceLocalModel:
    def __init__(self, model_name: str):
        """
        Initialize a local HuggingFace model.

        Args:
            model_name: model name or path
        """
        self.device = "cuda" if torch.cuda.is_available() else "cpu"
        print(f"Using device: {self.device}")

        # Load the tokenizer and model
        self.tokenizer = AutoTokenizer.from_pretrained(model_name)
        self.model = AutoModelForCausalLM.from_pretrained(
            model_name,
            torch_dtype=torch.float16 if self.device == "cuda" else torch.float32,
            device_map="auto"
        )

        # If the tokenizer has no pad_token, fall back to eos_token
        if self.tokenizer.pad_token is None:
            self.tokenizer.pad_token = self.tokenizer.eos_token

    def generate_response(self, prompt: str, max_length: int = 512) -> str:
        """
        Generate a reply.

        Args:
            prompt: input prompt
            max_length: maximum total length (prompt plus generated tokens)

        Returns:
            str: generated reply
        """
        # Encode the input
        inputs = self.tokenizer.encode(prompt, return_tensors="pt").to(self.device)

        # Generate a reply
        with torch.no_grad():
            outputs = self.model.generate(
                inputs,
                max_length=max_length,
                num_return_sequences=1,
                temperature=0.7,
                do_sample=True,
                pad_token_id=self.tokenizer.eos_token_id
            )

        # Decode and return only the newly generated part
        # (the decoded text starts with the prompt)
        response = self.tokenizer.decode(outputs[0], skip_special_tokens=True)
        return response[len(prompt):]

# Usage example
def test_huggingface_model():
    # Use a small model for testing
    model = HuggingFaceLocalModel("microsoft/DialoGPT-medium")

    prompt = "Hello, how is the weather today?"
    response = model.generate_response(prompt)
    print(f"Prompt: {prompt}")
    print(f"Response: {response}")

if __name__ == "__main__":
    test_huggingface_model()
```
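The example above feeds raw text straight into encode, which suits base models like DialoGPT. Chat-tuned models usually ship a chat template, and transformers can render an OpenAI-style message list into the model's expected prompt via apply_chat_template. A sketch of a standalone helper that works with the model and tokenizer loaded above, assuming the tokenizer actually defines a chat template:

```python
import torch

def generate_chat_response(model, tokenizer, device, messages, max_new_tokens=256):
    """Render a chat message list with the model's own template, then generate.
    Assumes the tokenizer defines a chat template (most chat-tuned models do)."""
    prompt = tokenizer.apply_chat_template(
        messages, tokenize=False, add_generation_prompt=True
    )
    inputs = tokenizer(prompt, return_tensors="pt").to(device)
    with torch.no_grad():
        outputs = model.generate(
            **inputs,
            max_new_tokens=max_new_tokens,  # counts only newly generated tokens
            do_sample=True,
            temperature=0.7,
            pad_token_id=tokenizer.eos_token_id,
        )
    # Slice off the prompt by token count instead of string matching
    new_tokens = outputs[0][inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)
```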
vLLM: A High-Performance Inference Engine

vLLM is an inference engine purpose-built for LLMs and is well suited to production deployments:
```python
from vllm import LLM, SamplingParams

class VLLMEngine:
    def __init__(self, model_name: str):
        """
        Initialize the vLLM engine.

        Args:
            model_name: model name or path
        """
        self.llm = LLM(
            model=model_name,
            tensor_parallel_size=1,  # number of GPUs
            gpu_memory_utilization=0.9,
            max_model_len=2048
        )

    def batch_generate(self, prompts: list, max_tokens: int = 256) -> list:
        """
        Generate replies in batch.

        Args:
            prompts: list of prompts
            max_tokens: maximum number of tokens to generate

        Returns:
            list: list of replies
        """
        sampling_params = SamplingParams(
            temperature=0.7,
            top_p=0.9,
            max_tokens=max_tokens,
            stop_token_ids=[],  # stop tokens can be configured here
        )

        outputs = self.llm.generate(prompts, sampling_params)

        responses = []
        for output in outputs:
            generated_text = output.outputs[0].text
            responses.append(generated_text)

        return responses

# Usage example
def test_vllm_engine():
    # Note: install vLLM first and make sure a suitable GPU is available
    # pip install vllm
    engine = VLLMEngine("facebook/opt-125m")  # small model for testing

    prompts = [
        "Explain the concept of artificial intelligence:",
        "How do I learn programming?",
        "The weather is lovely today,"
    ]

    responses = engine.batch_generate(prompts)

    for prompt, response in zip(prompts, responses):
        print(f"Prompt: {prompt}")
        print(f"Response: {response}")
        print("-" * 50)

if __name__ == "__main__":
    test_vllm_engine()
```
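Beyond the offline LLM class, vLLM can also serve an OpenAI-compatible HTTP API, so existing OpenAI client code (including the agents above) can target a self-hosted model with just a base_url change. A sketch, assuming the server was started with something like `python -m vllm.entrypoints.openai.api_server --model facebook/opt-125m` (default port 8000):

```python
import openai

# Point the standard OpenAI client at the local vLLM server;
# the API key is unused by default but the client requires a value
local_client = openai.OpenAI(
    base_url="http://localhost:8000/v1",
    api_key="EMPTY",
)

# opt-125m is a base model, so we use the plain completions endpoint;
# chat.completions works the same way for chat-tuned models
completion = local_client.completions.create(
    model="facebook/opt-125m",  # must match the model the server loaded
    prompt="Explain the concept of artificial intelligence:",
    max_tokens=128,
)
print(completion.choices[0].text)
```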
Ollama: Local Model Deployment

Ollama is an easy-to-use tool for running LLMs locally and supports many open-source models:
```python
import requests
import json

class OllamaClient:
    def __init__(self, base_url: str = "http://localhost:11434"):
        """
        Initialize the Ollama client.

        Args:
            base_url: address of the Ollama service
        """
        self.base_url = base_url
        self.session = requests.Session()

    def list_models(self) -> list:
        """Fetch the list of available models."""
        try:
            response = self.session.get(f"{self.base_url}/api/tags")
            if response.status_code == 200:
                data = response.json()
                return data.get("models", [])
            else:
                print(f"Failed to fetch model list: {response.status_code}")
                return []
        except Exception as e:
            print(f"Failed to connect to the Ollama service: {e}")
            return []

    def generate_response(self, model: str, prompt: str,
                          system_prompt: str = None, **kwargs) -> str:
        """
        Generate a reply.

        Args:
            model: model name
            prompt: user prompt
            system_prompt: system prompt
            **kwargs: additional parameters

        Returns:
            str: generated reply
        """
        data = {
            "model": model,
            "prompt": prompt,
            "stream": False,
            "options": {
                "temperature": kwargs.get("temperature", 0.7),
                "top_p": kwargs.get("top_p", 0.9),
                "top_k": kwargs.get("top_k", 40),
                "num_predict": kwargs.get("max_tokens", 512)
            }
        }

        if system_prompt:
            data["system"] = system_prompt

        try:
            response = self.session.post(
                f"{self.base_url}/api/generate",
                json=data,
                timeout=60
            )

            if response.status_code == 200:
                result = response.json()
                return result.get("response", "")
            else:
                return f"Request failed: {response.status_code}"

        except Exception as e:
            return f"Error while generating reply: {str(e)}"

    def chat_completion(self, model: str, messages: list, **kwargs) -> str:
        """
        Chat completion interface (OpenAI-like message format).

        Args:
            model: model name
            messages: message list
            **kwargs: additional parameters

        Returns:
            str: assistant reply
        """
        # Flatten the message list into a single prompt
        prompt = ""
        for msg in messages:
            role = msg["role"]
            content = msg["content"]
            if role == "system":
                prompt = f"System: {content}\n\n" + prompt
            elif role == "user":
                prompt += f"User: {content}\n"
            elif role == "assistant":
                prompt += f"Assistant: {content}\n"

        prompt += "Assistant: "

        return self.generate_response(model, prompt, **kwargs)

# Usage example
def test_ollama_client():
    client = OllamaClient()

    # Check the available models
    models = client.list_models()
    print("Available models:", [model["name"] for model in models])

    if models:
        # Test with the first available model
        model_name = models[0]["name"]

        # Plain generation
        response = client.generate_response(
            model_name,
            "Please explain the concept of machine learning"
        )
        print(f"Response: {response}")

        # Chat format
        messages = [
            {"role": "system", "content": "You are a helpful assistant"},
            {"role": "user", "content": "Hello, please introduce yourself"}
        ]

        chat_response = client.chat_completion(model_name, messages)
        print(f"Chat response: {chat_response}")

if __name__ == "__main__":
    test_ollama_client()
```
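Note that chat_completion above flattens the message list into one prompt, which works with any model but bypasses the model's own chat formatting. Recent Ollama versions also expose a native /api/chat endpoint that accepts the message list directly; a sketch (the response shape is our reading of Ollama's API and should be verified against your installed version):

```python
import requests

def ollama_chat(messages, model, base_url="http://localhost:11434"):
    """Call Ollama's /api/chat endpoint with an OpenAI-style message list.
    Assumes an Ollama version that provides /api/chat."""
    data = {
        "model": model,
        "messages": messages,  # same role/content dicts as the OpenAI format
        "stream": False,
    }
    response = requests.post(f"{base_url}/api/chat", json=data, timeout=60)
    if response.status_code == 200:
        # The non-streaming response carries the reply under "message"
        return response.json().get("message", {}).get("content", "")
    return f"Request failed: {response.status_code}"

# Example usage (assumes a model named "llama3" has been pulled):
# print(ollama_chat([{"role": "user", "content": "Hello!"}], model="llama3"))
```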
Deployment Options: A Comparative Analysis

To help readers choose the right stack, here is a side-by-side comparison (TGI refers to HuggingFace's Text Generation Inference server):

| Feature | OpenAI API | HuggingFace Transformers | vLLM | Ollama | TGI |
|---|---|---|---|---|---|
| Deployment complexity | None (hosted) | Medium | Medium | Low | Medium |
| Inference speed | Fast (cloud) | Medium | Very fast | Fast | Very fast |
| Memory efficiency | N/A | Low | High | Medium | High |
| Model coverage | Limited | Extensive | Mainstream models | Curated models | Mainstream models |
| Cost | Pay per use | Free (own hardware) | Free (own hardware) | Free (own hardware) | Free (own hardware) |
| Production-ready | Yes | Needs tuning | Yes | Best for development | Yes |
| Function calling | Native | Custom | Custom | Limited | Custom |
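As the last row shows, most self-hosted stacks lack native function calling, but the pattern can be approximated with prompting: describe the tools in the system prompt, instruct the model to answer in strict JSON when it wants a tool, and parse its output. A minimal, model-agnostic sketch (the prompt wording and JSON convention here are our own, not a standard):

```python
import json
import re

# System prompt describing the available tools and the expected JSON reply
TOOL_PROMPT = """You can use the following tool:
- get_weather(location: str): look up the weather for a city.
If a tool is needed, reply ONLY with JSON such as:
{"tool": "get_weather", "arguments": {"location": "Beijing"}}
Otherwise, reply in plain text."""

def parse_tool_call(model_output: str):
    """Extract a {"tool": ..., "arguments": ...} object from raw model
    output; return None when the reply contains no valid tool call."""
    match = re.search(r"\{.*\}", model_output, re.DOTALL)
    if not match:
        return None
    try:
        payload = json.loads(match.group(0))
    except json.JSONDecodeError:
        return None
    if isinstance(payload, dict) and "tool" in payload and "arguments" in payload:
        return payload
    return None

# Feed TOOL_PROMPT as the system prompt to any local model, then inspect the reply
print(parse_tool_call('{"tool": "get_weather", "arguments": {"location": "Beijing"}}'))
```

This approach is less reliable than native function calling (models occasionally break the format), so production systems typically add validation and a retry on malformed output.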
Hands-On: Building a Complete Function Calling System

Let us now build a complete function calling system that integrates multiple tools with error handling:
```python
import asyncio
import json
import logging
import openai
from datetime import datetime
from typing import Dict, List, Any, Callable

# Configure logging
logging.basicConfig(level=logging.INFO)
logger = logging.getLogger(__name__)

class ToolRegistry:
    """Tool registry that manages all available tools."""

    def __init__(self):
        self.tools: Dict[str, Dict] = {}

    def register_tool(self, name: str, description: str, parameters: Dict,
                      function: Callable) -> None:
        """
        Register a tool.

        Args:
            name: tool name
            description: tool description
            parameters: parameter definitions
            function: tool function
        """
        self.tools[name] = {
            "description": description,
            "parameters": parameters,
            "function": function
        }

    def get_tool_schema(self, name: str) -> Dict:
        """Return the schema definition for a tool."""
        if name not in self.tools:
            raise ValueError(f"Tool not registered: {name}")

        tool = self.tools[name]
        return {
            "type": "function",
            "function": {
                "name": name,
                "description": tool["description"],
                "parameters": {
                    "type": "object",
                    "properties": tool["parameters"],
                    # Simplification: this registry marks every parameter as
                    # required, including ones that declare a default
                    "required": list(tool["parameters"].keys()),
                    "additionalProperties": False
                }
            }
        }

    def get_all_schemas(self) -> List[Dict]:
        """Return the schemas of all registered tools."""
        return [self.get_tool_schema(name) for name in self.tools.keys()]

    def execute_tool(self, name: str, arguments: Dict) -> Any:
        """Execute a tool."""
        if name not in self.tools:
            raise ValueError(f"Tool not registered: {name}")

        tool = self.tools[name]
        return tool["function"](**arguments)

class RobustFunctionCallingAgent:
    """A robust function calling agent."""

    def __init__(self, api_key: str, tool_registry: ToolRegistry):
        self.client = openai.OpenAI(api_key=api_key)
        self.tool_registry = tool_registry
        self.conversation_history = []
        self.max_retries = 3

    async def process_with_retry(self, user_input: str) -> str:
        """
        Process input with retries.

        Args:
            user_input: user input

        Returns:
            str: final reply
        """
        for attempt in range(self.max_retries):
            try:
                return await self._process_single_attempt(user_input)
            except Exception as e:
                logger.error(f"Attempt {attempt + 1} failed: {str(e)}")
                if attempt == self.max_retries - 1:
                    return "Sorry, something went wrong while handling your request. Please try again later."
                await asyncio.sleep(1)  # back off before retrying

    async def _process_single_attempt(self, user_input: str) -> str:
        """A single processing attempt."""
        # Append the user message
        self.conversation_history.append({"role": "user", "content": user_input})

        # Collect the tool schemas
        tools = self.tool_registry.get_all_schemas()

        # First LLM call (run the blocking SDK call in a thread pool)
        loop = asyncio.get_running_loop()
        first_response = await loop.run_in_executor(
            None,
            lambda: self.client.chat.completions.create(
                model="gpt-3.5-turbo",
                messages=self.conversation_history,
                tools=tools,
                tool_choice="auto"
            )
        )

        response_message = first_response.choices[0].message
        tool_calls = response_message.tool_calls

        # Record the assistant reply
        assistant_msg = {
            "role": "assistant",
            "content": response_message.content or ""
        }

        if tool_calls:
            assistant_msg["tool_calls"] = []
            for tool_call in tool_calls:
                assistant_msg["tool_calls"].append({
                    "id": tool_call.id,
                    "type": "function",
                    "function": {
                        "name": tool_call.function.name,
                        "arguments": tool_call.function.arguments
                    }
                })

        self.conversation_history.append(assistant_msg)

        # Handle tool calls
        if tool_calls:
            tool_results = []
            for tool_call in tool_calls:
                try:
                    function_name = tool_call.function.name
                    function_args = json.loads(tool_call.function.arguments)

                    # Execute the tool (the lambda is awaited immediately,
                    # so it safely captures the current loop variables)
                    result = await loop.run_in_executor(
                        None,
                        lambda: self.tool_registry.execute_tool(function_name, function_args)
                    )

                    tool_results.append({
                        "tool_call_id": tool_call.id,
                        "content": str(result)
                    })

                except Exception as e:
                    logger.error(f"Tool execution failed: {str(e)}")
                    tool_results.append({
                        "tool_call_id": tool_call.id,
                        "content": f"Tool execution error: {str(e)}"
                    })

            # Append the tool results
            for result in tool_results:
                self.conversation_history.append({
                    "role": "tool",
                    "tool_call_id": result["tool_call_id"],
                    "content": result["content"]
                })

            # Second LLM call
            second_response = await loop.run_in_executor(
                None,
                lambda: self.client.chat.completions.create(
                    model="gpt-3.5-turbo",
                    messages=self.conversation_history
                )
            )

            final_message = second_response.choices[0].message.content
            self.conversation_history.append({"role": "assistant", "content": final_message})
            return final_message

        else:
            final_message = response_message.content or "Sorry, I did not understand your request."
            self.conversation_history.append({"role": "assistant", "content": final_message})
            return final_message

# Define and register tools
def setup_tool_registry() -> ToolRegistry:
    """Set up the tool registry."""
    registry = ToolRegistry()

    # Register the weather lookup tool
    registry.register_tool(
        name="get_weather",
        description="Get weather information for a given city",
        parameters={
            "location": {
                "type": "string",
                "description": "City name"
            },
            "unit": {
                "type": "string",
                "enum": ["celsius", "fahrenheit"],
                "description": "Temperature unit",
                "default": "celsius"
            }
        },
        function=lambda location, unit="celsius": f"Weather in {location}: 25°{unit[0].upper()}, sunny"  # mock implementation
    )

    # Register the calculator tool
    registry.register_tool(
        name="calculate",
        description="Perform a mathematical calculation",
        parameters={
            "expression": {
                "type": "string",
                "description": "Math expression"
            }
        },
        function=lambda expression: f"Result: {eval(expression)}"  # note: production code needs a safer implementation than eval
    )

    # Register the time lookup tool
    registry.register_tool(
        name="get_current_time",
        description="Get the current time",
        parameters={},
        function=lambda: f"Current time: {datetime.now().strftime('%Y-%m-%d %H:%M:%S')}"
    )

    return registry

# Async test function
async def test_robust_agent():
    """Test the robust agent."""
    registry = setup_tool_registry()
    agent = RobustFunctionCallingAgent(api_key="your-api-key-here", tool_registry=registry)

    test_cases = [
        "What's the weather like in Beijing today?",
        "Calculate (15 + 25) * 3",
        "What time is it now?",
        "Check the weather first, then tell me the current time"  # compound request
    ]

    for case in test_cases:
        print(f"User: {case}")
        response = await agent.process_with_retry(case)
        print(f"Assistant: {response}")
        print("=" * 60)

if __name__ == "__main__":
    # Run the async test
    asyncio.run(test_robust_agent())
```
Summary

This article walked through the principles and practice of LLM function calling, from OpenAI's native Function Calling to a range of local deployment options. With the complete, commented code examples, readers can:

- Understand the core concepts of function calling: JSON Schema definitions, the tool call flow, and other key mechanics
- Weigh the deployment options: the strengths and weaknesses of the OpenAI API, HuggingFace Transformers, vLLM, Ollama, and similar stacks
- Build robust systems: error handling, retry logic, tool registries, and other engineering practices
- Meet real business needs: get productive quickly from the complete sample code

Function calling upgrades an LLM from a plain text generator into an agent that can carry out complex tasks, and it is a key building block of practical AI applications. As the technology matures, it will play a role in ever more scenarios.

In the next article, we will dive into the LangChain framework and learn how to use its components to build more sophisticated AI applications.