# Claude Code on Open-Source Models in Practice: Kimi/GLM on SageMaker + LiteLLM Routing for a 70% Cost Cut
Last month we rolled out Claude Code to the whole team; by month's end, the token line on the bill had quadrupled.

Frankly, Claude Code has exactly two pain points: **code security** and **cost**. Your code gets shipped to a cloud API, which finance and healthcare shops can't accept; and token usage balloons, so the cost curve gets frighteningly steep.

I spent two weeks tinkering and landed on a setup: open-source models deployed on Amazon SageMaker plus a LiteLLM Proxy doing intelligent routing, with side tasks offloaded to the private model. Measured cost-performance improved roughly 3.2x, and code never leaves the VPC.
## First, the task breakdown: 60% of tokens go to "chores"
When Claude Code executes a task, it automatically splits the work into **main-line tasks** and **side tasks**:

- **Main-line tasks**: code refactoring, architecture design, gnarly bug hunts; these need deep reasoning
- **Side tasks**: session title generation, Bash command descriptions, Hook condition evaluation; fixed format, simple logic

A week of combing through call logs showed side tasks eating over 60% of token consumption. Running them on Claude Sonnet is overkill.
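For reference, here is a sketch of the kind of tally I mean, assuming your gateway exports per-request logs as JSONL; the field names (`prompt`, `total_tokens`) and the file name are hypothetical, so adapt them to your own log format:

```python
# Hypothetical tally: classify each logged request as main-line vs side task
# by prompt markers, then sum token usage. Field names are illustrative.
import json

SIDE_TASK_MARKERS = (
    "Generate a short title",                # session title generation
    "Describe what this bash command does",  # Bash command descriptions
    "You are evaluating a hook",             # hook condition evaluation
)

totals = {"side": 0, "main": 0}
with open("requests.jsonl") as f:
    for line in f:
        rec = json.loads(line)
        kind = "side" if any(m in rec["prompt"] for m in SIDE_TASK_MARKERS) else "main"
        totals[kind] += rec["total_tokens"]

share = totals["side"] / (totals["side"] + totals["main"])
print(f"Side-task share of tokens: {share:.0%}")
```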
## Overall architecture

```
Claude Code → LiteLLM Proxy (Task Router)
              ├── main-line → Amazon Bedrock (Claude Sonnet)
              └── side task → Amazon SageMaker (Kimi/GLM)
```

LiteLLM Proxy serves as the unified gateway; the Task Router classifies each request automatically from prompt features.

Side tasks are handled by SageMaker inside the VPC and never leave the internal network. Main-line tasks go to Amazon Bedrock, which offers VPC endpoints and SOC 2 / ISO 27001 certifications.
## Step 1: Deploy on SageMaker

The inference engine is SGLang, which natively supports the SageMaker Inference API. Recommended models: Kimi-K2.5 or GLM-5.
```bash
git clone https://github.com/ybalbert001/claude-code-aws-skills.git
cd claude-code-aws-skills/skills/sglang-deploy
python deploy.py \
  --model-id kimi-k2.5 \
  --instance-type ml.p5.48xlarge \
  --endpoint-name kimi-endpoint \
  --region us-east-1
```
Pitfall: I first picked ml.g5.12xlarge and hit an immediate OOM. Work out the VRAM requirement up front.
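The back-of-the-envelope math looks like this (a sketch only: weights dominate, but real usage adds KV cache and engine overhead, so pad generously):

```python
def estimate_weight_vram_gb(params_billions: float, bytes_per_param: float) -> float:
    """Rough lower bound: model weights only, no KV cache or engine overhead."""
    return params_billions * bytes_per_param

# e.g. a hypothetical 70B model in BF16 (2 bytes/param) needs ~140 GB for
# weights alone -- already more than the 4 x 24 GB = 96 GB of GPU memory
# on ml.g5.12xlarge, so it OOMs before serving a single request.
print(estimate_weight_vram_gb(70, 2.0))  # -> 140.0
```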
## Step 2: Configure the LiteLLM Proxy
config.yaml:
```yaml
# config.yaml
general_settings:
  store_model_in_db: true
  master_key: "sk-your-master-key"

router_settings:
  timeout: 180

litellm_settings:
  callbacks:
    - "stream_anthropic_schema_fixer.hook"
    - "dynamic_tagging_handler.proxy_handler_instance"

model_list:
  - model_name: sagemaker-kimi-2-5
    litellm_params:
      model: sagemaker_chat/kimi-endpoint
      aws_region_name: us-east-1
      timeout: 180
      max_tokens: 8192
      drop_params: true
  - model_name: bedrock-claude-sonnet46
    litellm_params:
      model: bedrock/anthropic.claude-sonnet-4-6-v1:0
      aws_region_name: us-west-2
      timeout: 300
```
The callbacks entry registers two hooks, dynamic routing and schema repair; both are covered in detail below.
## Step 3: Launch with Docker Compose
```yaml
# docker-compose.yml
services:
  litellm:
    image: ghcr.io/berriai/litellm:v1.82.3-stable
    restart: always
    volumes:
      - ./config.yaml:/app/config.yaml
      - ./stream_anthropic_schema_fixer.py:/app/stream_anthropic_schema_fixer.py:ro
      - ./dynamic_tagging_handler.py:/app/dynamic_tagging_handler.py:ro
    command:
      - "--config=/app/config.yaml"
    ports:
      - "8080:4000"
    environment:
      DATABASE_URL: "postgresql://llmproxy:dbpassword9090@db:5432/litellm"
      STORE_MODEL_IN_DB: "True"
      ENABLE_ANTHROPIC_SCHEMA_FIX: "true"
    env_file:
      - .env
    depends_on:
      - db

  db:
    image: postgres:16
    restart: always
    environment:
      POSTGRES_USER: llmproxy
      POSTGRES_PASSWORD: dbpassword9090
      POSTGRES_DB: litellm
    volumes:
      - pgdata:/var/lib/postgresql/data

volumes:
  pgdata:
```
```bash
docker compose up -d
```
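Before wiring up Claude Code, a quick smoke test against LiteLLM's OpenAI-compatible endpoint is worth it (assuming the compose stack above, with the master key from config.yaml):

```bash
# Should return a normal chat completion from the SageMaker endpoint
curl http://localhost:8080/v1/chat/completions \
  -H "Authorization: Bearer sk-your-master-key" \
  -H "Content-Type: application/json" \
  -d '{"model": "sagemaker-kimi-2-5", "messages": [{"role": "user", "content": "ping"}]}'
```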
## Step 4: Wire up Claude Code
```bash
alias cc_proxy="ANTHROPIC_API_KEY=sk-your-litellm-key \
  ANTHROPIC_BASE_URL=http://your-litellm-host:8080 \
  ANTHROPIC_DEFAULT_SONNET_MODEL=bedrock-claude-sonnet46 \
  ANTHROPIC_DEFAULT_HAIKU_MODEL=bedrock-claude-haiku45 \
  CLAUDE_CODE_SUBAGENT_MODEL=bedrock-claude-sonnet45 \
  claude"
```
Launch with cc_proxy and developers don't notice a thing.
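To confirm the routing end to end, the hook we'll add in Step 5 prints each decision (`[DynamicRouting] Routing to ...`), so once it's in place you can watch side tasks hit SageMaker in real time:

```bash
# Tail the proxy logs and filter for the hook's routing decisions
docker compose logs -f litellm | grep DynamicRouting
```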
## Step 5: The dynamic routing hook (the core piece)

A LiteLLM Callback Handler intercepts each request before the API call and rewrites the target model on the fly.
```python
# dynamic_tagging_handler.py
from litellm.integrations.custom_logger import CustomLogger


class DynamicRoutingHandler(CustomLogger):
    def log_pre_api_call(self, kwargs, response_obj, start_time, end_time):
        """Intercept before the API call and reroute based on task type."""
        messages = kwargs.get("messages", [])
        full_text = self._extract_all_text(messages)
        task_model = self._detect_task_type(full_text)
        if task_model:
            print(f"[DynamicRouting] Routing to {task_model}")
            kwargs["model"] = task_model
        return kwargs

    def _extract_all_text(self, messages):
        """Collect all text content from the message list."""
        text_parts = []
        for msg in messages:
            content = msg.get("content", "")
            if isinstance(content, str):
                text_parts.append(content)
            elif isinstance(content, list):
                for block in content:
                    if block.get("type") == "text":
                        text_parts.append(block.get("text", ""))
        return " ".join(text_parts)

    def _detect_task_type(self, text):
        """Classify the task; return a target model, or None to keep the default."""
        if self._is_hook_evaluator(text):
            return "sagemaker-kimi-2-5"
        elif self._is_session_title_generator(text):
            return "sagemaker-kimi-2-5"
        elif self._is_bash_description_writer(text):
            return "sagemaker-kimi-2-5"
        elif len(text) > 10000:
            return "bedrock-claude-sonnet46"
        return None

    def _is_hook_evaluator(self, text):
        """Detect hook-condition evaluation tasks via multiple markers."""
        markers = [
            "You are evaluating a hook in Claude Code",
            "hook condition",
            "Return your evaluation as a JSON object",
            '"satisfied": true',
        ]
        match_count = sum(1 for m in markers if m in text)
        return match_count >= 3

    def _is_session_title_generator(self, text):
        return "Generate a short title" in text and "conversation" in text

    def _is_bash_description_writer(self, text):
        return "Describe what this bash command does" in text


proxy_handler_instance = DynamicRoutingHandler()
```
Multi-marker threshold matching avoids misrouting; the measured hit rate is above 95%.
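A quick way to sanity-check the thresholds before shipping is a small test against the handler above; the sample prompt below is fabricated but contains the real markers:

```python
handler = DynamicRoutingHandler()

# Fabricated prompt hitting all four hook-evaluator markers (>= 3 required)
hook_prompt = (
    "You are evaluating a hook in Claude Code. "
    "The hook condition is: a file was edited. "
    'Return your evaluation as a JSON object like {"satisfied": true}.'
)
assert handler._detect_task_type(hook_prompt) == "sagemaker-kimi-2-5"

# A normal coding request should fall through to the default model
assert handler._detect_task_type("Refactor this module for readability") is None
```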
## Step 6: Repairing the streaming schema

This step cost me two days. Claude Code's streaming parser follows the Anthropic Messages API to the letter; when an open-source model's stream is missing fields, it errors out immediately.

The solution is a schema-repair hook that patches fields chunk by chunk:
```python
# stream_anthropic_schema_fixer.py
from typing import AsyncGenerator

from litellm.integrations.custom_logger import CustomLogger


class AnthropicSchemaFixerHook(CustomLogger):
    async def async_post_call_streaming_iterator_hook(
        self,
        user_api_key_dict,
        response: AsyncGenerator,
        request_data: dict,
    ) -> AsyncGenerator:
        """Intercept the streaming response and repair the schema chunk by chunk."""
        last_usage = None
        async for chunk in response:
            if not isinstance(chunk, bytes):
                yield chunk
                continue
            try:
                decoded = chunk.decode("utf-8")
                if not decoded.startswith("event:"):
                    yield chunk
                    continue
                event_type, data_json = self._parse_sse(decoded)
                modified = False
                if event_type == "message_start":
                    modified = self._fix_message_start(data_json)
                elif event_type == "message_delta":
                    modified, usage = self._fix_message_delta(data_json)
                    if usage:
                        last_usage = usage
                elif event_type == "message_stop":
                    modified = self._fix_message_stop(data_json, last_usage)
                if modified:
                    yield self._rebuild_sse(event_type, data_json)
                else:
                    yield chunk
            except Exception:
                yield chunk


hook = AnthropicSchemaFixerHook()
```
The core idea: intercept the SSE stream → parse → patch fields per event type → re-encode. With the repair in place, streaming works normally and there's no fallback to non-streaming mode (where SageMaker tends to time out).
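The listing above elides the parse/patch helpers. For completeness, here is a minimal sketch of what they could look like as methods of `AnthropicSchemaFixerHook`, assuming Anthropic's standard SSE framing (one `event:` line plus one `data:` line per frame); the `_fix_message_start` body is purely illustrative, not the author's actual patch logic:

```python
import json


class AnthropicSchemaFixerHookHelpers:  # sketch: merge into AnthropicSchemaFixerHook
    def _parse_sse(self, decoded: str):
        """Split an SSE frame ('event: <type>' plus 'data: <json>') into (type, dict)."""
        event_type, data_json = None, {}
        for line in decoded.strip().splitlines():
            if line.startswith("event:"):
                event_type = line[len("event:"):].strip()
            elif line.startswith("data:"):
                data_json = json.loads(line[len("data:"):].strip())
        return event_type, data_json

    def _rebuild_sse(self, event_type: str, data_json: dict) -> bytes:
        """Re-encode a patched event back into SSE wire format."""
        return f"event: {event_type}\ndata: {json.dumps(data_json)}\n\n".encode("utf-8")

    def _fix_message_start(self, data_json: dict) -> bool:
        """Illustrative patch: backfill the usage block some open models omit."""
        message = data_json.get("message", {})
        if "usage" not in message:
            message["usage"] = {"input_tokens": 0, "output_tokens": 0}
            data_json["message"] = message
            return True
        return False
```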
## Results

Two weeks of data:

- Side-task share: roughly 60-65% of requests routed to SageMaker
- Cost: down about 70%, roughly a 3.2x cost-performance gain
- Code security: side-task code never leaves the VPC
- Developer experience: completely transparent
## Pitfalls, summarized

- **OOM**: do the VRAM math before deploying; when in doubt, size the instance up
- **Schema incompatibility**: Claude Code updates fast, so the repair hook needs ongoing maintenance
- **Routing misses**: require multiple markers (≥3); never rely on a single one
- **Cold starts**: configure provisioned concurrency
- **Version pinning**: pin LiteLLM to v1.82.3-stable
## Full code and references

- Code repository: https://github.com/ybalbert001/sglang-aws-kit/tree/main/customerize_litellm
- Deployment tooling: https://github.com/ybalbert001/claude-code-aws-skills/tree/main/skills/sglang-deploy
- AWS China official blog: https://aws.amazon.com/cn/blogs/china/claude-code-open-source-model-enterprise-practice/
- LiteLLM: https://docs.litellm.ai/
- SGLang: https://github.com/sgl-project/sglang

The setup has been running for two weeks across a 15-person team without issues. Kimi-K2.5 and GLM-5 punch well above their weight on side tasks, and the share of work that can be offloaded will only grow. Happy to compare notes in the comments.