Build an AI chat app from scratch with streaming output, multi-model switching, and conversation history. The backend is Node.js (Express), the frontend is React, and every core API call goes through TheRouter: one key switches among Claude, GPT-4o, DeepSeek, and any other model.
Project structure
ai-chat/
├── server/                  # Express backend
│   ├── package.json
│   ├── index.js             # entry point + routes
│   └── .env
├── client/                  # React frontend
│   ├── package.json
│   ├── src/
│   │   ├── App.tsx
│   │   ├── components/
│   │   │   ├── ChatWindow.tsx     # message list
│   │   │   ├── MessageInput.tsx   # input box
│   │   │   └── ModelSelector.tsx  # model switcher
│   │   └── hooks/
│   │       └── useChat.ts         # core state logic
│   └── index.html
└── README.md
Backend: Express + streaming relay
Setup
bash
mkdir ai-chat && cd ai-chat
mkdir server && cd server
npm init -y
npm install express openai cors dotenv
server/.env:
THEROUTER_API_KEY=your-key-here
PORT=3001
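If THEROUTER_API_KEY is missing, the OpenAI client only fails later with a confusing auth error. A fail-fast guard at startup is cheap; a minimal sketch (`assertEnv` is a hypothetical helper, not part of the project, called before constructing the client):

```javascript
// Fail fast at startup if required configuration is absent.
// `assertEnv` is a hypothetical helper for illustration.
function assertEnv(env) {
  for (const key of ["THEROUTER_API_KEY", "PORT"]) {
    if (!env[key]) {
      throw new Error(`Missing required env var: ${key} (check server/.env)`);
    }
  }
  return env;
}

const ok = assertEnv({ THEROUTER_API_KEY: "sk-test", PORT: "3001" });
console.log(ok.PORT); // "3001"
```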
Core routes
javascript
// server/index.js
import express from "express";
import cors from "cors";
import { OpenAI } from "openai";
import "dotenv/config";

const app = express();
app.use(cors({ origin: "http://localhost:5173" })); // Vite's default port
app.use(express.json());

// Access every model through TheRouter
const ai = new OpenAI({
  apiKey: process.env.THEROUTER_API_KEY,
  baseURL: "https://api.therouter.ai/v1",
});

// Supported models (the frontend dropdown is populated from this)
const MODELS = [
  { id: "anthropic/claude-sonnet-4", label: "Claude Sonnet 4", badge: "Recommended" },
  { id: "anthropic/claude-haiku-3-5", label: "Claude Haiku 3.5", badge: "Fast" },
  { id: "openai/gpt-4o", label: "GPT-4o", badge: "" },
  { id: "openai/gpt-4o-mini", label: "GPT-4o Mini", badge: "Budget" },
  { id: "deepseek/deepseek-chat", label: "DeepSeek V3", badge: "Chinese" },
];

app.get("/api/models", (req, res) => {
  res.json(MODELS);
});

/**
 * POST /api/chat
 * Body: { model: string, messages: Array<{role, content}> }
 * Responds with an SSE stream.
 */
app.post("/api/chat", async (req, res) => {
  const { model, messages } = req.body;
  if (!model || !Array.isArray(messages) || messages.length === 0) {
    return res.status(400).json({ error: "model and messages are required" });
  }

  // SSE response headers
  res.setHeader("Content-Type", "text/event-stream");
  res.setHeader("Cache-Control", "no-cache");
  res.setHeader("Connection", "keep-alive");
  res.flushHeaders();

  try {
    const stream = await ai.chat.completions.create({
      model,
      messages,
      stream: true,
      max_tokens: 2048,
    });
    for await (const chunk of stream) {
      const delta = chunk.choices[0]?.delta?.content;
      if (delta) {
        // SSE framing: data: <json>\n\n
        res.write(`data: ${JSON.stringify({ content: delta })}\n\n`);
      }
      if (chunk.choices[0]?.finish_reason === "stop") {
        res.write(`data: [DONE]\n\n`);
        break;
      }
    }
  } catch (err) {
    res.write(`data: ${JSON.stringify({ error: err.message })}\n\n`);
  } finally {
    res.end();
  }
});

app.listen(process.env.PORT, () => {
  console.log(`Server running on http://localhost:${process.env.PORT}`);
});
Key points:
- SSE (Server-Sent Events) is simpler than WebSocket, and one-way push is exactly what streaming output needs
- The openai npm package's stream: true mode returns an AsyncIterable, so each chunk can be forwarded to the frontend as it arrives
- res.write() does not close the connection; only res.end() does
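The `data: <json>\n\n` framing is easy to check in isolation. A minimal sketch of a parser for a buffer of SSE text (`parseSSE` is a name made up for illustration, not part of the project code):

```javascript
// Parse a buffer of SSE text into an array of decoded data payloads.
// Hypothetical helper for illustration; the real client parses line by line.
function parseSSE(buffer) {
  const events = [];
  for (const line of buffer.split("\n")) {
    if (!line.startsWith("data: ")) continue; // only SSE data lines matter here
    const payload = line.slice(6);
    if (payload === "[DONE]") break;          // server's end-of-stream marker
    events.push(JSON.parse(payload));
  }
  return events;
}

const raw = 'data: {"content":"He"}\n\ndata: {"content":"llo"}\n\ndata: [DONE]\n\n';
console.log(parseSSE(raw).map((e) => e.content).join("")); // "Hello"
```

Note that the `[DONE]` sentinel is plain text, not JSON, so it has to be special-cased before `JSON.parse`.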
Frontend: React + typewriter rendering
Setup
bash
cd ../
npm create vite@latest client -- --template react-ts
cd client
npm install react-markdown react-syntax-highlighter @types/react-syntax-highlighter
Core hook: useChat
typescript
// client/src/hooks/useChat.ts
import { useState, useRef, useCallback } from "react";

export interface Message {
  id: string;
  role: "user" | "assistant";
  content: string;
  model?: string;
}

export function useChat() {
  const [messages, setMessages] = useState<Message[]>([]);
  const [isStreaming, setIsStreaming] = useState(false);
  const [selectedModel, setSelectedModel] = useState("anthropic/claude-sonnet-4");
  const abortRef = useRef<AbortController | null>(null);

  const sendMessage = useCallback(
    async (userText: string) => {
      if (!userText.trim() || isStreaming) return;

      // Append the user message
      const userMsg: Message = {
        id: crypto.randomUUID(),
        role: "user",
        content: userText,
      };
      // Placeholder assistant message, filled in as the stream arrives
      const assistantMsg: Message = {
        id: crypto.randomUUID(),
        role: "assistant",
        content: "",
        model: selectedModel,
      };
      setMessages((prev) => [...prev, userMsg, assistantMsg]);
      setIsStreaming(true);

      // History to send to the backend (excluding the empty assistant placeholder)
      const history = [...messages, userMsg].map(({ role, content }) => ({ role, content }));

      abortRef.current = new AbortController();
      try {
        const resp = await fetch("http://localhost:3001/api/chat", {
          method: "POST",
          headers: { "Content-Type": "application/json" },
          body: JSON.stringify({ model: selectedModel, messages: history }),
          signal: abortRef.current.signal,
        });
        if (!resp.ok) throw new Error(`HTTP ${resp.status}`);

        const reader = resp.body!.getReader();
        const decoder = new TextDecoder();
        while (true) {
          const { done, value } = await reader.read();
          if (done) break;
          // stream: true keeps multi-byte characters intact across reads
          const lines = decoder.decode(value, { stream: true }).split("\n");
          for (const line of lines) {
            if (!line.startsWith("data: ")) continue;
            const payload = line.slice(6);
            if (payload === "[DONE]") break;
            let parsed: { content?: string; error?: string } | null = null;
            try {
              parsed = JSON.parse(payload);
            } catch {
              continue; // ignore parse errors (likely an incomplete chunk)
            }
            // Surface server-side errors instead of swallowing them
            if (parsed?.error) throw new Error(parsed.error);
            if (parsed?.content) {
              const text = parsed.content;
              // Append to the assistant message character by character
              setMessages((prev) =>
                prev.map((m) =>
                  m.id === assistantMsg.id
                    ? { ...m, content: m.content + text }
                    : m
                )
              );
            }
          }
        }
      } catch (err: any) {
        if (err.name !== "AbortError") {
          setMessages((prev) =>
            prev.map((m) =>
              m.id === assistantMsg.id
                ? { ...m, content: "Request failed: " + err.message }
                : m
            )
          );
        }
      } finally {
        setIsStreaming(false);
      }
    },
    [messages, selectedModel, isStreaming]
  );

  const stopStreaming = () => {
    abortRef.current?.abort();
    setIsStreaming(false);
  };
  const clearHistory = () => setMessages([]);

  return { messages, isStreaming, selectedModel, setSelectedModel, sendMessage, stopStreaming, clearHistory };
}
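One caveat in the read loop above: a network read can end in the middle of a `data:` line, and the half-frame is then dropped by the parse guard. Buffering the trailing partial line across reads fixes this; a sketch (the `SSEBuffer` class is my own, not in the project):

```javascript
// Accumulates stream text and emits only complete lines;
// the trailing partial line waits for the next chunk.
class SSEBuffer {
  constructor() {
    this.partial = "";
  }
  push(text) {
    const combined = this.partial + text;
    const lines = combined.split("\n");
    this.partial = lines.pop(); // last element is incomplete (or "")
    return lines;
  }
}

const buf = new SSEBuffer();
console.log(buf.push('data: {"con'));    // [] — nothing complete yet
console.log(buf.push('tent":"Hi"}\n'));  // ['data: {"content":"Hi"}']
```

In the hook, you would feed each decoded chunk into `push()` and iterate over the complete lines it returns.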
Model selector
typescript
// client/src/components/ModelSelector.tsx
interface Model {
  id: string;
  label: string;
  badge: string;
}

interface Props {
  models: Model[];
  selected: string;
  onChange: (id: string) => void;
  disabled: boolean;
}

export function ModelSelector({ models, selected, onChange, disabled }: Props) {
  return (
    <select
      value={selected}
      onChange={(e) => onChange(e.target.value)}
      disabled={disabled}
      className="model-selector"
    >
      {models.map((m) => (
        <option key={m.id} value={m.id}>
          {m.label}{m.badge ? ` · ${m.badge}` : ""}
        </option>
      ))}
    </select>
  );
}
Message list: Markdown + code highlighting
typescript
// client/src/components/ChatWindow.tsx
import ReactMarkdown from "react-markdown";
import { Prism as SyntaxHighlighter } from "react-syntax-highlighter";
import { oneDark } from "react-syntax-highlighter/dist/esm/styles/prism";
import type { Message } from "../hooks/useChat";

interface Props {
  messages: Message[];
  isStreaming: boolean;
}

export function ChatWindow({ messages, isStreaming }: Props) {
  return (
    <div className="chat-window">
      {messages.map((msg) => (
        <div key={msg.id} className={`message message--${msg.role}`}>
          <div className="message__meta">
            {msg.role === "assistant" ? (msg.model ?? "AI") : "You"}
          </div>
          <div className="message__body">
            {msg.role === "assistant" ? (
              <ReactMarkdown
                components={{
                  code({ node, inline, className, children, ...props }: any) {
                    const lang = /language-(\w+)/.exec(className || "")?.[1];
                    return !inline && lang ? (
                      <SyntaxHighlighter
                        style={oneDark}
                        language={lang}
                        PreTag="div"
                        {...props}
                      >
                        {String(children).replace(/\n$/, "")}
                      </SyntaxHighlighter>
                    ) : (
                      <code className={className} {...props}>
                        {children}
                      </code>
                    );
                  },
                }}
              >
                {msg.content}
              </ReactMarkdown>
            ) : (
              <p>{msg.content}</p>
            )}
            {/* Show a blinking cursor while streaming */}
            {isStreaming && msg.role === "assistant" && msg === messages.at(-1) && (
              <span className="cursor">▌</span>
            )}
          </div>
        </div>
      ))}
    </div>
  );
}
Input box
typescript
// client/src/components/MessageInput.tsx
import { useState, useRef, KeyboardEvent } from "react";

interface Props {
  onSend: (text: string) => void;
  onStop: () => void;
  isStreaming: boolean;
}

export function MessageInput({ onSend, onStop, isStreaming }: Props) {
  const [text, setText] = useState("");
  const textareaRef = useRef<HTMLTextAreaElement>(null);

  const handleSend = () => {
    if (!text.trim()) return;
    onSend(text);
    setText("");
    // Reset the textarea height
    if (textareaRef.current) textareaRef.current.style.height = "auto";
  };

  const handleKeyDown = (e: KeyboardEvent<HTMLTextAreaElement>) => {
    // Enter sends, Shift+Enter inserts a newline
    if (e.key === "Enter" && !e.shiftKey) {
      e.preventDefault();
      handleSend();
    }
  };

  const handleInput = () => {
    const el = textareaRef.current;
    if (el) {
      el.style.height = "auto";
      el.style.height = Math.min(el.scrollHeight, 200) + "px"; // cap at 200px, then scroll
    }
  };

  return (
    <div className="input-area">
      <textarea
        ref={textareaRef}
        value={text}
        onChange={(e) => setText(e.target.value)}
        onKeyDown={handleKeyDown}
        onInput={handleInput}
        placeholder="Type a message... (Enter to send, Shift+Enter for a newline)"
        rows={1}
        disabled={isStreaming}
      />
      {isStreaming ? (
        <button onClick={onStop} className="btn btn--stop">Stop</button>
      ) : (
        <button onClick={handleSend} disabled={!text.trim()} className="btn btn--send">
          Send
        </button>
      )}
    </div>
  );
}
Assembling the App
typescript
// client/src/App.tsx
import { useEffect, useState } from "react";
import { useChat } from "./hooks/useChat";
import { ChatWindow } from "./components/ChatWindow";
import { MessageInput } from "./components/MessageInput";
import { ModelSelector } from "./components/ModelSelector";
import "./App.css";

interface Model {
  id: string;
  label: string;
  badge: string;
}

export default function App() {
  const { messages, isStreaming, selectedModel, setSelectedModel, sendMessage, stopStreaming, clearHistory } = useChat();
  const [models, setModels] = useState<Model[]>([]);

  useEffect(() => {
    fetch("http://localhost:3001/api/models")
      .then((r) => r.json())
      .then(setModels);
  }, []);

  return (
    <div className="app">
      <header className="app-header">
        <h1>AI Chat</h1>
        <div className="header-controls">
          <ModelSelector
            models={models}
            selected={selectedModel}
            onChange={setSelectedModel}
            disabled={isStreaming}
          />
          <button onClick={clearHistory} disabled={isStreaming} className="btn btn--ghost">
            Clear chat
          </button>
        </div>
      </header>
      <main className="chat-container">
        {messages.length === 0 ? (
          <div className="empty-state">Pick a model and start chatting</div>
        ) : (
          <ChatWindow messages={messages} isStreaming={isStreaming} />
        )}
      </main>
      <footer className="input-footer">
        <MessageInput onSend={sendMessage} onStop={stopStreaming} isStreaming={isStreaming} />
      </footer>
    </div>
  );
}
The core of multi-model switching
All the magic is in this one spot:
javascript
// server/index.js
const ai = new OpenAI({
  apiKey: process.env.THEROUTER_API_KEY,
  baseURL: "https://api.therouter.ai/v1", // ← points at TheRouter, not OpenAI
});
The frontend sends model: "anthropic/claude-sonnet-4"; the backend passes it straight through to TheRouter, which routes the request to the corresponding upstream provider. Switching models requires no code changes and no juggling of multiple API keys.
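Because the backend forwards model verbatim, any string a client sends goes straight to TheRouter and gets billed. Validating against the MODELS allow-list first is a cheap guard; a sketch with the list inlined so it is self-contained (in the real server you would reuse the MODELS constant):

```javascript
// Allow-list check against the models the server advertises.
// Inlined subset of MODELS from server/index.js for this sketch.
const MODELS = [
  { id: "anthropic/claude-sonnet-4" },
  { id: "openai/gpt-4o" },
  { id: "deepseek/deepseek-chat" },
];

function isAllowedModel(id) {
  return MODELS.some((m) => m.id === id);
}

// In the /api/chat handler, before calling TheRouter:
// if (!isAllowedModel(model)) return res.status(400).json({ error: "unknown model" });
console.log(isAllowedModel("openai/gpt-4o"));  // true
console.log(isAllowedModel("evil/whatever"));  // false
```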
Extras
Persisting chat history
typescript
// Add localStorage persistence in useChat.ts
// (add useEffect to the react import at the top of the file)
useEffect(() => {
  const saved = localStorage.getItem("chat-history");
  if (saved) setMessages(JSON.parse(saved));
}, []);

useEffect(() => {
  if (messages.length > 0) {
    localStorage.setItem("chat-history", JSON.stringify(messages));
  }
}, [messages]);
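JSON.parse(saved) throws on a corrupted or hand-edited localStorage entry, which would crash the hook on mount. A defensive loader is safer; a sketch (`loadHistory` is a hypothetical helper, not part of the project):

```javascript
// Safely restore persisted history; fall back to [] on missing or bad data.
// Hypothetical helper: call as setMessages(loadHistory(localStorage.getItem("chat-history"))).
function loadHistory(raw) {
  if (!raw) return [];
  try {
    const parsed = JSON.parse(raw);
    return Array.isArray(parsed) ? parsed : [];
  } catch {
    return [];
  }
}

console.log(loadHistory(null));        // []
console.log(loadHistory("{not json")); // []
console.log(loadHistory('[{"id":"1","role":"user","content":"hi"}]').length); // 1
```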
Streaming cursor CSS
css
/* App.css */
.cursor {
  display: inline-block;
  animation: blink 1s step-end infinite;
  color: #6366f1;
}

@keyframes blink {
  0%, 100% { opacity: 1; }
  50% { opacity: 0; }
}
Running locally
bash
# backend
cd server
node index.js

# frontend (in another terminal)
cd client
npm run dev
Open http://localhost:5173 and the chat UI should appear.
Deployment
Frontend → Vercel
bash
cd client
npm run build
npx vercel --prod
In the Vercel dashboard, set the environment variable VITE_API_URL=https://your-backend.railway.app, and replace the hard-coded localhost:3001 in the code with that variable.
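One way to make that swap mechanical is a single resolver with a local fallback; a sketch (`apiBase` is a hypothetical helper, and in the app you would pass Vite's import.meta.env):

```javascript
// Resolve the backend base URL: env var in production, local dev server otherwise.
// Hypothetical helper; in the app, call apiBase(import.meta.env).
function apiBase(env) {
  return env.VITE_API_URL || "http://localhost:3001";
}

console.log(apiBase({ VITE_API_URL: "https://your-backend.railway.app" }));
console.log(apiBase({})); // http://localhost:3001
```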
Backend → Railway
bash
cd server
# Create a new project in the Railway dashboard and connect your GitHub repo
# Set the THEROUTER_API_KEY environment variable
Railway auto-detects package.json and deploys it: zero ops, with a free tier.
Summary
The app's core flow:
- The user picks a model and types a message in the frontend
- React sends a fetch request to Express, and the connection stays open
- Express calls TheRouter through the openai SDK and receives streaming chunks
- SSE pushes the chunks to the frontend in real time
- React appends them to the message character by character and renders with react-markdown
The code is compact, but every piece uses a production-ready pattern: SSE streaming, AbortController cancellation, an auto-growing textarea, Markdown code highlighting. These details are what give the user experience its polish.
From here you can keep extending: custom system prompts, image upload (multimodal), chat export, user authentication... the core skeleton stays the same; features stack on top.
TheRouter signup: therouter.ai. Supports Alipay top-up, pay-as-you-go billing, no monthly fee.