Spring Cloud + AI: Intelligent Routing, Fault Self-Healing, and Log Analysis in a Microservice Architecture

In the cloud-native era, the complexity of microservice architectures brings three major pain points: routing decisions, failure recovery, and log troubleshooting. Embedding AI capabilities into the Spring Cloud ecosystem can significantly improve a system's adaptiveness and operational efficiency. This article covers three scenarios, intelligent routing, fault self-healing, and intelligent log analysis, providing a complete architecture design and code implementation for each.


1. Overall Architecture

[Architecture diagram] Client requests enter Spring Cloud Gateway, where an AI routing policy distributes traffic across Service A, B, and C. Services register with the Nacos registry and report metrics to Prometheus + Grafana; real-time metrics and service state feed an AI decision engine, which returns routing weights to the gateway and circuit-breaking commands to Resilience4j. Service logs are shipped by Filebeat/Fluentd into the ELK Stack, where an AI log-analysis module inspects anomalous logs and sends alerts and reports to the operations team.

2. AI-Enhanced Technology Stack Selection

[Pie chart] Approximate share of each module in the Spring Cloud + AI stack: Spring Cloud Gateway 20%, Resilience4j fault tolerance 20%, Nacos registry/config 15%, AI inference service (Python) 15%, ELK logging 15%, Prometheus monitoring 10%, message queue (Kafka) 5%.

| Module | Technology | AI Enhancement |
|---|---|---|
| Service gateway | Spring Cloud Gateway | Intelligent routing based on real-time metrics |
| Service registry | Nacos | AI-predicted node health |
| Fault tolerance | Resilience4j | AI-driven dynamic circuit-breaker thresholds |
| Config center | Nacos Config | AI-recommended optimal configuration |
| Log collection | Filebeat + ELK | AI anomaly detection and root-cause analysis |
| Messaging | Kafka / RabbitMQ | AI event correlation analysis |
| AI service | Python (FastAPI + vLLM) | LLM inference API |

3. Intelligent Routing: AI-Based Dynamic Traffic Scheduling

3.1 Routing Decision Flow

1. A request arrives at the Gateway and a GlobalFilter intercepts it.
2. The filter queries the AI routing service, which scores instances on response time, error rate, and load.
3. The AI service computes the optimal node and the gateway sets the routing weights.
4. An instance is picked by weighted random selection and the request is forwarded.
5. The routing decision is logged.

Fallback: when the AI service is unavailable, routing degrades to the default round-robin strategy.
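
The weighted selection with its round-robin-style fallback can be sketched as follows (illustrative Python; the function and parameter names are hypothetical, not from the article's codebase):

```python
import random


def pick_instance(instances, weights=None, rng=random):
    """Pick an instance using AI-supplied weights; fall back to a uniform
    choice when the weights are missing or unusable (AI service down)."""
    if not instances:
        return None
    if not weights or len(weights) != len(instances) or sum(weights) <= 0:
        # Fallback path: behave like a plain load balancer
        return rng.choice(instances)
    # Weighted random selection proportional to the AI-assigned weights
    return rng.choices(instances, weights=weights, k=1)[0]
```

With weights `[1, 0]` the first instance is always chosen; with no weights the choice is uniform.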

3.2 Custom AI Routing Filter

java
/**
 * AI-enhanced dynamic routing filter.
 * Decides the optimal routing target via an AI model, based on real-time
 * metrics of the backend services.
 */
@Component
@Slf4j
public class AiRoutingGlobalFilter implements GlobalFilter, Ordered {

    private final RestTemplate restTemplate;
    private final LoadBalancerClient loadBalancer;

    @Value("${ai.routing.service-url:http://localhost:5001/route/decide}")
    private String aiRouteServiceUrl;

    @Value("${ai.routing.enabled:true}")
    private boolean aiRoutingEnabled;

    public AiRoutingGlobalFilter(RestTemplate restTemplate,
                                  LoadBalancerClient loadBalancer) {
        this.restTemplate = restTemplate;
        this.loadBalancer = loadBalancer;
    }

    @Override
    public Mono<Void> filter(ServerWebExchange exchange, GatewayFilterChain chain) {
        if (!aiRoutingEnabled) {
            return chain.filter(exchange);
        }

        String serviceId = extractServiceId(exchange);

        try {
            // 1. Call the AI routing decision service
            RouteDecisionRequest request = new RouteDecisionRequest(
                serviceId,
                exchange.getRequest().getPath().value(),
                extractClientInfo(exchange)
            );

            ResponseEntity<RouteDecisionResponse> response = restTemplate.postForEntity(
                aiRouteServiceUrl, request, RouteDecisionResponse.class
            );

            RouteDecisionResponse decision = response.getBody();
            if (decision != null && decision.getPreferredInstance() != null) {
                // 2. Inject the AI-recommended instance into the request attributes
                exchange.getAttributes().put(
                    "ai_preferred_instance",
                    decision.getPreferredInstance()
                );
                exchange.getAttributes().put(
                    "ai_route_confidence",
                    decision.getConfidence()
                );
            }
        } catch (Exception e) {
            // Degrade: use the default load-balancing strategy when the AI service is unavailable
            log.warn("AI routing service call failed, falling back to default strategy: {}", e.getMessage());
        }

        return chain.filter(exchange);
    }

    @Override
    public int getOrder() {
        return Ordered.HIGHEST_PRECEDENCE + 1000;
    }
}

3.3 AI Routing Decision Service (Python Side)

python
# ai_route_service.py
import time
from typing import Optional

import requests
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="AI Routing Decision Service")

# Prometheus metrics query endpoint
PROMETHEUS_URL = "http://localhost:9090/api/v1/query"


class RouteRequest(BaseModel):
    service_id: str
    request_path: str
    client_ip: str = ""


class RouteResponse(BaseModel):
    preferred_instance: Optional[str] = None
    confidence: float = 0.0
    reason: str = ""
    latency_ms: float = 0.0


def query_prometheus(query: str) -> list:
    """Query real-time metrics from Prometheus; returns the result list."""
    resp = requests.get(
        PROMETHEUS_URL,
        params={"query": query},
        timeout=5,
    )
    return resp.json().get("data", {}).get("result", [])


@app.post("/route/decide", response_model=RouteResponse)
async def decide_route(req: RouteRequest):
    """
    基于实时指标 + AI 模型决策最优路由实例。
    评估维度:响应时间、错误率、CPU 使用率、连接数。
    """
    start = time.perf_counter()

    # Query each instance's P99 response time
    latency_results = query_prometheus(
        f'histogram_quantile(0.99, '
        f'sum(rate(http_server_requests_seconds_bucket'
        f'{{service="{req.service_id}"}}[5m])) by (le, instance))'
    )

    # Query each instance's error rate
    error_results = query_prometheus(
        f'sum(rate(http_server_requests_seconds_count'
        f'{{service="{req.service_id}",status=~"5.."}}[5m])) '
        f'by (instance) '
        f'/ sum(rate(http_server_requests_seconds_count'
        f'{{service="{req.service_id}"}}[5m])) by (instance)'
    )

    # Query CPU usage
    cpu_results = query_prometheus(
        f'process_cpu_usage{{service="{req.service_id}"}}'
    )

    # Combined score: latency weight 0.4, error rate weight 0.35, CPU weight 0.25
    scores = {}
    for item in latency_results:
        instance = item["metric"].get("instance", "")
        latency = float(item["value"][1])
        scores[instance] = scores.get(instance, 0) + latency * 0.4

    for item in error_results:
        instance = item["metric"].get("instance", "")
        error_rate = float(item["value"][1])
        scores[instance] = scores.get(instance, 0) + error_rate * 0.35

    for item in cpu_results:
        instance = item["metric"].get("instance", "")
        cpu = float(item["value"][1])
        scores[instance] = scores.get(instance, 0) + cpu * 0.25

    # Pick the instance with the lowest (best) score
    if scores:
        best_instance = min(scores, key=scores.get)
        best_score = scores[best_instance]
        confidence = max(0.0, 1.0 - best_score)
    else:
        best_instance = None
        confidence = 0.0

    latency = (time.perf_counter() - start) * 1000

    return RouteResponse(
        preferred_instance=best_instance,
        confidence=round(confidence, 4),
        reason=f"综合评分: {scores}" if scores else "无可用指标",
        latency_ms=round(latency, 2),
    )
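
One caveat about the scoring above: it mixes raw units (seconds of latency against error and CPU ratios), so the derived confidence can leave the [0, 1] range. A min-max normalization pass, sketched here with hypothetical helper names, keeps the three terms comparable before weighting:

```python
def normalize(values):
    """Min-max normalize a {instance: value} map into [0, 1]."""
    if not values:
        return {}
    lo, hi = min(values.values()), max(values.values())
    if hi == lo:
        return {k: 0.0 for k in values}
    return {k: (v - lo) / (hi - lo) for k, v in values.items()}


def combined_score(latency, errors, cpu, weights=(0.4, 0.35, 0.25)):
    """Weighted sum of normalized metrics per instance; lower is better."""
    nl, ne, nc = normalize(latency), normalize(errors), normalize(cpu)
    instances = set(latency) | set(errors) | set(cpu)
    return {
        i: weights[0] * nl.get(i, 0.0)
         + weights[1] * ne.get(i, 0.0)
         + weights[2] * nc.get(i, 0.0)
        for i in instances
    }
```

Because every normalized term lies in [0, 1], the combined score does too, and `1.0 - best_score` is a well-behaved confidence.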

3.4 Gateway Route Configuration

yaml
# application.yml
server:
  port: 8080

spring:
  cloud:
    gateway:
      routes:
        - id: service-a
          uri: lb://service-a
          predicates:
            - Path=/api/a/**
          filters:
            - name: Retry
              args:
                retries: 3
                statuses: BAD_GATEWAY, GATEWAY_TIMEOUT
                backoff:
                  firstBackoff: 100ms
                  maxBackoff: 500ms

        - id: service-b
          uri: lb://service-b
          predicates:
            - Path=/api/b/**

# AI routing configuration
ai:
  routing:
    enabled: true
    service-url: http://localhost:5001/route/decide
    fallback-strategy: round-robin
    cache-ttl: 10s
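
The `cache-ttl: 10s` setting implies that route decisions should be reused briefly rather than calling the AI service on every request. A minimal TTL-cache sketch (Python for illustration only; on the Java gateway side a library such as Caffeine would typically fill this role):

```python
import time


class TtlCache:
    """Tiny TTL cache for route decisions, mirroring the cache-ttl setting."""

    def __init__(self, ttl_seconds=10.0, clock=time.monotonic):
        self.ttl = ttl_seconds
        self.clock = clock  # injectable for testing
        self._store = {}

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, expires = entry
        if self.clock() >= expires:
            del self._store[key]  # expired: evict and miss
            return None
        return value

    def put(self, key, value):
        self._store[key] = (value, self.clock() + self.ttl)
```

Keyed by service ID, this bounds the AI service call rate to one per service per TTL window.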

4. Fault Self-Healing: AI-Driven Dynamic Circuit Breaking and Recovery

4.1 Self-Healing Flow

The circuit breaker cycles through the usual Closed/Open states, with AI in the decision loop:

1. Service calls pass through a Resilience4j circuit breaker; normal calls go straight through.
2. When the error rate exceeds the threshold, the AI analyzes the anomaly pattern.
3. If the failure is systemic, a global circuit break is triggered; otherwise only the problematic instances are broken, and blocked calls fail fast with AI-generated fallback responses.
4. The AI recommends a recovery strategy, and traffic is restored gradually for validation.
5. If validation passes, full traffic is restored; otherwise the breaker stays open.
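
The gradual traffic restoration described above can be sketched as a ramp that advances only while the observed error rate stays healthy (illustrative; `check_error_rate` is a hypothetical probe, not part of the article's codebase):

```python
def progressive_recovery(check_error_rate, steps=(0.05, 0.25, 0.50, 1.00),
                         max_error_rate=0.05):
    """
    Gradually restore traffic to a recovering instance.
    check_error_rate(fraction) -> error rate observed at that traffic share.
    Returns the traffic fraction finally allowed (1.0 = fully recovered).
    """
    allowed = 0.0
    for fraction in steps:
        if check_error_rate(fraction) > max_error_rate:
            return allowed  # roll back to the last healthy level
        allowed = fraction
    return allowed
```

A service that degrades once traffic passes 25% would be held at that level rather than fully restored.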

4.2 Dynamic Circuit-Breaker Configuration Management

java
/**
 * AI-driven dynamic circuit-breaker configuration manager.
 * Adjusts Resilience4j circuit-breaker parameters based on AI analysis results.
 */
@Component
@Slf4j
@RequiredArgsConstructor
public class AiCircuitBreakerManager {

    private final CircuitBreakerRegistry circuitBreakerRegistry;
    private final RestTemplate restTemplate;

    @Value("${ai.circuit-breaker.service-url:http://localhost:5001/circuit-breaker/config}")
    private String aiConfigUrl;

    @Scheduled(fixedRate = 30_000) // adjust every 30 seconds
    public void adjustCircuitBreakerConfig() {
        for (String name : circuitBreakerRegistry.getAllCircuitBreakers()
                .stream().map(CircuitBreaker::getName).toList()) {

            CircuitBreaker cb = circuitBreakerRegistry.circuitBreaker(name);
            CircuitBreaker.Metrics metrics = cb.getMetrics();

            // Collect current metrics
            CircuitBreakerMetrics snapshot = new CircuitBreakerMetrics(
                name,
                metrics.getFailureRate(),
                metrics.getSlowCallRate(),
                metrics.getNumberOfSuccessfulCalls(),
                metrics.getNumberOfFailedCalls(),
                metrics.getNumberOfBufferedCalls()
            );

            try {
                // Ask the AI service for a recommended circuit-breaker configuration
                ResponseEntity<AiCircuitBreakerConfig> response = restTemplate.postForEntity(
                    aiConfigUrl, snapshot, AiCircuitBreakerConfig.class
                );

                AiCircuitBreakerConfig recommended = response.getBody();
                if (recommended != null && recommended.confidence() > 0.8) {
                    applyConfig(name, recommended);
                }
            } catch (Exception e) {
                log.warn("AI 熔断配置服务调用失败: {}", e.getMessage());
            }
        }
    }

    private void applyConfig(String name, AiCircuitBreakerConfig config) {
        CircuitBreakerConfig newConfig = CircuitBreakerConfig.custom()
            .failureRateThreshold(config.failureRateThreshold())
            .slowCallRateThreshold(config.slowCallRateThreshold())
            .slowCallDurationThreshold(Duration.ofMillis(config.slowCallDurationMs()))
            .waitDurationInOpenState(Duration.ofMillis(config.waitDurationInOpenMs()))
            .permittedNumberOfCallsInHalfOpenState(config.permittedCallsInHalfOpen())
            .slidingWindowType(CircuitBreakerConfig.SlidingWindowType.COUNT_BASED)
            .slidingWindowSize(config.slidingWindowSize())
            .build();

        // Registry#replace expects a CircuitBreaker instance, not a config object,
        // so remove the old breaker and re-create it under the same name
        circuitBreakerRegistry.remove(name);
        circuitBreakerRegistry.circuitBreaker(name, newConfig);
        log.info("Updated circuit breaker [{}] config: {}", name, config);
    }
}
}
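
The AI recommendation itself is a black box in this article. As a placeholder, a simple heuristic mapping an observed failure rate to a `failureRateThreshold` shows the shape such a policy could take (the numbers are illustrative, not tuned):

```python
def recommend_failure_threshold(observed_failure_rate, floor=20.0, ceiling=80.0):
    """
    Heuristic stand-in for the AI recommendation: tighten the breaker's
    failure-rate threshold as observed failures climb, relax it when healthy.
    Input and output are percentages, matching Resilience4j's failureRateThreshold.
    """
    if observed_failure_rate > 10.0:
        return floor      # failing badly: trip the breaker aggressively
    if observed_failure_rate < 1.0:
        return ceiling    # healthy: tolerate transient blips
    # Linear interpolation between the two regimes
    ratio = (10.0 - observed_failure_rate) / 9.0
    return round(floor + (ceiling - floor) * ratio, 1)
```

A real implementation would also weigh slow-call rates and historical incident data, which is exactly what the AI service abstracts away.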

4.3 AI Fallback Response Service

java
/**
 * AI-enhanced service degradation handler.
 * Calls an LLM to generate a meaningful fallback response when the circuit opens.
 */
@Component
@RequiredArgsConstructor
public class AiFallbackHandler {

    private final RestTemplate restTemplate;

    @Value("${ai.fallback.service-url:http://localhost:5001/fallback/generate}")
    private String fallbackServiceUrl;

    public ResponseEntity<String> handleFallback(String serviceName,
                                                  String path,
                                                  Throwable throwable) {
        FallbackRequest request = new FallbackRequest(
            serviceName, path, throwable.getClass().getSimpleName(),
            throwable.getMessage()
        );

        try {
            ResponseEntity<String> response = restTemplate.postForEntity(
                fallbackServiceUrl, request, String.class
            );
            return ResponseEntity.ok()
                .contentType(MediaType.APPLICATION_JSON)
                .body(response.getBody());
        } catch (Exception e) {
            // The AI service is also unavailable; return a static fallback
            return ResponseEntity.status(HttpStatus.SERVICE_UNAVAILABLE)
                .body("""
                    {
                      "code": 503,
                      "message": "服务暂时不可用,请稍后重试",
                      "service": "%s",
                      "timestamp": "%s"
                    }
                    """.formatted(serviceName, Instant.now()));
        }
    }
}

5. Intelligent Log Analysis: AI-Driven Anomaly Detection and Root-Cause Localization

5.1 Log Analysis Pipeline

[Pipeline] Microservice log output is collected by Filebeat, cleaned and transformed by Logstash, stored in Elasticsearch, and visualized in Kibana. The AI log-analysis module runs anomaly detection over the stored logs: anomalous entries go through root-cause analysis and into a generated report, with alert notifications sent via DingTalk/email/Slack; normal entries update the detection baseline.

5.2 Log Anomaly Detection (Python AI Module)

python
# ai_log_analyzer.py
import re
from typing import Optional

import requests
from elasticsearch import Elasticsearch
from fastapi import FastAPI
from pydantic import BaseModel

app = FastAPI(title="AI Log Analyzer")

es = Elasticsearch(["http://localhost:9200"])

# LLM inference service endpoint
LLM_API_URL = "http://localhost:8000/v1/chat"


class LogEntry(BaseModel):
    timestamp: str
    service: str
    level: str
    message: str
    trace_id: str = ""
    span_id: str = ""
    stack_trace: str = ""


class AnalysisResult(BaseModel):
    is_anomaly: bool
    severity: str  # critical, warning, info
    category: str  # OOM, NetworkError, Timeout, etc.
    root_cause: str
    suggestion: str
    confidence: float


# ---------- Known anomaly pattern rule library ----------
ANOMALY_PATTERNS = [
    (r"OutOfMemoryError|OOM|heap space", "OOM", "critical"),
    (r"Connection refused|ConnectTimeoutException", "NetworkError", "critical"),
    (r"SocketTimeoutException|Read timed out", "Timeout", "warning"),
    (r"DeadlockLoserDataAccessException", "Deadlock", "critical"),
    (r"CircuitBreakerOpenException", "CircuitBreakerOpen", "warning"),
    (r"TooManyRequests|RateLimitExceeded", "RateLimit", "warning"),
    (r"StackOverflowError", "StackOverflow", "critical"),
    (r"Segmentation fault|SIGSEGV", "SegFault", "critical"),
]


def rule_based_detect(log: LogEntry) -> Optional[AnalysisResult]:
    """基于规则的快速异常检测。"""
    if log.level not in ("ERROR", "WARN", "FATAL"):
        return None

    for pattern, category, severity in ANOMALY_PATTERNS:
        if re.search(pattern, log.message + log.stack_trace, re.IGNORECASE):
            return AnalysisResult(
                is_anomaly=True,
                severity=severity,
                category=category,
                root_cause=f"规则匹配: {pattern}",
                suggestion="",
                confidence=0.9,
            )
    return None


def llm_root_cause_analysis(log: LogEntry) -> AnalysisResult:
    """调用 LLM 进行深度根因分析。"""
    prompt = f"""你是一位资深微服务运维专家。请分析以下异常日志,给出:
1. 异常类别
2. 严重程度(critical/warning/info)
3. 根因分析
4. 修复建议

日志信息:
- 时间: {log.timestamp}
- 服务: {log.service}
- 级别: {log.level}
- 消息: {log.message}
- TraceID: {log.trace_id}
- 堆栈: {log.stack_trace[:2000]}

请以 JSON 格式回答,字段: category, severity, root_cause, suggestion"""

    try:
        response = requests.post(
            LLM_API_URL,
            json={"prompt": prompt, "max_tokens": 1024, "temperature": 0.3},
            timeout=30,
        )
        result = response.json().get("response", "")
        # Parse the JSON returned by the LLM (simplified handling)
        return AnalysisResult(
            is_anomaly=True,
            severity="warning",
            category="Unknown",
            root_cause=result[:500],
            suggestion="请查看详细分析",
            confidence=0.7,
        )
    except Exception as e:
        return AnalysisResult(
            is_anomaly=True,
            severity="warning",
            category="AnalysisFailed",
            root_cause=f"LLM 分析失败: {str(e)}",
            suggestion="请人工排查",
            confidence=0.3,
        )


@app.post("/log/analyze", response_model=AnalysisResult)
async def analyze_log(log: LogEntry):
    """
    分析单条日志:
    1. 先用规则引擎快速匹配
    2. 规则未命中时调用 LLM 深度分析
    """
    # Layer 1: rule engine
    result = rule_based_detect(log)
    if result and result.confidence >= 0.8:
        return result

    # Layer 2: deep LLM analysis
    return llm_root_cause_analysis(log)


@app.post("/log/batch-analyze")
async def batch_analyze(service: str, minutes: int = 10):
    """
    批量分析指定服务最近 N 分钟的 ERROR 日志。
    """
    query = {
        "query": {
            "bool": {
                "must": [
                    {"term": {"service": service}},
                    {"term": {"level": "ERROR"}},
                    {
                        "range": {
                            "timestamp": {
                                "gte": f"now-{minutes}m",
                                "lte": "now",
                            }
                        }
                    },
                ]
            }
        },
        "size": 100,
        "sort": [{"timestamp": {"order": "desc"}}],
    }

    result = es.search(index="microservice-logs-*", body=query)
    hits = result["hits"]["hits"]

    anomalies = []
    for hit in hits:
        source = hit["_source"]
        log = LogEntry(**source)
        analysis = await analyze_log(log)
        if analysis.is_anomaly:
            anomalies.append(
                {
                    "log": source,
                    "analysis": analysis.model_dump(),
                }
            )

    return {
        "service": service,
        "total_errors": len(hits),
        "anomalies_found": len(anomalies),
        "details": anomalies,
    }
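
To exercise the rule layer in isolation, here is a standalone restatement of two entries from `ANOMALY_PATTERNS` (self-contained so it runs without the FastAPI module):

```python
import re

# Subset of the rule library above, restated for a standalone demo
PATTERNS = [
    (r"OutOfMemoryError|OOM|heap space", "OOM", "critical"),
    (r"SocketTimeoutException|Read timed out", "Timeout", "warning"),
]


def classify(message):
    """Return (category, severity) for the first matching rule, else None."""
    for pattern, category, severity in PATTERNS:
        if re.search(pattern, message, re.IGNORECASE):
            return category, severity
    return None
```

An OOM stack trace lands in the critical bucket immediately, with no LLM call; an unmatched message would fall through to layer 2.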

5.3 Spring Boot Log Output Configuration

xml
<!-- logback-spring.xml -->
<configuration>
    <!-- spring.application.name is not visible to Logback directly; bridge it via springProperty -->
    <springProperty scope="context" name="appName" source="spring.application.name" defaultValue="unknown"/>

    <appender name="JSON_CONSOLE" class="ch.qos.logback.core.ConsoleAppender">
        <encoder class="net.logstash.logback.encoder.LogstashEncoder">
            <includeMdc>true</includeMdc>
            <customFields>{"service":"${appName}","env":"${SPRING_PROFILES_ACTIVE:-default}"}</customFields>
        </encoder>
    </appender>

    <appender name="AI_ALERT" class="com.example.log.AiAlertAppender">
        <filter class="ch.qos.logback.classic.filter.ThresholdFilter">
            <level>ERROR</level>
        </filter>
    </appender>

    <root level="INFO">
        <appender-ref ref="JSON_CONSOLE"/>
        <appender-ref ref="AI_ALERT"/>
    </root>
</configuration>

5.4 Custom AI Alert Appender

java
/**
 * Custom Logback appender that streams ERROR-level log events to the AI
 * analysis module in real time. Note: Logback instantiates appenders from XML
 * itself, so the @Value field below is only populated if the appender is
 * registered as a Spring bean programmatically rather than declared in
 * logback-spring.xml.
 */
@Component
public class AiAlertAppender extends UnsynchronizedAppenderBase<ILoggingEvent> {

    private final RestTemplate restTemplate = new RestTemplate();

    @Value("${ai.log-analyzer.url:http://localhost:5001/log/analyze}")
    private String analyzerUrl;

    @Override
    protected void append(ILoggingEvent event) {
        if (!event.getLevel().isGreaterOrEqual(Level.ERROR)) {
            return;
        }

        Map<String, Object> logEntry = Map.of(
            "timestamp", Instant.ofEpochMilli(event.getTimeStamp()).toString(),
            "service", event.getLoggerName(),
            "level", event.getLevel().toString(),
            "message", event.getFormattedMessage(),
            "trace_id", MDC.get("traceId") != null ? MDC.get("traceId") : "",
            "stack_trace", getStackTrace(event)
        );

        try {
            restTemplate.postForEntity(analyzerUrl, logEntry, AnalysisResult.class);
        } catch (Exception e) {
            // Fail silently when the alerting module is unavailable; never block business logging
            addWarn("AI log analyzer service unavailable: " + e.getMessage());
        }
    }

    private String getStackTrace(ILoggingEvent event) {
        if (event.getThrowableProxy() == null) return "";
        // Arrays.stream works directly on the proxy array; join frames with newlines
        return Arrays.stream(event.getThrowableProxy().getStackTraceElementProxyArray())
            .map(Object::toString)
            .collect(Collectors.joining("\n"));
    }
}

6. Kubernetes Deployment Orchestration

6.1 Deployment Topology

[Pie chart] Estimated cluster resource allocation: Spring Cloud business services 35%, AI inference service (GPU) 25%, Elasticsearch + Logstash 15%, Prometheus + Grafana 10%, Nacos registry 5%, Kafka message queue 5%, other infrastructure 5%.

6.2 Core Deployment Manifests

yaml
# k8s/ai-inference-deployment.yaml
apiVersion: apps/v1
kind: Deployment
metadata:
  name: ai-inference-service
  namespace: ai-platform
spec:
  replicas: 2
  selector:
    matchLabels:
      app: ai-inference
  template:
    metadata:
      labels:
        app: ai-inference
    spec:
      containers:
        - name: inference
          image: registry.example.com/ai-inference:v2.0.0
          ports:
            - containerPort: 5001
          resources:
            limits:
              nvidia.com/gpu: 1
              memory: "16Gi"
            requests:
              memory: "8Gi"
          env:
            - name: LLM_API_URL
              value: "http://llm-service:8000/v1/chat"
            - name: PROMETHEUS_URL
              value: "http://prometheus:9090"
            - name: ELASTICSEARCH_URL
              value: "http://elasticsearch:9200"
          livenessProbe:
            httpGet:
              path: /health
              port: 5001
            initialDelaySeconds: 30
            periodSeconds: 15
          readinessProbe:
            httpGet:
              path: /health
              port: 5001
            initialDelaySeconds: 10
            periodSeconds: 5

---
apiVersion: v1
kind: Service
metadata:
  name: ai-inference-service
  namespace: ai-platform
spec:
  selector:
    app: ai-inference
  ports:
    - port: 5001
      targetPort: 5001
  type: ClusterIP

7. Evaluation and Monitoring

7.1 Key Metrics Dashboard

| Metric | Description | Target |
|---|---|---|
| Routing decision latency | Response time of the AI routing service | < 50 ms (P99) |
| Routing accuracy | Whether the AI-selected instance was optimal | > 90% |
| Fault detection time | Delay from fault occurrence to AI detection | < 30 s |
| Self-healing success rate | Share of AI-triggered recoveries that succeed | > 85% |
| Log analysis accuracy | Accuracy of AI root-cause analysis | > 80% |
| False positive rate | Share of anomalies flagged incorrectly | < 5% |
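
The false-positive and accuracy targets above can be validated against labelled incidents with standard confusion-matrix arithmetic (a small helper for illustration, not from the article's codebase):

```python
def detection_metrics(tp, fp, fn, tn):
    """Precision, recall, and false positive rate from confusion-matrix counts:
    tp/fp = anomalies flagged correctly/incorrectly, fn = missed anomalies,
    tn = normal logs correctly left alone."""
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    fpr = fp / (fp + tn) if fp + tn else 0.0
    return precision, recall, fpr
```

For example, 80 true detections, 4 false alarms, 20 misses, and 896 correctly ignored logs meet both the > 80% accuracy and < 5% false-positive targets.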

7.2 Grafana Dashboard Configuration

yaml
# grafana-dashboard-ai-metrics.yaml
apiVersion: v1
kind: ConfigMap
metadata:
  name: ai-metrics-dashboard
  namespace: monitoring
data:
  dashboard.json: |
    {
      "dashboard": {
        "title": "AI 增强微服务监控",
        "panels": [
          {
            "title": "AI 路由决策延迟",
            "type": "graph",
            "targets": [
              {
                "expr": "histogram_quantile(0.99, sum(rate(ai_route_decision_duration_seconds_bucket[5m])) by (le))"
              }
            ]
          },
          {
            "title": "AI 熔断器调整次数",
            "type": "stat",
            "targets": [
              {
                "expr": "sum(increase(ai_circuit_breaker_adjustments_total[1h]))"
              }
            ]
          },
          {
            "title": "日志异常检测量",
            "type": "graph",
            "targets": [
              {
                "expr": "sum(rate(ai_log_anomaly_detected_total[5m])) by (severity)"
              }
            ]
          }
        ]
      }
    }

8. Summary and Outlook

This article walked through the three core application scenarios for Spring Cloud + AI:

| Scenario | Traditional Pain Point | AI Enhancement |
|---|---|---|
| Intelligent routing | Static load balancing, blind to real load | Dynamic optimal-instance selection from multi-dimensional metrics |
| Fault self-healing | Fixed-threshold circuit breaking, high false/missed alarm rates | Dynamic thresholds plus AI-generated fallback responses |
| Log analysis | Manual troubleshooting, slow response, experience-dependent | Automatic anomaly detection, root-cause localization, fix suggestions |

Future directions:

  • AIOps platformization: abstract these AI operations capabilities into a shared platform that multiple business lines can adopt
  • Multi-modal analysis: correlate metrics, logs, and distributed traces across dimensions
  • Reinforcement learning: let the AI learn from historical operations data to continuously optimize routing and circuit-breaking policies
  • Edge deployment: push lightweight inference models to edge nodes for near-source decisions

