【Python高级工程与架构实战】项目二:事件驱动微服务拆分(分布式版)

目录

项目二:事件驱动微服务拆分(分布式版)

[2.1 服务拆分与数据隔离](#2.1 服务拆分与数据隔离)

[2.1.1 数据库按服务拆分:订单服务PostgreSQL、库存服务独立实例](#2.1.1 数据库按服务拆分:订单服务PostgreSQL、库存服务独立实例)

[2.1.2 API组合与BFF层:GraphQL Federation网关整合多服务查询](#2.1.2 API组合与BFF层:GraphQL Federation网关整合多服务查询)

[2.1.3 服务发现:Consul客户端健康检查与负载均衡](#2.1.3 服务发现:Consul客户端健康检查与负载均衡)

[2.1.4 配置中心:Etcd配置热更新与Python-watch集成](#2.1.4 配置中心:Etcd配置热更新与Python-watch集成)

[2.2 异步通信架构](#2.2 异步通信架构)

[2.2.1 Kafka主题设计:order-events、inventory-events分区策略与Key选择](#2.2.1 Kafka主题设计:order-events、inventory-events分区策略与Key选择)

[2.2.2 生产者端:幂等生产者配置与事务消息发送(EOS语义)](#2.2.2 生产者端:幂等生产者配置与事务消息发送(EOS语义))

[2.2.3 消费者组管理:订单服务消费库存事件、手动提交偏移量策略](#2.2.3 消费者组管理:订单服务消费库存事件、手动提交偏移量策略)

[2.2.4 Schema Registry:Avro格式定义与向前兼容演化规则](#2.2.4 Schema Registry:Avro格式定义与向前兼容演化规则)

[2.3 分布式事务实现](#2.3 分布式事务实现)

[2.3.1 Saga模式:订单-库存-支付编排型Saga实现(Orchestration)](#2.3.1 Saga模式:订单-库存-支付编排型Saga实现(Orchestration))

[2.3.2 补偿事务设计:库存回滚逻辑与订单状态机逆向转换](#2.3.2 补偿事务设计:库存回滚逻辑与订单状态机逆向转换)

[2.3.3 幂等消费者:基于唯一键去重与Redis幂等缓存](#2.3.3 幂等消费者:基于唯一键去重与Redis幂等缓存)

[2.3.4 死信队列(DLQ):消费失败重试机制与人工介入告警](#2.3.4 死信队列(DLQ):消费失败重试机制与人工介入告警)

[2.4 数据一致性保障](#2.4 数据一致性保障)

[2.4.1 Outbox模式:事务性发件箱表设计与CDC(Debezium)捕获](#2.4.1 Outbox模式:事务性发件箱表设计与CDC(Debezium)捕获)

[2.4.2 读取模型最终一致:订单查询服务物化视图异步构建](#2.4.2 读取模型最终一致:订单查询服务物化视图异步构建)

[2.4.3 分布式缓存同步:库存扣减缓存失效广播](#2.4.3 分布式缓存同步:库存扣减缓存失效广播)

[2.4.4 一致性校验:对账服务定时扫描状态不一致订单](#2.4.4 一致性校验:对账服务定时扫描状态不一致订单)

[2.5 可观测性体系](#2.5 可观测性体系)

[2.5.1 分布式追踪:OpenTelemetry跨服务Trace上下文传递(Jaeger集成)](#2.5.1 分布式追踪:OpenTelemetry跨服务Trace上下文传递(Jaeger集成))

[2.5.2 结构化日志:JSON格式统一与ELK Stack聚合分析](#2.5.2 结构化日志:JSON格式统一与ELK Stack聚合分析)

[2.5.3 健康检查端点:Kubernetes Liveness/Readiness探针实现](#2.5.3 健康检查端点:Kubernetes Liveness/Readiness探针实现)

[2.5.4 混沌测试:Chaos Monkey随机杀死服务容器验证容错](#2.5.4 混沌测试:Chaos Monkey随机杀死服务容器验证容错)

[2.1.1 数据库按服务拆分](#2.1.1 数据库按服务拆分)

[2.1.2 API组合与BFF层](#2.1.2 API组合与BFF层)

[2.1.3 服务发现](#2.1.3 服务发现)

[2.1.4 配置中心](#2.1.4 配置中心)

[2.2.1 Kafka主题设计](#2.2.1 Kafka主题设计)

[2.2.2 幂等生产者](#2.2.2 幂等生产者)

[2.2.3 消费者组管理](#2.2.3 消费者组管理)

[2.2.4 Schema Registry](#2.2.4 Schema Registry)

[2.3.1 Saga模式(Orchestration)](#2.3.1 Saga模式(Orchestration))

[2.3.2 补偿事务设计](#2.3.2 补偿事务设计)

[2.3.3 幂等消费者](#2.3.3 幂等消费者)

[2.3.4 死信队列(DLQ)](#2.3.4 死信队列(DLQ))

[2.4.1 Outbox模式与CDC](#2.4.1 Outbox模式与CDC)

[2.4.2 读取模型最终一致](#2.4.2 读取模型最终一致)

[2.4.3 分布式缓存同步](#2.4.3 分布式缓存同步)

[2.4.4 一致性校验](#2.4.4 一致性校验)

[2.5.1 分布式追踪](#2.5.1 分布式追踪)

[2.5.2 结构化日志](#2.5.2 结构化日志)

[2.5.3 健康检查端点](#2.5.3 健康检查端点)

[2.5.4 混沌测试](#2.5.4 混沌测试)


项目二:事件驱动微服务拆分(分布式版)

2.1 服务拆分与数据隔离

2.1.1 数据库按服务拆分:订单服务PostgreSQL、库存服务独立实例

数据库按服务拆分(Database per Service)模式构成了微服务数据隔离的基石。该模式强制每个微服务拥有独立的数据库实例,从根本上消除服务间的数据耦合。在订单服务与库存服务的拆分场景中,订单服务采用PostgreSQL作为主存储,库存服务则部署完全独立的PostgreSQL实例。这种物理隔离确保了服务自治性:订单Schema的变更不会影响库存Schema的演进,反之亦然。

服务边界与数据边界的一致性至关重要。订单服务应包含订单创建、状态流转、历史查询等所有与订单相关的数据操作,库存服务则独占SKU管理、库存扣减、补货预警等数据域。每个服务对其数据模型拥有完全所有权,包括索引优化、备份策略、复制拓扑等运维决策。这种模式虽然引入了数据一致性的挑战,但通过牺牲即时一致性换取了长期的可维护性与团队独立性。

2.1.2 API组合与BFF层:GraphQL Federation网关整合多服务查询

GraphQL Federation架构通过子图(Subgraph)组合机制解决了微服务场景下的API聚合难题。不同于传统的Schema拼接(Schema Stitching),Federation赋予每个微服务完整的Schema所有权,同时通过网关层(Gateway/Router)实现统一查询入口。订单服务、库存服务各自定义独立的子图Schema,使用@key指令标识跨服务实体(如Order、Product),网关负责解析实体关联并编排查询计划。

Backend-for-Frontend(BFF)层在Federation架构中承担着查询优化的职责。移动端与Web端对数据粒度需求各异,BFF层通过@require等指令控制字段获取范围,避免过度取数(Over-fetching)。网关执行查询时采用并行请求策略向各子图分发查询片段,利用DataLoader批处理机制消除N+1查询问题。实体解析过程中,网关首先定位主键实体,随后通过@external字段扩展跨服务属性,最终组装为统一的GraphQL响应。

2.1.3 服务发现:Consul客户端健康检查与负载均衡

Consul作为分布式服务发现与配置工具,通过集成健康检查机制实现智能流量分发。在微服务架构中,服务实例动态伸缩,Consul的DNS接口与HTTP API提供实时服务目录查询。健康检查分为多种类型:HTTP检查周期性探测服务健康端点,TCP检查验证端口连通性,脚本检查执行自定义诊断逻辑。只有通过健康检查的实例才会被纳入服务解析结果集。

负载均衡策略依托Consul的DNS响应轮询机制实现。当客户端查询order-service.service.consul时,Consul返回当前健康实例的随机化IP列表,操作系统DNS解析器或客户端负载均衡器在此基础上实施轮询或最少连接算法。对于需要更精细控制的场景,Consul Template可动态生成Nginx或HAProxy配置文件,实现基于服务状态的反向代理路由。这种模式确保故障实例在秒级时间内被流量剔除,配合Auto Scaling实现故障自愈。

2.1.4 配置中心:Etcd配置热更新与Python-watch集成

Etcd作为高可用的分布式键值存储,为微服务提供可靠的配置管理基础设施。其基于Raft共识算法实现线性一致性读写,通过Watch机制支持配置变更的实时推送。在Python生态中,etcd3客户端库实现了租约(Lease)机制用于服务注册会话保持,以及事务(Transaction)操作支持原子性比较-交换(Compare-and-Swap)逻辑。

配置热更新架构依赖Watch回调机制实现。Python应用启动时建立与Etcd的长连接,订阅特定前缀(如/config/order-service/)的变更事件。当运维人员通过Etcdctl或管理界面更新配置项时,Watch回调函数触发应用内的配置重载逻辑,无需重启进程即可生效。对于敏感配置,可结合事务接口实现乐观锁更新,确保并发修改的安全性。配置的层级结构通过前缀查询支持,便于按服务、环境、版本维度组织配置空间。

2.2 异步通信架构

2.2.1 Kafka主题设计:order-events、inventory-events分区策略与Key选择

Apache Kafka的主题分区策略直接影响事件消费的并行度与顺序保证。在订单事件(order-events)与库存事件(inventory-events)的设计中,分区键(Partition Key)的选择决定了事件的物理分布。对于订单创建、支付、发货等事件,通常以order_id作为分区键,确保同一订单的所有事件按产生顺序写入同一分区,消费者端按序处理。库存扣减事件则以sku_id或warehouse_id作为分区键,保证单个SKU的库存操作顺序性,同时实现跨SKU的并行消费。

分区数量设计需权衡吞吐量与顺序需求。订单主题可根据峰值TPS设置较多分区(如12-24分区),配合消费者组实现水平扩展。库存主题若存在热点SKU,可采用基于哈希的分区策略分散压力,或采用自定义分区器实现按区域/仓库的亲和性路由。副本因子(Replication Factor)通常设为3,确保在Broker故障时数据可用性。主题的保留策略(Retention Policy)依据业务特性配置,订单事件通常保留7-30天,支持审计与重放需求。
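上述分区键选择的核心是"同键同分区"。下面用一段纯Python草图近似这一路由逻辑(注意:真实Kafka默认分区器使用murmur2哈希,此处以md5近似仅作演示;order-1001等键值为假设数据):

```python
import hashlib

def partition_for(key: str, num_partitions: int) -> int:
    """分区键 -> 分区号(示意:真实Kafka使用murmur2,此处以md5近似)"""
    digest = hashlib.md5(key.encode("utf-8")).digest()
    return int.from_bytes(digest[:4], "big") % num_partitions

# 同一order_id的全部事件落入同一分区,从而保证消费端按产生顺序处理
events = [("order-1001", "OrderCreated"), ("order-1001", "OrderPaid"),
          ("order-2002", "OrderCreated"), ("order-1001", "OrderShipped")]

assignments = {}
for order_id, _event_type in events:
    assignments.setdefault(order_id, set()).add(partition_for(order_id, 12))
```

每个order_id在assignments中只会对应一个分区号,这正是"分区内有序"保证的来源。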

2.2.2 生产者端:幂等生产者配置与事务消息发送(EOS语义)

Kafka的Exactly-Once Semantics(EOS)通过幂等生产者(Idempotent Producer)与事务API的组合实现端到端恰好一次投递。幂等生产者通过enable.idempotence=true配置激活,Broker端为每个生产者实例分配唯一PID(Producer ID),结合序列号(Sequence Number)机制自动去重。该机制确保即使生产者遭遇网络超时并重试,同一消息也不会在日志中重复写入。

事务消息发送适用于跨分区、跨主题的原子性写入场景。生产者初始化时指定transactional.id,该标识符与生产者实例强绑定,确保会话连续性。事务边界通过beginTransaction()与commitTransaction()界定,期间发送的所有消息构成原子批次。若处理逻辑异常,调用abortTransaction()回滚未提交消息。事务协调器(Transaction Coordinator)维护事务日志,消费者通过isolation.level=read_committed配置仅读取已提交事务消息,避免脏读。

2.2.3 消费者组管理:订单服务消费库存事件、手动提交偏移量策略

消费者组(Consumer Group)机制实现了事件消费的负载均衡与容错。订单服务消费库存事件时,Kafka自动分配分区给消费者实例,确保同一分区仅由一个活跃消费者处理。手动偏移量提交(Manual Offset Commit)策略赋予应用精确控制消费进度的能力。在处理库存扣减成功事件时,业务逻辑执行完成且数据库事务提交后,方可调用commitSync()或commitAsync()确认消费位点。

偏移量提交策略的选择涉及一致性与性能的权衡。同步提交确保偏移量与业务状态原子性,但增加延迟;异步提交提升吞吐,但存在重复消费风险。对于订单状态机更新等关键操作,建议采用同步提交配合重试机制;对于日志聚合等非关键场景,可采用批量异步提交。再均衡(Rebalance)发生时,消费者通过再均衡监听器(Rebalance Listener)实现分区撤销前的偏移量刷盘,防止分区迁移导致的重复处理。

2.2.4 Schema Registry:Avro格式定义与向前兼容演化规则

Confluent Schema Registry为Kafka消息提供中心化Schema管理能力,支持Avro、Protobuf、JSON Schema等格式。Avro作为二进制序列化格式,通过Schema定义实现紧凑编码与强类型约束。事件Schema定义在独立仓库中版本化管理,生产者序列化前向Registry验证Schema兼容性,消费者反序列化时获取对应版本Schema,实现前后向兼容的数据演化。

兼容性规则约束Schema的演进方式。向前兼容(Forward Compatibility)要求用新Schema写入的数据能被旧消费者读取,允许新增字段、删除带默认值的字段;向后兼容(Backward Compatibility)要求用旧Schema写入的数据能被新消费者读取,允许删除字段、新增带默认值的字段。全兼容(Full Compatibility)同时满足双向要求。订单事件Schema演进时,新增字段(如delivery_estimate)应提供默认值,确保演化后的Schema双向兼容、旧版本消费者不会解析失败。Schema版本号自动递增,Registry拒绝破坏兼容性规则的变更提交。
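"新增字段附带默认值"是同时满足前后向兼容的演化方式。以下为该规则的简化校验草图(真实兼容性检查由Schema Registry服务端按配置的兼容级别执行;evolves_compatibly为本文假设的演示函数,delivery_estimate字段沿用上文示例):

```python
def evolves_compatibly(old_schema, new_schema):
    """简化检查:相对旧Schema新增的字段必须携带默认值"""
    old_fields = {f["name"] for f in old_schema["fields"]}
    return all("default" in f
               for f in new_schema["fields"]
               if f["name"] not in old_fields)

order_v1 = {"type": "record", "name": "OrderEvent", "fields": [
    {"name": "order_id", "type": "string"},
    {"name": "status", "type": "string"},
]}

# 合规演进:新增delivery_estimate并提供默认值
order_v2 = {"type": "record", "name": "OrderEvent", "fields": order_v1["fields"] + [
    {"name": "delivery_estimate", "type": ["null", "string"], "default": None},
]}

# 违规演进:新增必填字段且无默认值,Registry应拒绝提交
order_v2_bad = {"type": "record", "name": "OrderEvent", "fields": order_v1["fields"] + [
    {"name": "delivery_estimate", "type": "string"},
]}
```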

2.3 分布式事务实现

2.3.1 Saga模式:订单-库存-支付编排型Saga实现(Orchestration)

Saga模式通过长事务拆分与补偿机制解决分布式事务一致性难题。编排型Saga(Orchestration Saga)引入中央协调器(Saga Orchestrator)作为事务总指挥,订单-库存-支付流程中,协调器依次向订单服务发送创建指令、向库存服务发送预留指令、向支付服务发送扣款指令,各服务完成本地事务后向协调器汇报状态。这种集中式控制消除了服务间的隐式依赖,事务流程可视化程度显著提升。

协调器内部维护 Saga 实例的状态机,记录每个参与服务的执行状态。当订单创建成功但库存预留失败时,协调器触发补偿流程:向订单服务发送状态回滚指令,将订单标记为已取消。补偿事务的执行顺序与正向事务相反,确保已完成的操作被有序撤销。协调器本身需具备高可用设计,通常采用持久化状态存储(如数据库或事件日志),故障恢复后从断点继续 Saga 执行,避免事务悬挂。
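"正向依次执行、失败后逆序补偿"的编排逻辑可浓缩为如下草图。这是纯内存的最小示意:真实协调器需要将history持久化以支持断点恢复;各步骤函数均为假设的演示逻辑:

```python
class SagaStep:
    """Saga参与步骤:正向动作与对应的补偿动作"""
    def __init__(self, name, action, compensation):
        self.name, self.action, self.compensation = name, action, compensation

class SagaOrchestrator:
    """编排型Saga协调器示意:正向依次执行,失败则按逆序补偿已完成步骤"""
    def __init__(self, steps):
        self._steps = steps
        self.history = []   # 执行轨迹,生产实现应持久化到数据库或事件日志

    def execute(self) -> bool:
        completed = []
        for step in self._steps:
            try:
                step.action()
                completed.append(step)
                self.history.append((step.name, "completed"))
            except Exception:
                self.history.append((step.name, "failed"))
                for done in reversed(completed):   # 逆序撤销已完成操作
                    done.compensation()
                    self.history.append((done.name, "compensated"))
                return False
        return True

# 演示:订单创建成功,库存预留失败,触发订单取消补偿
state = {"order": None}

def create_order(): state["order"] = "CREATED"
def cancel_order(): state["order"] = "CANCELLED"
def reserve_stock(): raise RuntimeError("stock unavailable")  # 模拟预留失败
def release_stock(): pass

saga = SagaOrchestrator([
    SagaStep("create_order", create_order, cancel_order),
    SagaStep("reserve_stock", reserve_stock, release_stock),
])
ok = saga.execute()
```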

2.3.2 补偿事务设计:库存回滚逻辑与订单状态机逆向转换

补偿事务(Compensating Transaction)的设计遵循业务语义的可逆性原则。库存回滚逻辑并非简单地将库存数量加回,而是创建库存回滚记录(Rollback Record),记录原扣减操作的反向操作。这种审计友好的设计保留了完整操作轨迹,支持事后对账与审计追溯。库存服务接收到补偿指令时,需验证原扣减操作是否存在且未被撤销,防止重复补偿导致的库存虚高。

订单状态机逆向转换需处理状态冲突。订单状态从"已支付"回滚至"已取消"时,需验证当前状态是否允许逆向转换:仅当订单未进入发货流程时才允许取消。状态转换规则通过状态机引擎强制执行,无效转换触发异常并告警。补偿事务的幂等性设计至关重要,即使协调器因网络超时重复发送补偿指令,库存服务与订单服务也应识别重复请求并返回已处理状态,确保最终一致性。
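"仅在发货前允许取消"这类转换规则可以用一张转换表强制执行。以下为假设的最小状态机草图,状态名与转换集合均为演示用途:

```python
# 订单状态机:每个状态允许到达的目标状态集合
TRANSITIONS = {
    "CREATED":   {"PAID", "CANCELLED"},
    "PAID":      {"SHIPPED", "CANCELLED"},  # 已支付仍允许补偿取消
    "SHIPPED":   {"DELIVERED"},             # 进入发货流程后禁止取消
    "DELIVERED": set(),
    "CANCELLED": set(),
}

def transition(current: str, target: str) -> str:
    """执行状态转换;非法转换抛出异常,由上层告警"""
    if target not in TRANSITIONS.get(current, set()):
        raise ValueError(f"非法状态转换: {current} -> {target}")
    return target
```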

2.3.3 幂等消费者:基于唯一键去重与Redis幂等缓存

幂等消费者通过唯一标识符(Idempotency Key)机制消除消息重复处理的影响。订单服务处理库存事件时,提取事件头中的event_id或业务键(如order_id+event_type),在Redis中查询该键是否存在。若键已存在且TTL未过期,直接返回成功响应而不执行业务逻辑;若键不存在,执行业务操作后将键写入Redis并设置过期时间(通常24-48小时),覆盖消息重试窗口期。

Redis幂等缓存的存储结构设计需考虑内存效率。采用HyperLogLog或Bloom Filter进行存在性检测可降低内存占用,但存在极小概率的误判;采用精确去重则使用Set或String结构,以业务唯一键为键,处理结果为值。对于库存扣减等关键操作,结合数据库唯一约束(Unique Constraint)实现双重保险:消费端先在Redis去重,业务写入时依赖数据库约束拦截残余重复。过期策略平衡存储成本与防重需求,通常设置为Saga最大超时时间的2-3倍。
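以下草图用本地字典模拟Redis"SET key NX EX"语义来演示去重流程。生产环境必须使用真实Redis以保证跨实例共享;IdempotencyGuard与handle_inventory_event均为本文假设的演示代码:

```python
import time

class IdempotencyGuard:
    """幂等缓存示意:本地dict模拟Redis SET key NX EX的存在性判断"""
    def __init__(self, ttl_seconds: int = 48 * 3600):
        self._seen = {}       # key -> 过期时间戳
        self._ttl = ttl_seconds

    def first_time(self, key: str) -> bool:
        now = time.time()
        expiry = self._seen.get(key)
        if expiry is not None and expiry > now:
            return False      # 键存在且TTL未过期:判定为重复消息
        self._seen[key] = now + self._ttl
        return True

guard = IdempotencyGuard()
processed = []

def handle_inventory_event(event: dict) -> None:
    key = f"{event['order_id']}:{event['event_type']}"   # 业务唯一键
    if not guard.first_time(key):
        return                # 重复投递:直接确认,不再执行业务逻辑
    processed.append(event)   # 实际业务:扣减落库等

handle_inventory_event({"order_id": "o1", "event_type": "StockReserved"})
handle_inventory_event({"order_id": "o1", "event_type": "StockReserved"})  # 重投
```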

2.3.4 死信队列(DLQ):消费失败重试机制与人工介入告警

死信队列(Dead Letter Queue, DLQ)作为消费异常处理的最终防线,隔离持续失败的消息以防止阻塞主消费流。Kafka消费者通过max.poll.records等参数控制单次拉取批量,重试计数通常由应用层维护:消息处理失败且重试次数未达上限时,消费者不提交偏移量,消息在下次拉取时重新处理;重试耗尽后,消息被投递至关联的DLQ主题(如order-events-dlq)。DLQ消息保留原始主题、分区、偏移量及异常堆栈信息,支持问题诊断与重放。

重试机制采用指数退避策略(Exponential Backoff),首次重试间隔1秒,后续按2的幂次递增,最大间隔不超过5分钟,避免瞬时故障(Transient Failure)引发高频无效重试。人工介入告警通过监控DLQ消息堆积量触发,当DLQ在过去15分钟内新增消息超过阈值,或单条消息在DLQ中滞留超过1小时,向运维团队发送高优先级告警。自动化修复流程可定期扫描DLQ,对已知异常模式(如Schema不兼容)的消息自动修复并重投主主题。
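上述退避规则(首次1秒、按2的幂递增、封顶5分钟)可直接写成一个间隔函数,这是按上文参数给出的示意实现:

```python
def retry_delay(attempt: int, base: float = 1.0, cap: float = 300.0) -> float:
    """第attempt次重试前的等待秒数:base * 2^(attempt-1),封顶5分钟"""
    return min(base * (2 ** (attempt - 1)), cap)

# 前10次重试的间隔序列:1, 2, 4, 8, ... 封顶300秒
delays = [retry_delay(n) for n in range(1, 11)]
```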

2.4 数据一致性保障

2.4.1 Outbox模式:事务性发件箱表设计与CDC(Debezium)捕获

Outbox模式通过数据库事务的原子性解决双写问题(Dual Write Problem)。在订单服务中,业务表(orders)与发件箱表(outbox)共享同一数据库事务,订单状态变更与事件记录以原子方式提交。发件箱表结构包含事件ID、事件类型、聚合根ID(如订单ID)、payload(JSON格式)及创建时间戳。这种设计确保业务状态变更与事件发布要么同时成功,要么同时回滚,消除部分提交(Partial Commit)风险。

Debezium作为CDC(Change Data Capture)引擎,持续监控发件箱表的Binlog变更流。当新事件行插入发件箱表时,Debezium捕获该变更并转换为Kafka消息,投递至对应主题后发件箱记录可被清除或标记为已处理。事件路由通过Debezium的Outbox Event Router实现,支持基于事件类型的主题映射与Payload结构转换。该架构将事件发布与业务代码解耦,应用无需关心消息中间件的可用性,仅依赖数据库事务保证一致性。
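"业务表与发件箱表同事务提交"这一核心要点可用sqlite3演示。生产环境为PostgreSQL配合Debezium捕获,此处仅示意事务原子性;表结构为简化版,字段对应上文描述的事件ID、事件类型、聚合根ID与payload:

```python
import json
import sqlite3
import uuid

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE orders (id TEXT PRIMARY KEY, status TEXT NOT NULL);
CREATE TABLE outbox (
    event_id     TEXT PRIMARY KEY,
    event_type   TEXT NOT NULL,
    aggregate_id TEXT NOT NULL,
    payload      TEXT NOT NULL,
    created_at   TIMESTAMP DEFAULT CURRENT_TIMESTAMP
);
""")

def create_order_with_outbox(order_id: str) -> None:
    """业务状态变更与事件记录在同一本地事务中提交,消除双写问题"""
    with conn:   # sqlite3上下文管理器:成功则提交,异常则整体回滚
        conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "CREATED"))
        conn.execute(
            "INSERT INTO outbox (event_id, event_type, aggregate_id, payload) "
            "VALUES (?, ?, ?, ?)",
            (str(uuid.uuid4()), "OrderCreated", order_id,
             json.dumps({"order_id": order_id, "status": "CREATED"})),
        )

create_order_with_outbox("order-1001")
```

写入失败时(例如主键冲突)两张表一起回滚,不会出现"订单已建而事件缺失"的部分提交状态。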

2.4.2 读取模型最终一致:订单查询服务物化视图异步构建

CQRS(Command Query Responsibility Segregation)模式将读写模型分离,命令端处理状态变更,查询端优化读取效率。在订单查询服务中,物化视图(Materialized View)通过异步投影(Projection)机制构建:Kafka Connect或独立投影服务消费订单事件流,将事件转换为适合查询的扁平化结构写入Elasticsearch或MongoDB。这种异步构建允许查询模型滞后于命令模型,通常滞后时间在毫秒至秒级,满足最终一致性要求。

物化视图的设计针对查询场景优化。面向客户的订单列表视图预计算订单总金额、商品摘要、物流状态等字段,避免多表Join;面向运营的聚合视图按时间段统计订单量与GMV,利用Elasticsearch的聚合能力实现近实时分析。视图构建过程需处理事件乱序(Event Disorder)与重复投递,通过事件版本号或时间戳检测乱序事件,幂等写入保证重复事件不造成数据异常。当投影服务故障恢复后,从上次提交的偏移量继续消费,确保视图最终收敛至正确状态。
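基于版本号的乱序检测与幂等写入可以示意如下(以本地字典代替Elasticsearch/MongoDB,project为假设的投影函数,事件数据为演示用途):

```python
view = {}   # 订单查询物化视图:order_id -> 扁平化文档

def project(event: dict) -> None:
    """按事件版本号幂等更新视图:版本不高于当前文档的事件被丢弃"""
    doc = view.get(event["order_id"])
    if doc is not None and event["version"] <= doc["version"]:
        return   # 乱序到达的旧事件或重复投递:直接跳过
    view[event["order_id"]] = {
        "order_id": event["order_id"],
        "status": event["status"],
        "version": event["version"],
    }

for e in [{"order_id": "o1", "status": "CREATED", "version": 1},
          {"order_id": "o1", "status": "SHIPPED", "version": 3},
          {"order_id": "o1", "status": "PAID",    "version": 2},   # 乱序到达
          {"order_id": "o1", "status": "SHIPPED", "version": 3}]:  # 重复投递
    project(e)
```

无论事件以何种顺序、重复多少次投递,视图最终都收敛到最高版本的状态。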

2.4.3 分布式缓存同步:库存扣减缓存失效广播

分布式缓存(如Redis)与数据库的一致性维护通过缓存失效模式(Cache Invalidation)实现。库存扣减操作先更新数据库,成功后发布缓存失效事件至Redis Pub/Sub或Kafka,通知各节点删除对应SKU的缓存条目。后续查询触发缓存穿透,从数据库加载最新值并回填缓存。这种异步失效机制容忍短暂的不一致窗口,确保高并发场景下数据库与缓存的最终一致。

缓存失效广播需处理消息丢失与网络分区。采用延迟双删策略:数据库更新前删除缓存(预删),更新成功后再次删除(确认删),两次删除间隔500毫秒以覆盖并发读窗口。对于库存热点SKU,结合本地缓存(Caffeine等)与分布式缓存构建多级缓存架构,本地缓存TTL短(30秒),分布式缓存TTL较长(5分钟),通过广播机制同步本地缓存失效。库存回滚操作同样触发失效广播,确保缓存层及时感知状态回退。
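延迟双删的时序可用如下草图表达。此处以本地字典模拟数据库与缓存,delay在演示中取极小值(生产中常为数百毫秒);函数名与数据均为假设:

```python
import time

db = {"sku-1": 100}       # 数据库(示意)
cache = {"sku-1": 100}    # 分布式缓存(示意)

def deduct_with_double_delete(sku: str, qty: int, delay: float = 0.5) -> None:
    """延迟双删:预删缓存 -> 更新数据库 -> 延迟后确认删"""
    cache.pop(sku, None)          # 预删:防止更新期间的读请求命中旧值
    db[sku] -= qty                # 更新数据库
    time.sleep(delay)             # 覆盖并发读请求回填旧值的时间窗口
    cache.pop(sku, None)          # 确认删:清除可能被旧值回填的条目

deduct_with_double_delete("sku-1", 3, delay=0.01)
```

后续查询发现缓存缺失后,从数据库加载最新值(97)回填,完成收敛。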

2.4.4 一致性校验:对账服务定时扫描状态不一致订单

对账服务作为数据一致性的离线保障机制,周期性扫描 Saga 超时未完结或状态异常的订单。扫描逻辑对比订单服务、库存服务、支付服务的聚合根状态,识别"已支付但库存未扣减"、"已发货但支付失败"等不一致场景。对账任务采用批处理模式,按时间窗口(如前24小时)分批加载订单,并行查询各服务状态,差异记录写入对账差异表(Reconciliation Diff Table)。

自动修复策略依据不一致类型执行。对于可自动修复的场景(如因网络超时导致的补偿未执行),对账服务触发补偿事务重试;对于需人工介入的场景(如金额差异),生成工单并通知业务团队。对账服务的调度频率与业务特性匹配,金融类 Saga 采用高频扫描(每5分钟),常规电商 Saga 采用小时级扫描。扫描结果生成一致性报表,量化系统的最终一致性达成率(Eventual Consistency Achievement Rate),作为系统健康度指标。
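跨服务状态对比的核心逻辑可示意如下。reconcile为假设的演示函数,三个入参分别代表订单、库存、支付服务的聚合根状态快照:

```python
def reconcile(orders: dict, inventory_ops: set, payments: dict) -> list:
    """对比各服务聚合根状态,输出(订单ID, 不一致类型)列表"""
    diffs = []
    for order_id, status in orders.items():
        if status == "PAID" and order_id not in inventory_ops:
            diffs.append((order_id, "已支付但库存未扣减"))
        if status == "SHIPPED" and payments.get(order_id) != "SUCCESS":
            diffs.append((order_id, "已发货但支付未成功"))
    return diffs

diffs = reconcile(
    orders={"o1": "PAID", "o2": "SHIPPED", "o3": "PAID"},
    inventory_ops={"o1"},                        # o3缺少扣减记录
    payments={"o1": "SUCCESS", "o2": "FAILED"},  # o2支付失败
)
```

扫描结果写入对账差异表后,可自动修复的项触发补偿重试,其余生成工单。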

2.5 可观测性体系

2.5.1 分布式追踪:OpenTelemetry跨服务Trace上下文传递(Jaeger集成)

OpenTelemetry为微服务架构提供厂商中立(vendor-neutral)的遥测数据采集框架,分布式追踪通过Trace上下文(Trace Context)传递实现请求链路可视化。当请求进入网关,生成唯一的Trace ID与初始Span ID,通过HTTP头(traceparent与tracestate)或消息头传递至下游服务。订单服务接收到库存查询请求时,提取上游Span ID作为Parent Span,创建子Span记录业务处理耗时,再将上下文传递至数据库客户端与Kafka生产者。

Jaeger作为追踪后端存储与可视化引擎,收集各服务上报的Span数据,构建端到端的调用拓扑图。Span语义约定(Semantic Conventions)定义标准属性:HTTP方法、状态码、数据库查询语句、消息主题等,确保跨语言、跨框架的一致性。在Saga编排场景中,协调器创建顶层Span代表整个事务流程,各参与服务的本地事务作为子Span,通过 baggage 传递 Saga ID,实现跨服务的业务事务关联。采样策略采用头部采样(Head-based Sampling)或比率采样(Probability Sampling),生产环境通常设置1-10%采样率平衡精度与存储成本。
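W3C traceparent头的解析与注入可以不依赖SDK、仅用标准库演示其传递机制(实际项目应使用OpenTelemetry SDK的propagator;此处仅示意`00-<trace-id>-<span-id>-<flags>`的头格式):

```python
import re
import secrets

# W3C Trace Context: 版本-32位十六进制TraceID-16位十六进制SpanID-标志位
TRACEPARENT_RE = re.compile(r"^00-([0-9a-f]{32})-([0-9a-f]{16})-([0-9a-f]{2})$")

def extract_context(headers: dict):
    """从HTTP头解析traceparent,返回(trace_id, parent_span_id)或None"""
    m = TRACEPARENT_RE.match(headers.get("traceparent", ""))
    return (m.group(1), m.group(2)) if m else None

def inject_context(trace_id: str, flags: str = "01") -> dict:
    """创建子Span ID并生成传递给下游服务的traceparent头"""
    child_span_id = secrets.token_hex(8)
    return {"traceparent": f"00-{trace_id}-{child_span_id}-{flags}"}

incoming = {"traceparent": "00-" + "a" * 32 + "-" + "b" * 16 + "-01"}
trace_id, parent_span = extract_context(incoming)
outgoing = inject_context(trace_id)   # Trace ID保持不变,Span ID更新为子Span
```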

2.5.2 结构化日志:JSON格式统一与ELK Stack聚合分析

结构化日志以机器可解析的格式(通常为JSON)记录应用事件,取代传统的自由文本日志。字段标准化包括:时间戳(ISO 8601格式)、日志级别、服务名称、Trace ID、Span ID、线程标识、消息体及自定义业务字段。JSON格式使Elasticsearch无需正则解析即可索引字段,支持精确查询与聚合分析。日志收集通过Filebeat或Fluentd代理实现,从容器标准输出或日志文件采集,经Logstash解析增强后写入Elasticsearch存储。

ELK(Elasticsearch, Logstash, Kibana)Stack提供日志的全文检索与可视化能力。Kibana仪表盘按服务、日志级别、错误类型维度展示日志分布,结合Trace ID过滤可快速定位特定请求的所有日志条目。日志与追踪的关联通过在日志中注入Trace ID实现,用户从Jaeger界面跳转至Kibana查看对应Span的详细日志上下文。告警规则基于日志内容配置,如1分钟内ERROR级别日志超过阈值即触发PagerDuty通知,实现从被动排查到主动发现的模式转变。
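上述字段标准化可通过自定义logging.Formatter落地。以下为标准库实现的最小草图,service名与字段集合为假设,trace_id经extra参数注入:

```python
import json
import logging
from datetime import datetime, timezone

class JsonFormatter(logging.Formatter):
    """结构化日志格式化器:输出可被Elasticsearch直接索引的单行JSON"""
    def format(self, record: logging.LogRecord) -> str:
        entry = {
            "timestamp": datetime.fromtimestamp(
                record.created, tz=timezone.utc).isoformat(),   # ISO 8601
            "level": record.levelname,
            "service": "order-service",
            "logger": record.name,
            "message": record.getMessage(),
            "trace_id": getattr(record, "trace_id", None),      # 关联Jaeger
        }
        return json.dumps(entry, ensure_ascii=False)

logger = logging.getLogger("order")
handler = logging.StreamHandler()
handler.setFormatter(JsonFormatter())
logger.addHandler(handler)
logger.setLevel(logging.INFO)
logger.info("订单创建成功", extra={"trace_id": "a" * 32})
```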

2.5.3 健康检查端点:Kubernetes Liveness/Readiness探针实现

Kubernetes探针机制(Probe)通过HTTP端点或TCP端口检测容器健康状态,决定流量路由与容器生命周期。Liveness探针检测应用是否存活,连续失败触发容器重启,适用于死锁或内存泄漏导致的应用僵死;Readiness探针检测应用是否准备好接收流量,失败时从Service Endpoints列表移除容器,确保请求不发送至启动中或超载的实例。探针配置包括初始延迟(initialDelaySeconds)、检测间隔(periodSeconds)、超时时间(timeoutSeconds)及失败阈值(failureThreshold)。

健康检查端点的实现需区分 shallow check 与 deep check。Liveness探针通常采用轻量级HTTP 200响应,仅验证进程存活;Readiness探针执行深度检查,验证数据库连接池可用性、Kafka消费者组活跃状态、缓存连通性等依赖项。对于Saga协调器服务,Readiness检查还包括与事务日志存储的连通性验证。探针端点应避免副作用,如数据库检查使用SELECT 1而非业务查询,防止健康检查本身产生负载。gRPC服务通过gRPC Health Checking Protocol实现探针,Kubernetes 1.23+原生支持gRPC探针配置。
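Readiness深度检查的聚合逻辑可示意如下:逐个探测依赖项,任一失败返回503,由Kubernetes据此将实例移出Endpoints。readiness与各探测函数均为假设的演示代码:

```python
def readiness(checks: dict):
    """聚合各依赖探测结果:全部通过返回200,任一失败返回503"""
    results = {}
    for name, probe in checks.items():
        try:
            results[name] = bool(probe())
        except Exception:
            results[name] = False   # 探测抛异常视为依赖不可用
    return (200 if all(results.values()) else 503), results

def _broken_cache():
    raise ConnectionError("redis不可达")   # 模拟缓存依赖故障

status, detail = readiness({
    "database": lambda: True,   # 生产中执行SELECT 1等无副作用探测
    "kafka":    lambda: True,   # 生产中检查消费者组心跳
    "cache":    _broken_cache,
})
```

detail可随503响应一并返回,便于在探针失败时快速定位故障依赖。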

2.5.4 混沌测试:Chaos Monkey随机杀死服务容器验证容错

混沌工程(Chaos Engineering)通过在生产环境注入故障验证系统韧性。Chaos Monkey作为Netflix开源的故障注入工具,随机终止Kubernetes Pod或EC2实例,强制验证Auto Scaling、服务发现、熔断降级机制的有效性。Kube-monkey针对Kubernetes设计,通过Pod注解(Annotation)标识可牺牲目标,配置平均故障间隔(MTBF)与每日执行时间窗口,仅在工作日执行以确保人工介入能力。

混沌实验遵循科学方法论:定义稳态假设(如P99延迟<200ms),注入故障(随机杀死订单服务实例),观测偏离程度,若偏离超出安全阈值则自动终止实验并回滚。实验范围从单个Pod级联至整个可用区(Availability Zone),验证多层级容错。针对Saga架构,特意在协调器执行补偿事务时杀死协调器Pod,验证状态持久化与恢复机制;在库存扣减过程中杀死库存服务,验证消息重投与幂等处理。实验结果录入韧性评分系统,量化系统的Mean Time To Recovery(MTTR)与故障注入后的错误率变化,指导架构优化。
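稳态假设的自动校验可以简化为一个P99阈值判断(分位数采用最近邻法计算;200ms阈值沿用上文假设,函数为演示草图):

```python
def p99(samples: list) -> float:
    """以最近邻法计算P99延迟"""
    ordered = sorted(samples)
    idx = min(len(ordered) - 1, int(len(ordered) * 0.99))
    return ordered[idx]

def steady_state_ok(latencies_ms: list, threshold_ms: float = 200.0) -> bool:
    """稳态假设校验:P99低于阈值则实验继续,否则触发终止与回滚"""
    return p99(latencies_ms) < threshold_ms
```

实验执行器周期性采集延迟样本调用该校验,一旦返回False立即停止故障注入。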



2.1.1 数据库按服务拆分

#!/usr/bin/env python3
"""
【2.1.1】数据库按服务拆分:订单服务PostgreSQL、库存服务独立实例
内容:实现多数据库路由、服务级数据隔离、数据库连接池管理
依赖:sqlalchemy>=2.0, psycopg2-binary, tenacity
"""

from __future__ import annotations
import os
from contextlib import contextmanager
from typing import Dict, Optional, Type, Generator, Any
from abc import ABC, abstractmethod
from dataclasses import dataclass

from sqlalchemy import create_engine, Engine
from sqlalchemy.orm import sessionmaker, Session, declarative_base
from sqlalchemy.pool import QueuePool
from tenacity import retry, stop_after_attempt, wait_exponential


@dataclass(frozen=True)
class DatabaseConfig:
    """数据库配置值对象"""
    host: str
    port: int
    database: str
    user: str
    password: str
    pool_size: int = 10
    max_overflow: int = 20
    pool_timeout: int = 30
    
    @property
    def connection_string(self) -> str:
        return f"postgresql://{self.user}:{self.password}@{self.host}:{self.port}/{self.database}"


class ServiceDatabase(ABC):
    """
    服务级数据库抽象
    每个微服务拥有独立数据库实例,实现物理隔离
    """
    
    def __init__(self, config: DatabaseConfig):
        self._config = config
        self._engine: Optional[Engine] = None
        self._session_factory: Optional[sessionmaker] = None
        self._initialize()
    
    def _initialize(self) -> None:
        """初始化连接池"""
        self._engine = create_engine(
            self._config.connection_string,
            poolclass=QueuePool,
            pool_size=self._config.pool_size,
            max_overflow=self._config.max_overflow,
            pool_timeout=self._config.pool_timeout,
            pool_pre_ping=True,  # 连接健康检查
            echo=False
        )
        self._session_factory = sessionmaker(bind=self._engine)
    
    @property
    def engine(self) -> Engine:
        return self._engine
    
    @contextmanager
    def session(self) -> Generator[Session, None, None]:
        """提供事务性数据库会话上下文"""
        session = self._session_factory()
        try:
            yield session
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()
    
    @retry(
        stop=stop_after_attempt(3),
        wait=wait_exponential(multiplier=1, min=4, max=10),
        retry_error_callback=lambda _state: False,  # 重试耗尽时返回False而非抛出
    )
    def health_check(self) -> bool:
        """健康检查:连接失败时由tenacity按指数退避重试"""
        with self._engine.connect() as conn:
            conn.exec_driver_sql("SELECT 1")  # SQLAlchemy 2.0下直接执行驱动级SQL
        return True
    
    @abstractmethod
    def get_service_name(self) -> str:
        raise NotImplementedError


class OrderServiceDatabase(ServiceDatabase):
    """订单服务数据库"""
    def get_service_name(self) -> str:
        return "order-service"


class InventoryServiceDatabase(ServiceDatabase):
    """库存服务数据库"""
    def get_service_name(self) -> str:
        return "inventory-service"


class PaymentServiceDatabase(ServiceDatabase):
    """支付服务数据库"""
    def get_service_name(self) -> str:
        return "payment-service"


class DatabaseRouter:
    """
    数据库路由器
    根据服务标识路由到对应数据库实例
    """
    
    def __init__(self):
        self._databases: Dict[str, ServiceDatabase] = {}
    
    def register(self, service_name: str, database: ServiceDatabase) -> None:
        """注册服务数据库"""
        self._databases[service_name] = database
    
    def get_database(self, service_name: str) -> ServiceDatabase:
        """获取服务数据库"""
        if service_name not in self._databases:
            raise ValueError(f"未注册服务数据库: {service_name}")
        return self._databases[service_name]
    
    def route_by_entity(self, entity_class: Type) -> ServiceDatabase:
        """
        根据实体类路由数据库
        例如:Order -> order-service数据库
        """
        service_map = {
            "Order": "order-service",
            "OrderLine": "order-service",
            "Inventory": "inventory-service",
            "Payment": "payment-service"
        }
        service_name = service_map.get(entity_class.__name__, "default")
        return self.get_database(service_name)


# 全局路由器实例
db_router = DatabaseRouter()


def configure_databases():
    """配置多数据库"""
    order_db = OrderServiceDatabase(DatabaseConfig(
        host=os.getenv("ORDER_DB_HOST", "localhost"),
        port=int(os.getenv("ORDER_DB_PORT", "5432")),
        database="order_db",
        user="order_user",
        password=os.getenv("ORDER_DB_PASSWORD", "secret")
    ))
    
    inventory_db = InventoryServiceDatabase(DatabaseConfig(
        host=os.getenv("INVENTORY_DB_HOST", "localhost"),
        port=int(os.getenv("INVENTORY_DB_PORT", "5433")),
        database="inventory_db",
        user="inventory_user",
        password=os.getenv("INVENTORY_DB_PASSWORD", "secret")
    ))
    
    db_router.register("order-service", order_db)
    db_router.register("inventory-service", inventory_db)


if __name__ == "__main__":
    configure_databases()
    order_db = db_router.get_database("order-service")
    print(f"订单服务数据库状态: {'健康' if order_db.health_check() else '异常'}")

2.1.2 API组合与BFF层

#!/usr/bin/env python3
"""
【2.1.2】API组合与BFF层:GraphQL Federation网关整合多服务查询
内容:实现Federation网关、服务Schema组合、跨服务查询解析
依赖:ariadne>=0.19, gql>=3.4, httpx>=0.24
"""

from __future__ import annotations
import asyncio
from typing import Dict, List, Optional, Any, Callable
from dataclasses import dataclass
from abc import ABC, abstractmethod

import httpx


@dataclass
class ServiceSchema:
    """微服务GraphQL Schema定义"""
    name: str
    url: str
    type_defs: str
    resolvers: Dict[str, Callable]


class FederatedGateway:
    """
    GraphQL Federation网关
    实现模式组合与跨服务查询路由
    """
    
    def __init__(self):
        self._services: Dict[str, ServiceSchema] = {}
        self._gateway_schema: Optional[str] = None
        self._resolvers = {}
    
    def register_service(self, service: ServiceSchema) -> None:
        """注册微服务Schema"""
        self._services[service.name] = service
        self._rebuild_gateway()
    
    def _rebuild_gateway(self) -> None:
        """重建网关Schema(模式组合)"""
        # 组合所有服务的类型定义
        combined_types = ["type Query"]
        
        for service in self._services.values():
            combined_types.append(service.type_defs)
        
        self._gateway_schema = "\n".join(combined_types)
    
    async def execute(self, query: str, variables: Optional[Dict] = None) -> Dict[str, Any]:
        """
        执行跨服务查询
        解析查询字段并路由到对应微服务
        """
        # 解析顶层查询字段
        query_fields = self._extract_query_fields(query)
        
        # 并行请求各服务
        tasks = []
        for field in query_fields:
            service = self._find_service_for_field(field)
            if service:
                tasks.append(self._query_service(service, field, variables))
        
        results = await asyncio.gather(*tasks, return_exceptions=True)
        
        # 合并结果
        merged_data = {}
        for result in results:
            if isinstance(result, dict) and "data" in result:
                merged_data.update(result["data"])
        
        return {"data": merged_data}
    
    def _extract_query_fields(self, query: str) -> List[str]:
        """提取查询字段(简化解析)"""
        import re
        fields = re.findall(r'(\w+)\s*[{\(]', query)
        return fields
    
    def _find_service_for_field(self, field: str) -> Optional[ServiceSchema]:
        """查找字段归属服务"""
        for service in self._services.values():
            if field in service.resolvers:
                return service
        return None
    
    async def _query_service(self, service: ServiceSchema, field: str, 
                            variables: Optional[Dict]) -> Dict:
        """查询具体微服务"""
        async with httpx.AsyncClient() as client:
            response = await client.post(
                service.url,
                json={"query": f"query {{ {field} {{ id }} }}"}  # 简化查询
            )
            return response.json()


class OrderServiceClient:
    """订单服务GraphQL客户端"""
    
    def __init__(self, base_url: str):
        self._base_url = base_url
        self._schema = ServiceSchema(
            name="order-service",
            url=f"{base_url}/graphql",
            type_defs="""
                type Order {
                    id: ID!
                    customerId: String!
                    totalAmount: Float!
                    status: String!
                }
                extend type Query {
                    order(id: ID!): Order
                    orders(customerId: String!): [Order!]!
                }
            """,
            resolvers={"order": None, "orders": None}
        )
    
    def get_schema(self) -> ServiceSchema:
        return self._schema


class InventoryServiceClient:
    """库存服务GraphQL客户端"""
    
    def __init__(self, base_url: str):
        self._base_url = base_url
        self._schema = ServiceSchema(
            name="inventory-service",
            url=f"{base_url}/graphql",
            type_defs="""
                type Inventory {
                    sku: String!
                    quantity: Int!
                    reserved: Int!
                }
                extend type Query {
                    inventory(sku: String!): Inventory
                }
            """,
            resolvers={"inventory": None}
        )
    
    def get_schema(self) -> ServiceSchema:
        return self._schema


# FastAPI集成示例
from fastapi import FastAPI, Request

app = FastAPI()
gateway = FederatedGateway()

@app.on_event("startup")
async def setup_gateway():
    """启动时注册所有服务"""
    gateway.register_service(OrderServiceClient("http://order-service:8000").get_schema())
    gateway.register_service(InventoryServiceClient("http://inventory-service:8000").get_schema())

@app.post("/graphql")
async def graphql_endpoint(request: Request):
    """GraphQL入口"""
    body = await request.json()
    result = await gateway.execute(body.get("query"), body.get("variables"))
    return result


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=4000)

2.1.3 服务发现

#!/usr/bin/env python3
"""
【2.1.3】服务发现:Consul客户端健康检查与负载均衡
内容:实现Consul服务注册、健康检查、客户端负载均衡
依赖:python-consul>=1.1, aiohttp>=3.8
"""

from __future__ import annotations
import asyncio
import socket
import random
import time
from datetime import datetime
from typing import List, Dict, Optional, Callable
from dataclasses import dataclass
from abc import ABC, abstractmethod

import consul
import aiohttp
from aiohttp import ClientSession


@dataclass
class ServiceInstance:
    """服务实例"""
    id: str
    name: str
    host: str
    port: int
    tags: List[str]
    metadata: Dict[str, str]
    healthy: bool = True


class ServiceRegistry(ABC):
    """服务注册表抽象"""
    
    @abstractmethod
    def register(self, instance: ServiceInstance) -> bool:
        raise NotImplementedError
    
    @abstractmethod
    def deregister(self, instance_id: str) -> bool:
        raise NotImplementedError
    
    @abstractmethod
    def discover(self, service_name: str) -> List[ServiceInstance]:
        raise NotImplementedError


class ConsulRegistry(ServiceRegistry):
    """
    Consul服务注册与发现实现
    """
    
    def __init__(self, host: str = "localhost", port: int = 8500):
        self._client = consul.Consul(host=host, port=port)
        self._service_id: Optional[str] = None
    
    def register(self, instance: ServiceInstance) -> bool:
        """注册服务实例"""
        check = consul.Check.http(
            url=f"http://{instance.host}:{instance.port}/health",
            interval="10s",
            timeout="5s",
            deregister="30s"
        )
        
        success = self._client.agent.service.register(
            name=instance.name,
            service_id=instance.id,
            address=instance.host,
            port=instance.port,
            tags=instance.tags,
            check=check,
            meta=instance.metadata
        )
        
        self._service_id = instance.id
        return success
    
    def deregister(self, instance_id: str) -> bool:
        """注销服务"""
        return self._client.agent.service.deregister(instance_id)
    
    def discover(self, service_name: str) -> List[ServiceInstance]:
        """发现健康服务实例"""
        _, services = self._client.health.service(service_name, passing=True)
        
        instances = []
        for svc in services:
            service = svc["Service"]
            checks = svc["Checks"]
            healthy = all(check["Status"] == "passing" for check in checks)
            
            instances.append(ServiceInstance(
                id=service["ID"],
                name=service["Service"],
                host=service["Address"],
                port=service["Port"],
                tags=service.get("Tags", []),
                metadata=service.get("Meta", {}),
                healthy=healthy
            ))
        
        return instances
    
    def watch_service(self, service_name: str, callback: Callable[[List[ServiceInstance]], None]):
        """监听服务变化(简化版,实际应使用长轮询或阻塞查询)"""
        # Consul支持阻塞查询实现实时通知
        pass


class LoadBalancer:
    """
    客户端负载均衡器
    实现轮询、随机、加权等策略
    """
    
    def __init__(self, strategy: str = "round_robin"):
        self._strategy = strategy
        self._current_index = 0
        self._instances: List[ServiceInstance] = []
    
    def update_instances(self, instances: List[ServiceInstance]) -> None:
        """更新实例列表"""
        self._instances = [i for i in instances if i.healthy]
    
    def select(self) -> Optional[ServiceInstance]:
        """选择实例"""
        if not self._instances:
            return None
        
        if self._strategy == "round_robin":
            instance = self._instances[self._current_index]
            self._current_index = (self._current_index + 1) % len(self._instances)
            return instance
        
        elif self._strategy == "random":
            return random.choice(self._instances)
        
        return self._instances[0]


class ServiceClient:
    """
    服务客户端
    集成服务发现与负载均衡
    """
    
    def __init__(self, service_name: str, registry: ConsulRegistry):
        self._service_name = service_name
        self._registry = registry
        self._load_balancer = LoadBalancer(strategy="round_robin")
        self._session: Optional[ClientSession] = None
    
    async def __aenter__(self):
        self._session = ClientSession()
        await self._refresh_instances()
        return self
    
    async def __aexit__(self, exc_type, exc_val, exc_tb):
        if self._session:
            await self._session.close()
    
    async def _refresh_instances(self) -> None:
        """刷新服务实例列表"""
        instances = self._registry.discover(self._service_name)
        self._load_balancer.update_instances(instances)
    
    async def request(self, method: str, path: str, **kwargs) -> Dict:
        """
        发送HTTP请求
        自动进行服务发现和负载均衡
        """
        instance = self._load_balancer.select()
        if not instance:
            raise Exception(f"无可用服务实例: {self._service_name}")
        
        url = f"http://{instance.host}:{instance.port}{path}"
        
        async with self._session.request(method, url, **kwargs) as response:
            return await response.json()


# 健康检查端点(FastAPI)
from fastapi import FastAPI, status

app = FastAPI()

@app.get("/health")
def health_check():
    """Consul健康检查端点"""
    return {
        "status": "healthy",
        "service": "order-service",
        "timestamp": datetime.now().isoformat()
    }


if __name__ == "__main__":
    # 注册服务示例
    registry = ConsulRegistry()
    
    instance = ServiceInstance(
        id=f"order-service-{socket.gethostname()}",
        name="order-service",
        host=socket.gethostname(),
        port=8000,
        tags=["v1", "python"],
        metadata={"version": "1.0.0"}
    )
    
    if registry.register(instance):
        print(f"服务注册成功: {instance.id}")
    
    try:
        # 保持运行
        while True:
            time.sleep(10)
    except KeyboardInterrupt:
        registry.deregister(instance.id)
        print("服务已注销")

2.1.4 配置中心

#!/usr/bin/env python3
"""
【2.1.4】配置中心:Etcd配置热更新与Python-watch集成
内容:实现分布式配置管理、热更新监听、配置版本控制
依赖:etcd3>=0.12, pydantic>=2.0, watchdog>=3.0
"""

from __future__ import annotations
import json
import os
from typing import Dict, Any, List, Optional, Callable, Type, TypeVar, Generic
from dataclasses import dataclass, field
from abc import ABC, abstractmethod
from threading import Lock

import etcd3
from pydantic import BaseModel, Field


T = TypeVar('T', bound=BaseModel)


class Configuration(BaseModel):
    """配置基类"""
    version: str = "1.0.0"
    environment: str = "development"


class DatabaseConfiguration(Configuration):
    """数据库配置"""
    host: str = Field(default="localhost")
    port: int = Field(default=5432)
    pool_size: int = Field(default=10)
    timeout: int = Field(default=30)


class KafkaConfiguration(Configuration):
    """Kafka配置"""
    bootstrap_servers: str = Field(default="localhost:9092")
    topic_prefix: str = Field(default="order-system")
    consumer_group: str = Field(default="default-group")


class ConfigurationStore(ABC):
    """配置存储抽象"""
    
    @abstractmethod
    def get(self, key: str) -> Optional[str]:
        raise NotImplementedError
    
    @abstractmethod
    def put(self, key: str, value: str) -> bool:
        raise NotImplementedError
    
    @abstractmethod
    def watch(self, key: str, callback: Callable[[str, str], None]) -> None:
        raise NotImplementedError


class EtcdConfigurationStore(ConfigurationStore):
    """
    Etcd配置存储实现
    支持分布式配置与Watch机制
    """
    
    def __init__(self, host: str = "localhost", port: int = 2379):
        self._client = etcd3.client(host=host, port=port)
        self._watch_ids: Dict[str, Any] = {}
    
    def get(self, key: str) -> Optional[str]:
        """获取配置"""
        value, _ = self._client.get(key)
        return value.decode('utf-8') if value else None
    
    def put(self, key: str, value: str) -> bool:
        """存储配置"""
        try:
            self._client.put(key, value)
            return True
        except Exception:
            return False
    
    def watch(self, key: str, callback: Callable[[str, str], None]) -> None:
        """监听配置变化"""
        events_iterator, cancel = self._client.watch(key)
        self._watch_ids[key] = cancel
        
        def watch_thread():
            for event in events_iterator:
                if isinstance(event, etcd3.events.PutEvent):
                    callback(key, event.value.decode('utf-8'))
                elif isinstance(event, etcd3.events.DeleteEvent):
                    callback(key, None)
        
        import threading
        threading.Thread(target=watch_thread, daemon=True).start()
    
    def cancel_watch(self, key: str) -> None:
        """取消监听"""
        if key in self._watch_ids:
            self._watch_ids[key]()
            del self._watch_ids[key]


class ConfigurationManager(Generic[T]):
    """
    配置管理器
    实现配置热更新与本地缓存
    """
    
    def __init__(self, store: ConfigurationStore, config_class: Type[T], 
                 key_prefix: str = "/config"):
        self._store = store
        self._config_class = config_class
        self._key_prefix = key_prefix
        self._cache: Dict[str, T] = {}
        self._lock = Lock()
        self._callbacks: List[Callable[[T], None]] = []
    
    def load(self, service_name: str) -> T:
        """加载配置"""
        key = f"{self._key_prefix}/{service_name}"
        value = self._store.get(key)
        
        if value:
            config_data = json.loads(value)
            config = self._config_class(**config_data)
        else:
            # 使用默认配置
            config = self._config_class()
            self.save(service_name, config)
        
        with self._lock:
            self._cache[service_name] = config
        
        # 设置监听
        self._store.watch(key, self._on_config_change)
        
        return config
    
    def save(self, service_name: str, config: T) -> bool:
        """保存配置"""
        key = f"{self._key_prefix}/{service_name}"
        value = json.dumps(config.model_dump())
        return self._store.put(key, value)
    
    def _on_config_change(self, key: str, value: Optional[str]) -> None:
        """配置变更回调"""
        if not value:
            return
        
        service_name = key.split("/")[-1]
        config_data = json.loads(value)
        new_config = self._config_class(**config_data)
        
        with self._lock:
            self._cache[service_name] = new_config
        
        # 触发回调
        for callback in self._callbacks:
            callback(new_config)
    
    def get_cached(self, service_name: str) -> Optional[T]:
        """获取缓存配置"""
        with self._lock:
            return self._cache.get(service_name)
    
    def on_change(self, callback: Callable[[T], None]) -> None:
        """注册变更监听"""
        self._callbacks.append(callback)


# 使用示例
if __name__ == "__main__":
    etcd_store = EtcdConfigurationStore()
    
    # 数据库配置管理
    db_config_manager = ConfigurationManager(
        store=etcd_store,
        config_class=DatabaseConfiguration,
        key_prefix="/config/database"
    )
    
    # 加载订单服务数据库配置
    config = db_config_manager.load("order-service")
    print(f"当前数据库配置: {config.host}:{config.port}")
    
    # 注册变更监听
    def on_config_update(new_config: DatabaseConfiguration):
        print(f"配置已更新: {new_config}")
    
    db_config_manager.on_change(on_config_update)
    
    # 模拟配置更新(实际应由配置中心触发)
    new_config = DatabaseConfiguration(host="new-db-host", port=5433)
    db_config_manager.save("order-service", new_config)
    
    # 查看缓存
    import time
    time.sleep(1)
    cached = db_config_manager.get_cached("order-service")
    print(f"缓存配置: {cached}")
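
上面的示例依赖一个运行中的 etcd。下面给出一个纯内存版配置存储的草图(类名 InMemoryConfigurationStore 为示意,非上文实现),put 时同步触发 watch 回调,便于在没有 etcd 的环境里验证热更新链路:

```python
from typing import Callable, Dict, List, Optional


class InMemoryConfigurationStore:
    """内存配置存储:put 时同步触发 watch 回调,模拟配置中心的变更推送"""

    def __init__(self):
        self._data: Dict[str, str] = {}
        self._watchers: Dict[str, List[Callable[[str, Optional[str]], None]]] = {}

    def get(self, key: str) -> Optional[str]:
        return self._data.get(key)

    def put(self, key: str, value: str) -> bool:
        self._data[key] = value
        # 同步通知所有监听者(真实配置中心是异步推送)
        for cb in self._watchers.get(key, []):
            cb(key, value)
        return True

    def watch(self, key: str, callback: Callable[[str, Optional[str]], None]) -> None:
        self._watchers.setdefault(key, []).append(callback)


if __name__ == "__main__":
    store = InMemoryConfigurationStore()
    received = []
    store.watch("/config/database/order-service", lambda k, v: received.append(v))
    store.put("/config/database/order-service", '{"host": "db-1", "port": 5432}')
    print(received)  # ['{"host": "db-1", "port": 5432}']
```

接口形状与上文的 ConfigurationStore 抽象一致,可按鸭子类型注入 ConfigurationManager 做单元测试。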

2.2.1 Kafka主题设计

#!/usr/bin/env python3
"""
【2.2.1】Kafka主题设计:order-events、inventory-events分区策略与Key选择
内容:实现主题创建、分区策略设计、消息键路由
依赖:kafka-python>=2.0, confluent-kafka>=2.0
"""

from __future__ import annotations
import json
import hashlib
from datetime import datetime
from typing import Dict, List, Optional, Callable, Any
from dataclasses import dataclass
from enum import Enum

from kafka import KafkaAdminClient
from kafka.admin import NewTopic


class TopicConfig:
    """Kafka主题配置"""
    
    ORDER_EVENTS = "order-events"
    INVENTORY_EVENTS = "inventory-events"
    PAYMENT_EVENTS = "payment-events"
    DLQ_EVENTS = "order-events-dlq"
    
    # 分区数与副本因子
    DEFAULT_PARTITIONS = 12
    DEFAULT_REPLICATION = 3


class PartitionStrategy(Enum):
    """分区策略"""
    DEFAULT = "default"
    HASH_KEY = "hash_key"  # 基于Key哈希
    ROUND_ROBIN = "round_robin"
    CUSTOM = "custom"


class MessageKeyStrategy:
    """
    消息键策略
    确保相关消息进入同一分区,保证顺序性
    """
    
    @staticmethod
    def by_order_id(order_id: str) -> str:
        """按订单ID分区 - 保证同一订单的消息有序"""
        return order_id
    
    @staticmethod
    def by_customer_id(customer_id: str) -> str:
        """按客户ID分区 - 保证同一客户的消息有序"""
        return customer_id
    
    @staticmethod
    def by_sku(sku: str) -> str:
        """按SKU分区 - 保证同一商品库存事件有序"""
        return sku
    
    @staticmethod
    def partition_key(event_type: str, entity_id: str) -> str:
        """组合分区键"""
        return f"{event_type}:{entity_id}"


class CustomPartitioner:
    """
    自定义分区器
    实现基于业务键的一致性哈希
    """
    
    def __init__(self, num_partitions: int = 12):
        self._num_partitions = num_partitions
    
    def partition(self, topic: str, key: Optional[bytes], 
                  all_partitions: List[int], available: List[int]) -> int:
        """
        计算分区
        使用一致性哈希确保相同Key进入同一分区
        """
        if key is None:
            # 无Key时简化为取第一个可用分区(严格的轮询需要额外维护计数器)
            return available[0] if available else 0
        
        # 计算Key哈希
        key_hash = int(hashlib.md5(key).hexdigest(), 16)
        partition = key_hash % len(all_partitions)
        
        # 确保分区可用
        if partition not in available:
            partition = available[partition % len(available)]
        
        return partition
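
CustomPartitioner 的关键性质是确定性:相同 Key 总是落到同一分区,从而保证同一聚合(订单、SKU)的事件有序。下面的小例子验证这一点(分区数与 Key 均为示意;另外注意 Java 客户端默认使用 murmur2 哈希,跨语言混用时同一 Key 可能算出不同分区):

```python
import hashlib
from typing import List


def md5_partition(key: bytes, all_partitions: List[int]) -> int:
    """与上文 CustomPartitioner 相同的 MD5 哈希取模逻辑"""
    key_hash = int(hashlib.md5(key).hexdigest(), 16)
    return key_hash % len(all_partitions)


partitions = list(range(12))
p1 = md5_partition(b"ORD-2024-001", partitions)
p2 = md5_partition(b"ORD-2024-001", partitions)
print(p1 == p2)  # True:同一订单的事件总是进入同一分区
```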


class TopicManager:
    """
    Kafka主题管理器
    负责主题创建、配置管理、分区重分配
    """
    
    def __init__(self, bootstrap_servers: str):
        self._admin = KafkaAdminClient(bootstrap_servers=bootstrap_servers)
    
    def create_topics(self) -> None:
        """创建所有业务主题"""
        topics = [
            NewTopic(
                name=TopicConfig.ORDER_EVENTS,
                num_partitions=TopicConfig.DEFAULT_PARTITIONS,
                replication_factor=TopicConfig.DEFAULT_REPLICATION,
                topic_configs={
                    "cleanup.policy": "delete",
                    "retention.ms": "604800000",  # 7天
                    "min.insync.replicas": "2"
                }
            ),
            NewTopic(
                name=TopicConfig.INVENTORY_EVENTS,
                num_partitions=6,  # 库存事件较少,6个分区足够
                replication_factor=TopicConfig.DEFAULT_REPLICATION,
                topic_configs={
                    "cleanup.policy": "compact",  # 日志压缩,保留最新状态
                    "retention.ms": "86400000",   # 1天
                    "min.insync.replicas": "2"
                }
            ),
            NewTopic(
                name=TopicConfig.PAYMENT_EVENTS,
                num_partitions=TopicConfig.DEFAULT_PARTITIONS,
                replication_factor=TopicConfig.DEFAULT_REPLICATION
            ),
            NewTopic(
                name=TopicConfig.DLQ_EVENTS,
                num_partitions=3,
                replication_factor=TopicConfig.DEFAULT_REPLICATION,
                topic_configs={
                    "retention.ms": "2592000000"  # 30天,长期保留死信
                }
            )
        ]
        
        try:
            self._admin.create_topics(topics, validate_only=False)
            print("主题创建成功")
        except Exception as e:
            print(f"主题创建失败(可能已存在): {e}")
    
    def describe_topic(self, topic: str) -> List[Dict]:
        """查看主题详情(kafka-python 返回每个主题一个字典组成的列表)"""
        return self._admin.describe_topics([topic])
    
    def increase_partitions(self, topic: str, new_partition_count: int) -> None:
        """增加分区数(只能增加,不能减少)"""
        from kafka.admin import NewPartitions  # kafka-python 要求传 NewPartitions 对象
        self._admin.create_partitions(
            {topic: NewPartitions(total_count=new_partition_count)}
        )


class EventEnvelope:
    """
    事件信封
    标准化事件格式,包含分区路由信息
    """
    
    def __init__(self, event_type: str, aggregate_id: str, 
                 payload: Dict[str, Any], timestamp: Optional[str] = None):
        self.event_type = event_type
        self.aggregate_id = aggregate_id
        self.payload = payload
        self.timestamp = timestamp or datetime.now().isoformat()
        self.partition_key = MessageKeyStrategy.by_order_id(aggregate_id)
    
    def to_json(self) -> str:
        return json.dumps({
            "event_type": self.event_type,
            "aggregate_id": self.aggregate_id,
            "payload": self.payload,
            "timestamp": self.timestamp,
            "partition_key": self.partition_key
        })
    
    @classmethod
    def from_json(cls, data: str) -> "EventEnvelope":
        obj = json.loads(data)
        return cls(
            event_type=obj["event_type"],
            aggregate_id=obj["aggregate_id"],
            payload=obj["payload"],
            timestamp=obj["timestamp"]
        )


# 使用示例
if __name__ == "__main__":
    from datetime import datetime
    
    # 创建主题
    manager = TopicManager("localhost:9092")
    manager.create_topics()
    
    # 创建事件
    envelope = EventEnvelope(
        event_type="OrderCreated",
        aggregate_id="ORD-2024-001",
        payload={"customer_id": "CUST-001", "total": 199.99}
    )
    
    print(f"事件信封: {envelope.to_json()}")
    print(f"分区键: {envelope.partition_key}")

2.2.2 幂等生产者

#!/usr/bin/env python3
"""
【2.2.2】生产者端:幂等生产者配置与事务消息发送(EOS语义)
内容:实现Kafka幂等生产者、事务消息、恰好一次语义
依赖:confluent-kafka>=2.0
"""

from __future__ import annotations
import json
from datetime import datetime
from typing import Dict, List, Optional, Callable
from dataclasses import dataclass
from contextlib import contextmanager

from confluent_kafka import Producer, KafkaError


@dataclass
class ProducerConfig:
    """生产者配置"""
    bootstrap_servers: str
    client_id: str
    transactional_id: Optional[str] = None
    
    def to_dict(self) -> Dict:
        config = {
            'bootstrap.servers': self.bootstrap_servers,
            'client.id': self.client_id,
            'acks': 'all',  # 等待所有ISR确认
            'retries': 10,  # 失败重试
            'retry.backoff.ms': 1000,
            'enable.idempotence': True,  # 启用幂等性
            'max.in.flight.requests.per.connection': 5,  # 允许5个未确认请求
            'compression.type': 'lz4',  # 压缩
            'batch.size': 16384,
            'linger.ms': 5,
            'delivery.timeout.ms': 120000,
        }
        
        if self.transactional_id:
            config['transactional.id'] = self.transactional_id
        
        return config


class IdempotentProducer:
    """
    幂等生产者
    确保消息恰好一次写入(Exactly Once Semantics at Producer)
    """
    
    def __init__(self, config: ProducerConfig):
        self._config = config
        self._producer = Producer(config.to_dict())
        self._transactional = config.transactional_id is not None
        
        if self._transactional:
            self._producer.init_transactions()
    
    def produce(self, topic: str, key: Optional[str], value: str, 
                headers: Optional[Dict] = None,
                on_delivery: Optional[Callable] = None) -> None:
        """
        发送消息(异步)
        幂等性保证:相同消息多次发送仅写入一次
        """
        try:
            self._producer.produce(
                topic=topic,
                key=key.encode('utf-8') if key else None,
                value=value.encode('utf-8'),
                headers=headers,
                callback=on_delivery
            )
        except BufferError:
            # 本地缓冲队列满时 confluent-kafka 抛出 BufferError
            # poll 触发已送达消息的回调以释放队列空间,然后重试
            self._producer.poll(1)
            self.produce(topic, key, value, headers, on_delivery)
    
    def flush(self, timeout: float = 30.0) -> int:
        """刷新缓冲区,等待所有消息确认"""
        return self._producer.flush(timeout)
    
    def close(self) -> None:
        """关闭生产者:confluent-kafka 的 Producer 没有 close() 方法,flush 排空缓冲即可"""
        self._producer.flush()


class TransactionalProducer(IdempotentProducer):
    """
    事务生产者
    实现跨分区事务(EOS - Exactly Once Semantics)
    支持生产-消费事务(Consume-Transform-Produce)
    """
    
    def __init__(self, config: ProducerConfig):
        if not config.transactional_id:
            raise ValueError("事务生产者必须设置transactional_id")
        super().__init__(config)
    
    @contextmanager
    def transaction(self):
        """
        事务上下文管理器
        确保原子性:要么全部提交,要么全部回滚
        """
        try:
            self._producer.begin_transaction()
            yield self
            self._producer.commit_transaction()
        except Exception:
            self._producer.abort_transaction()
            raise  # 保留原始异常类型,便于调用方区分可重试/致命错误
    
    def send_offsets_to_transaction(self, consumer, partitions) -> None:
        """
        发送消费偏移量到事务
        实现Consume-Transform-Produce原子性
        """
        self._producer.send_offsets_to_transaction(
            partitions,
            consumer.consumer_group_metadata()
        )
    
    def produce_in_transaction(self, topic: str, key: Optional[str], 
                               value: str) -> None:
        """在事务内发送消息(必须在 transaction() 上下文中调用)"""
        self._producer.produce(
            topic=topic,
            key=key.encode('utf-8') if key else None,
            value=value.encode('utf-8')
        )


class SagaOrchestratorProducer(TransactionalProducer):
    """
    Saga编排器专用生产者
    确保Saga命令消息可靠传递
    """
    
    def send_saga_command(self, saga_id: str, step: int, 
                          command_type: str, payload: Dict) -> None:
        """
        发送Saga命令(事务内)
        保证命令不丢失、不重复
        """
        message = {
            "saga_id": saga_id,
            "step": step,
            "command_type": command_type,
            "payload": payload,
            "timestamp": datetime.now().isoformat()
        }
        
        # 使用Saga ID作为Key,确保同一Saga的消息有序
        self.produce_in_transaction(
            topic="saga-commands",
            key=saga_id,
            value=json.dumps(message)
        )


# 使用示例
if __name__ == "__main__":
    from datetime import datetime
    
    # 普通幂等生产者(非事务)
    config = ProducerConfig(
        bootstrap_servers="localhost:9092",
        client_id="order-service-producer"
    )
    
    producer = IdempotentProducer(config)
    
    # 发送订单事件
    for i in range(10):
        event = {
            "event_type": "OrderCreated",
            "order_id": f"ORD-{i:04d}",
            "timestamp": datetime.now().isoformat()
        }
        
        def delivery_report(err, msg):
            if err:
                print(f"消息发送失败: {err}")
            else:
                print(f"消息已发送: {msg.topic()}[{msg.partition()}] @ {msg.offset()}")
        
        producer.produce(
            topic="order-events",
            key=f"ORD-{i:04d}",
            value=json.dumps(event),
            on_delivery=delivery_report
        )
    
    producer.flush()
    
    # 事务生产者示例
    tx_config = ProducerConfig(
        bootstrap_servers="localhost:9092",
        client_id="saga-orchestrator",
        transactional_id="saga-orchestrator-1"
    )
    
    tx_producer = TransactionalProducer(tx_config)
    
    with tx_producer.transaction():
        # 发送多个消息,原子性保证
        tx_producer.produce_in_transaction(
            topic="inventory-commands",
            key="SKU-001",
            value=json.dumps({"action": "reserve", "qty": 10})
        )
        tx_producer.produce_in_transaction(
            topic="payment-commands", 
            key="PAY-001",
            value=json.dumps({"action": "charge", "amount": 199.99})
        )
    
    print("事务提交成功")

2.2.3 消费者组管理

#!/usr/bin/env python3
"""
【2.2.3】消费者组管理:订单服务消费库存事件、手动提交偏移量策略
内容:实现Kafka消费者组、手动偏移提交、再均衡监听
依赖:confluent-kafka>=2.0
"""

from __future__ import annotations
import json
import signal
import sys
from typing import Dict, List, Callable, Optional, Any
from dataclasses import dataclass
from abc import ABC, abstractmethod
from threading import Lock

from confluent_kafka import Consumer, KafkaError, TopicPartition


@dataclass
class ConsumerConfig:
    """消费者配置"""
    bootstrap_servers: str
    group_id: str
    topics: List[str]
    auto_commit: bool = False  # 手动提交
    max_poll_records: int = 500
    session_timeout_ms: int = 30000
    heartbeat_interval_ms: int = 10000
    
    def to_dict(self) -> Dict:
        return {
            'bootstrap.servers': self.bootstrap_servers,
            'group.id': self.group_id,
            'enable.auto.commit': self.auto_commit,
            'auto.offset.reset': 'earliest',
            'max.poll.interval.ms': 300000,
            'session.timeout.ms': self.session_timeout_ms,
            'heartbeat.interval.ms': self.heartbeat_interval_ms,
            'max.partition.fetch.bytes': 1048576 * 10,  # 10MB
        }


class MessageHandler(ABC):
    """消息处理器接口"""
    
    @abstractmethod
    def handle(self, message: Dict[str, Any]) -> bool:
        """
        处理消息
        返回True表示处理成功,False表示失败(应重试)
        """
        raise NotImplementedError


class InventoryEventHandler(MessageHandler):
    """库存事件处理器"""
    
    def handle(self, message: Dict[str, Any]) -> bool:
        event_type = message.get('event_type')
        payload = message.get('payload')
        
        if event_type == "InventoryReserved":
            print(f"处理库存预留事件: {payload}")
            # 业务逻辑:更新订单状态为"库存已预留"
            return True
        elif event_type == "InsufficientInventory":
            print(f"处理库存不足事件: {payload}")
            # 业务逻辑:触发补偿(取消订单)
            return True
        
        return True


class ManualCommitConsumer:
    """
    手动提交消费者
    确保消息处理完成后再提交偏移量(At-Least-Once + 幂等性)
    """
    
    def __init__(self, config: ConsumerConfig, 
                 message_handler: MessageHandler):
        self._config = config
        self._handler = message_handler
        self._consumer = Consumer(config.to_dict())
        self._running = False
        self._lock = Lock()
        self._offsets_to_commit: Dict[tuple, int] = {}  # (topic, partition) -> offset
    
    def subscribe(self) -> None:
        """订阅主题并设置再均衡监听"""
        def on_assign(consumer, partitions):
            print(f"分区已分配: {partitions}")
        
        def on_revoke(consumer, partitions):
            # 分区被回收前先提交已处理的偏移量,避免再均衡后重复消费
            print(f"分区即将回收: {partitions}")
            self._commit_pending_offsets()
        
        self._consumer.subscribe(self._config.topics,
                                 on_assign=on_assign, on_revoke=on_revoke)
    
    def start(self) -> None:
        """启动消费循环"""
        self._running = True
        self.subscribe()
        
        # 信号处理
        signal.signal(signal.SIGINT, self._signal_handler)
        signal.signal(signal.SIGTERM, self._signal_handler)
        
        print(f"消费者启动: group={self._config.group_id}, topics={self._config.topics}")
        
        try:
            while self._running:
                # 轮询消息
                msg = self._consumer.poll(timeout=1.0)
                
                if msg is None:
                    continue
                
                if msg.error():
                    self._handle_error(msg.error())
                    continue
                
                # 处理消息
                success = self._process_message(msg)
                
                if success:
                    # 记录待提交偏移量(提交值 = 已处理消息的偏移 + 1)
                    with self._lock:
                        key = (msg.topic(), msg.partition())
                        self._offsets_to_commit[key] = msg.offset() + 1
                    
                    # 每处理100条消息批量提交一次(也可结合定时器按时间提交)
                    self._processed_count = getattr(self, "_processed_count", 0) + 1
                    if self._processed_count >= 100:
                        self._processed_count = 0
                        self._commit_pending_offsets()
                else:
                    # 处理失败,不提交偏移量;重启或再均衡后会从上次提交处重新消费
                    print(f"消息处理失败,将重试: {msg.offset()}")
                    
        finally:
            self._commit_pending_offsets()
            self._consumer.close()
    
    def _process_message(self, msg) -> bool:
        """
        处理单条消息
        包含异常捕获和死信队列逻辑
        """
        try:
            value = json.loads(msg.value().decode('utf-8'))
            print(f"收到消息: {msg.topic()}[{msg.partition()}] @ {msg.offset()}")
            
            return self._handler.handle(value)
            
        except json.JSONDecodeError as e:
            print(f"消息格式错误: {e}")
            return True  # 格式错误无法靠重试修复,跳过;生产环境应转发到死信队列
        except Exception as e:
            print(f"处理异常: {e}")
            return False
    
    def _commit_pending_offsets(self) -> None:
        """提交待处理的偏移量"""
        with self._lock:
            if not self._offsets_to_commit:
                return
            
            partitions = [
                TopicPartition(topic, partition, offset)
                for (topic, partition), offset in self._offsets_to_commit.items()
            ]
            
            self._consumer.commit(offsets=partitions, asynchronous=False)
            print(f"偏移量已提交: {len(partitions)} 个分区")
            self._offsets_to_commit.clear()
    
    def _handle_error(self, error: KafkaError) -> None:
        """处理消费错误"""
        if error.code() == KafkaError._PARTITION_EOF:
            print(f"分区末尾: {error}")
        elif error.code() == KafkaError._ALL_BROKERS_DOWN:
            print("所有Broker不可用")
            self._running = False
        else:
            print(f"消费错误: {error}")
    
    def _signal_handler(self, signum, frame):
        """信号处理"""
        print(f"收到信号 {signum}, 正在关闭...")
        self._running = False
    
    def pause(self, partitions: List[TopicPartition]) -> None:
        """暂停消费指定分区(背压控制)"""
        self._consumer.pause(partitions)
    
    def resume(self, partitions: List[TopicPartition]) -> None:
        """恢复消费"""
        self._consumer.resume(partitions)
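
手动提交的核心语义:只有偏移量被提交,消息才算消费完成;处理成功但尚未提交就崩溃,重启后会从上次提交处重新消费,造成重复投递。下面用纯 Python 模拟这一点(不连接真实 Kafka,分区日志用列表示意):

```python
from typing import List, Tuple


def consume_batch(log: List[str], committed: int, process_n: int,
                  commit_n: int) -> Tuple[List[str], int]:
    """从已提交偏移处理 process_n 条消息,但只来得及提交前 commit_n 条(模拟提交前崩溃)"""
    processed = log[committed:committed + process_n]
    new_committed = committed + min(commit_n, len(processed))
    return processed, new_committed


log = ["m0", "m1", "m2", "m3"]

# 第一次运行:处理了3条,但提交偏移前崩溃,只提交了前2条
run1, committed = consume_batch(log, committed=0, process_n=3, commit_n=2)

# 重启后从已提交偏移继续:m2 被重复处理 → 所以消费者必须幂等
run2, committed = consume_batch(log, committed=committed, process_n=2, commit_n=2)

print(run1)  # ['m0', 'm1', 'm2']
print(run2)  # ['m2', 'm3']
```

这正是 At-Least-Once 语义:宁可重复,不可丢失;去重交给幂等消费者(见 2.3.3)。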


class MultiTopicConsumer:
    """
    多主题消费者
    支持不同主题使用不同处理器
    """
    
    def __init__(self, config: ConsumerConfig):
        self._config = config
        self._handlers: Dict[str, MessageHandler] = {}
        self._consumer = Consumer(config.to_dict())
    
    def register_handler(self, topic: str, handler: MessageHandler) -> None:
        """为主题注册处理器"""
        self._handlers[topic] = handler
    
    def start(self) -> None:
        """启动多主题消费"""
        self._consumer.subscribe(self._config.topics)
        
        while True:
            msg = self._consumer.poll(timeout=1.0)
            if msg is None or msg.error():
                continue
            
            topic = msg.topic()
            handler = self._handlers.get(topic)
            
            if handler:
                value = json.loads(msg.value().decode('utf-8'))
                success = handler.handle(value)
                
                if success:
                    # 异步提交提升性能
                    self._consumer.commit(asynchronous=True)


# 使用示例
if __name__ == "__main__":
    config = ConsumerConfig(
        bootstrap_servers="localhost:9092",
        group_id="order-service-inventory-consumers",
        topics=["inventory-events"],
        auto_commit=False
    )
    
    handler = InventoryEventHandler()
    consumer = ManualCommitConsumer(config, handler)
    consumer.start()

2.2.4 Schema Registry

#!/usr/bin/env python3
"""
【2.2.4】Schema Registry:Avro格式定义与向前兼容演化规则
内容:实现Avro模式管理、兼容性检查、序列化/反序列化
依赖:confluent-kafka[avro]>=2.0, fastavro>=1.8
"""

from __future__ import annotations
import json
import io
from typing import Dict, Any, Optional, List
from dataclasses import dataclass
from enum import Enum

from fastavro import parse_schema, schemaless_writer, schemaless_reader


class CompatibilityMode(Enum):
    """兼容性模式"""
    NONE = "NONE"
    BACKWARD = "BACKWARD"  # 向后兼容:新代码可读旧数据
    FORWARD = "FORWARD"    # 向前兼容:旧代码可读新数据
    FULL = "FULL"          # 完全兼容


@dataclass
class SchemaVersion:
    """模式版本"""
    subject: str
    version: int
    schema_id: int
    schema: Dict[str, Any]
    compatibility: CompatibilityMode


class LocalSchemaRegistry:
    """
    本地模式注册表(简化实现)
    生产环境应使用Confluent Schema Registry
    """
    
    def __init__(self):
        self._schemas: Dict[str, List[SchemaVersion]] = {}
        self._schema_ids: Dict[int, Dict[str, Any]] = {}
        self._next_id = 1
    
    def register(self, subject: str, schema: Dict[str, Any], 
                 compatibility: CompatibilityMode = CompatibilityMode.BACKWARD) -> int:
        """
        注册新模式
        返回schema_id
        """
        # 检查兼容性
        if subject in self._schemas:
            versions = self._schemas[subject]
            last_version = versions[-1]
            
            if not self._check_compatibility(last_version.schema, schema, compatibility):
                raise ValueError(f"模式不兼容: {subject}")
        
        schema_id = self._next_id
        self._next_id += 1
        
        version = len(self._schemas.get(subject, [])) + 1
        
        schema_version = SchemaVersion(
            subject=subject,
            version=version,
            schema_id=schema_id,
            schema=schema,
            compatibility=compatibility
        )
        
        if subject not in self._schemas:
            self._schemas[subject] = []
        
        self._schemas[subject].append(schema_version)
        self._schema_ids[schema_id] = schema
        
        return schema_id
    
    def get_schema(self, subject: str, version: Optional[int] = None) -> SchemaVersion:
        """获取模式"""
        if subject not in self._schemas:
            raise ValueError(f"未找到模式: {subject}")
        
        versions = self._schemas[subject]
        if version is None:
            return versions[-1]
        
        return versions[version - 1]
    
    def get_schema_by_id(self, schema_id: int) -> Optional[Dict[str, Any]]:
        """通过ID获取模式(不存在时返回None)"""
        return self._schema_ids.get(schema_id)
    
    def _check_compatibility(self, old_schema: Dict, new_schema: Dict, 
                            mode: CompatibilityMode) -> bool:
        """
        兼容性检查
        BACKWARD: 新增字段必须有默认值(新代码读旧数据时用默认值补齐)
        FORWARD: 不能删除无默认值的字段(旧代码读新数据时仍依赖该字段)
        """
        if mode == CompatibilityMode.NONE:
            return True
        
        old_fields = {f["name"]: f for f in old_schema.get("fields", [])}
        new_fields = {f["name"]: f for f in new_schema.get("fields", [])}
        
        if mode in [CompatibilityMode.BACKWARD, CompatibilityMode.FULL]:
            # 检查新字段是否有默认值
            for field_name, field in new_fields.items():
                if field_name not in old_fields:
                    if "default" not in field:
                        print(f"向后兼容失败: 新字段 {field_name} 无默认值")
                        return False
        
        if mode in [CompatibilityMode.FORWARD, CompatibilityMode.FULL]:
            # 检查是否删除了必填字段
            for field_name, field in old_fields.items():
                if field_name not in new_fields:
                    if "default" not in field:
                        print(f"向前兼容失败: 删除了必填字段 {field_name}")
                        return False
        
        return True


class AvroSerializer:
    """
    Avro序列化器
    支持模式演化的二进制序列化
    """
    
    def __init__(self, registry: LocalSchemaRegistry):
        self._registry = registry
    
    def encode(self, subject: str, data: Dict[str, Any], 
               version: Optional[int] = None) -> bytes:
        """
        编码数据
        格式: [magic byte][schema id][avro payload]
        """
        schema_version = self._registry.get_schema(subject, version)
        parsed = parse_schema(schema_version.schema)
        
        buf = io.BytesIO()
        # Magic byte
        buf.write(b'\x00')
        # Schema ID (4 bytes, big-endian)
        buf.write(schema_version.schema_id.to_bytes(4, byteorder='big'))
        # Avro data
        schemaless_writer(buf, parsed, data)
        
        return buf.getvalue()
    
    def decode(self, data: bytes) -> Dict[str, Any]:
        """
        解码数据
        自动根据schema_id获取对应模式
        """
        if len(data) < 5:
            raise ValueError("数据格式错误")
        
        if data[0] != 0:
            raise ValueError("不支持的magic byte")
        
        schema_id = int.from_bytes(data[1:5], byteorder='big')
        schema = self._registry.get_schema_by_id(schema_id)
        
        if not schema:
            raise ValueError(f"未知schema_id: {schema_id}")
        
        parsed = parse_schema(schema)
        buf = io.BytesIO(data[5:])
        
        return schemaless_reader(buf, parsed)
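
encode/decode 使用的是 Confluent 线格式(wire format):5 字节头 = 1 字节 magic(0x00)+ 4 字节大端 schema id,其后才是 Avro 二进制负载。头部本身不依赖 fastavro,可以单独演示其构造与解析:

```python
def pack_header(schema_id: int) -> bytes:
    """构造 Confluent wire format 头:magic byte + 4字节大端 schema id"""
    return b"\x00" + schema_id.to_bytes(4, byteorder="big")


def parse_header(data: bytes) -> int:
    """解析头部,返回 schema id"""
    if len(data) < 5 or data[0] != 0:
        raise ValueError("非法的 wire format 头")
    return int.from_bytes(data[1:5], byteorder="big")


header = pack_header(42)
print(len(header), parse_header(header))  # 5 42
```

消费端正是靠这 4 字节 id 回查注册表,才能在不知道生产者模式版本的情况下完成反序列化。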


# 预定义Avro模式
ORDER_EVENT_SCHEMA_V1 = {
    "type": "record",
    "name": "OrderEvent",
    "namespace": "com.example.order",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "customer_id", "type": "string"},
        {"name": "total_amount", "type": "double"},
        {"name": "status", "type": "string"},
        {"name": "created_at", "type": "string"}
    ]
}

# V2: 添加可选字段(向后兼容)
ORDER_EVENT_SCHEMA_V2 = {
    "type": "record",
    "name": "OrderEvent",
    "namespace": "com.example.order",
    "fields": [
        {"name": "order_id", "type": "string"},
        {"name": "customer_id", "type": "string"},
        {"name": "total_amount", "type": "double"},
        {"name": "status", "type": "string"},
        {"name": "created_at", "type": "string"},
        {"name": "coupon_code", "type": ["null", "string"], "default": None}  # 新增可选字段
    ]
}
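
以 V1/V2 两个模式为例,可以不依赖 fastavro 直接验证上文的 BACKWARD 规则:新增带默认值的字段(如 coupon_code)通过检查,而新增无默认值的必填字段会被拒绝(下面的字段名 discount 为示意):

```python
def backward_compatible(old_schema: dict, new_schema: dict) -> bool:
    """BACKWARD 检查:新模式中新增的字段必须带默认值"""
    old_fields = {f["name"] for f in old_schema["fields"]}
    for f in new_schema["fields"]:
        if f["name"] not in old_fields and "default" not in f:
            return False
    return True


v1 = {"fields": [{"name": "order_id", "type": "string"}]}
v2_ok = {"fields": [{"name": "order_id", "type": "string"},
                    {"name": "coupon_code", "type": ["null", "string"], "default": None}]}
v2_bad = {"fields": [{"name": "order_id", "type": "string"},
                     {"name": "discount", "type": "double"}]}  # 无默认值 → 不兼容

print(backward_compatible(v1, v2_ok))   # True
print(backward_compatible(v1, v2_bad))  # False
```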


if __name__ == "__main__":
    # 创建注册表
    registry = LocalSchemaRegistry()
    serializer = AvroSerializer(registry)
    
    # 注册V1模式
    v1_id = registry.register("order-events-value", ORDER_EVENT_SCHEMA_V1)
    print(f"注册V1模式, ID: {v1_id}")
    
    # 注册V2模式(向后兼容)
    v2_id = registry.register("order-events-value", ORDER_EVENT_SCHEMA_V2, 
                              CompatibilityMode.BACKWARD)
    print(f"注册V2模式, ID: {v2_id}")
    
    # 序列化V2数据
    event_v2 = {
        "order_id": "ORD-001",
        "customer_id": "CUST-001",
        "total_amount": 199.99,
        "status": "CREATED",
        "created_at": "2024-01-01T00:00:00Z",
        "coupon_code": "SAVE10"
    }
    
    encoded = serializer.encode("order-events-value", event_v2, version=2)
    print(f"序列化后大小: {len(encoded)} bytes")
    
    # 反序列化(自动识别版本)
    decoded = serializer.decode(encoded)
    print(f"反序列化结果: {decoded}")

2.3.1 Saga模式(Orchestration)

#!/usr/bin/env python3
"""
【2.3.1】Saga模式:订单-库存-支付编排型Saga实现(Orchestration)
内容:实现Saga编排器、状态机持久化、分布式事务协调
依赖:redis>=4.5, sqlalchemy>=2.0, tenacity>=8.0
"""

from __future__ import annotations
import uuid
import json
from typing import Dict, List, Optional, Callable, Any, Type
from dataclasses import dataclass, field, asdict
from datetime import datetime
from enum import Enum, auto
from abc import ABC, abstractmethod

import redis
from sqlalchemy import create_engine, Column, String, JSON, DateTime, Integer
from sqlalchemy.orm import declarative_base, sessionmaker


Base = declarative_base()


class SagaStatus(Enum):
    """Saga状态"""
    PENDING = "PENDING"
    RUNNING = "RUNNING"
    COMPENSATING = "COMPENSATING"
    COMPLETED = "COMPLETED"
    FAILED = "FAILED"


class SagaStepStatus(Enum):
    """Saga步骤状态"""
    PENDING = "PENDING"
    EXECUTING = "EXECUTING"
    SUCCEEDED = "SUCCEEDED"
    FAILED = "FAILED"
    COMPENSATING = "COMPENSATING"
    COMPENSATED = "COMPENSATED"


@dataclass
class SagaStep:
    """Saga步骤定义"""
    step_id: str
    name: str
    service: str
    action: str  # 正向操作
    compensation: Optional[str] = None  # 补偿操作
    status: SagaStepStatus = field(default=SagaStepStatus.PENDING)
    input_data: Dict = field(default_factory=dict)
    output_data: Dict = field(default_factory=dict)
    error_message: Optional[str] = None
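
编排型 Saga 的核心约定:正向步骤按顺序执行,一旦某步失败,对已成功的步骤按逆序执行补偿。下面是一个与消息中间件无关的最小草图(步骤名与 run_saga 均为示意,非上文编排器实现):

```python
from typing import Callable, List, Tuple

# 每个步骤:(名称, 正向操作, 补偿操作)
Step = Tuple[str, Callable[[], bool], Callable[[], None]]


def run_saga(steps: List[Step]) -> List[str]:
    """顺序执行正向操作;某步失败时,逆序补偿所有已成功步骤,返回执行轨迹"""
    trace, done = [], []
    for name, action, compensate in steps:
        if action():
            trace.append(f"do:{name}")
            done.append((name, compensate))
        else:
            trace.append(f"fail:{name}")
            for dname, comp in reversed(done):  # LIFO 补偿
                comp()
                trace.append(f"undo:{dname}")
            break
    return trace


trace = run_saga([
    ("create_order", lambda: True, lambda: None),
    ("reserve_inventory", lambda: True, lambda: None),
    ("charge_payment", lambda: False, lambda: None),  # 支付失败,触发补偿
])
print(trace)
# ['do:create_order', 'do:reserve_inventory', 'fail:charge_payment',
#  'undo:reserve_inventory', 'undo:create_order']
```

下文的 SagaOrchestrator 在此之上增加了持久化与命令发布:每步状态落库,正向/补偿操作通过 Kafka 命令异步触达各服务。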


class SagaInstance(Base):
    """Saga实例持久化模型"""
    __tablename__ = "saga_instances"
    
    id = Column(String(36), primary_key=True)
    saga_type = Column(String(100), nullable=False)
    status = Column(String(20), nullable=False)
    current_step = Column(Integer, default=0)
    steps = Column(JSON, default=list)
    context = Column(JSON, default=dict)  # 共享上下文
    started_at = Column(DateTime, default=datetime.now)
    completed_at = Column(DateTime, nullable=True)
    error_message = Column(String(500), nullable=True)


class CommandPublisher(ABC):
    """命令发布接口"""
    
    @abstractmethod
    def publish(self, service: str, command: str, payload: Dict, 
                correlation_id: str) -> bool:
        raise NotImplementedError


class KafkaCommandPublisher(CommandPublisher):
    """Kafka命令发布实现"""
    
    def publish(self, service: str, command: str, payload: Dict, 
                correlation_id: str) -> bool:
        # 实际应使用KafkaProducer
        print(f"[Kafka] 发送命令到 {service}: {command}, correlation={correlation_id}")
        return True


class SagaOrchestrator:
    """
    Saga编排器(Orchestration-based Saga)
    集中式协调分布式事务
    """
    
    def __init__(self, db_url: str, redis_client: redis.Redis,
                 command_publisher: CommandPublisher):
        self._engine = create_engine(db_url)
        Base.metadata.create_all(self._engine)
        self._session = sessionmaker(bind=self._engine)
        self._redis = redis_client
        self._publisher = command_publisher
        self._saga_definitions: Dict[str, List[SagaStep]] = {}
    
    def define_saga(self, saga_type: str, steps: List[SagaStep]) -> None:
        """定义Saga流程"""
        self._saga_definitions[saga_type] = steps
    
    def start_saga(self, saga_type: str, initial_context: Dict) -> str:
        """
        启动Saga实例
        返回Saga ID
        """
        if saga_type not in self._saga_definitions:
            raise ValueError(f"未定义的Saga类型: {saga_type}")
        
        saga_id = str(uuid.uuid4())
        steps_def = self._saga_definitions[saga_type]
        
        # 创建步骤实例
        steps = [
            SagaStep(
                step_id=str(uuid.uuid4()),
                name=step.name,
                service=step.service,
                action=step.action,
                compensation=step.compensation,
                input_data={}
            )
            for step in steps_def
        ]
        
        # 持久化(SagaStepStatus枚举需转为字符串才能写入JSON列)
        session = self._session()
        instance = SagaInstance(
            id=saga_id,
            saga_type=saga_type,
            status=SagaStatus.RUNNING.value,
            current_step=0,
            steps=[{**asdict(s), "status": s.status.value} for s in steps],
            context=initial_context
        )
        session.add(instance)
        session.commit()
        session.close()
        
        # 开始执行第一步
        self._execute_step(saga_id, 0)
        
        return saga_id
    
    def _execute_step(self, saga_id: str, step_index: int) -> None:
        """执行指定步骤"""
        session = self._session()
        instance = session.get(SagaInstance, saga_id)  # SQLAlchemy 2.0风格主键查询
        
        if not instance or instance.status == SagaStatus.FAILED.value:
            session.close()
            return
        
        # status在JSON中存为字符串,反序列化时还原为枚举
        steps = [SagaStep(**{**s, "status": SagaStepStatus(s["status"])})
                 for s in instance.steps]
        
        if step_index >= len(steps):
            # 所有步骤执行完毕,Saga完成
            instance.status = SagaStatus.COMPLETED.value
            instance.completed_at = datetime.now()
            session.commit()
            session.close()
            print(f"Saga {saga_id} 完成")
            return
        
        step = steps[step_index]
        step.status = SagaStepStatus.EXECUTING
        step.input_data = instance.context
        
        # JSON列需整体重新赋值,SQLAlchemy才能检测到变更
        new_steps = list(instance.steps)
        new_steps[step_index] = {**asdict(step), "status": step.status.value}
        instance.steps = new_steps
        session.commit()
        
        # 发送命令到目标服务
        success = self._publisher.publish(
            service=step.service,
            command=step.action,
            payload={
                "saga_id": saga_id,
                "step_id": step.step_id,
                "step_index": step_index,
                "input": step.input_data
            },
            correlation_id=saga_id
        )
        
        if not success:
            self._handle_step_failure(saga_id, step_index, "命令发送失败")
        
        session.close()
    
    def handle_step_result(self, saga_id: str, step_index: int, 
                          success: bool, result: Dict, error: Optional[str] = None) -> None:
        """
        处理步骤执行结果(由服务回调)
        """
        session = self._session()
        instance = session.get(SagaInstance, saga_id)  # SQLAlchemy 2.0风格主键查询
        
        if not instance:
            session.close()
            return
        
        # status在JSON中存为字符串,反序列化时还原为枚举
        steps = [SagaStep(**{**s, "status": SagaStepStatus(s["status"])})
                 for s in instance.steps]
        step = steps[step_index]
        
        if success:
            # 步骤成功:更新上下文,执行下一步
            step.status = SagaStepStatus.SUCCEEDED
            step.output_data = result
            
            # JSON列需整体重新赋值,SQLAlchemy才能检测到变更
            instance.context = {**instance.context, **result}  # 合并结果到上下文
            new_steps = list(instance.steps)
            new_steps[step_index] = {**asdict(step), "status": step.status.value}
            instance.steps = new_steps
            instance.current_step = step_index + 1
            session.commit()
            session.close()
            
            self._execute_step(saga_id, step_index + 1)
        else:
            # 步骤失败:记录错误并触发补偿
            step.status = SagaStepStatus.FAILED
            step.error_message = error
            new_steps = list(instance.steps)
            new_steps[step_index] = {**asdict(step), "status": step.status.value}
            instance.steps = new_steps
            session.commit()
            session.close()
            
            self._handle_step_failure(saga_id, step_index, error)
    
    def _handle_step_failure(self, saga_id: str, failed_step_index: int, 
                            error: str) -> None:
        """处理步骤失败,触发补偿"""
        session = self._session()
        instance = session.get(SagaInstance, saga_id)
        if not instance:
            session.close()
            return
        
        instance.status = SagaStatus.COMPENSATING.value
        instance.error_message = error
        session.commit()
        
        steps = [SagaStep(**{**s, "status": SagaStepStatus(s["status"])})
                 for s in instance.steps]
        
        # 逆序补偿已完成的步骤
        for i in range(failed_step_index - 1, -1, -1):
            step = steps[i]
            if step.status == SagaStepStatus.SUCCEEDED and step.compensation:
                step.status = SagaStepStatus.COMPENSATING
                # JSON列整体重新赋值,确保变更被持久化
                new_steps = list(instance.steps)
                new_steps[i] = {**asdict(step), "status": step.status.value}
                instance.steps = new_steps
                session.commit()
                
                # 发送补偿命令(简化实现:不等待补偿结果回执)
                self._publisher.publish(
                    service=step.service,
                    command=step.compensation,
                    payload={
                        "saga_id": saga_id,
                        "step_id": step.step_id,
                        "original_output": step.output_data
                    },
                    correlation_id=saga_id
                )
        
        instance.status = SagaStatus.FAILED.value
        instance.completed_at = datetime.now()
        session.commit()
        session.close()
        
        print(f"Saga {saga_id} 已失败并补偿完成")


# 使用示例:订单-库存-支付Saga
if __name__ == "__main__":
    redis_client = redis.Redis()
    publisher = KafkaCommandPublisher()
    orchestrator = SagaOrchestrator("sqlite:///saga.db", redis_client, publisher)
    
    # 定义Saga流程
    order_saga = [
        SagaStep(
            step_id="step-1",
            name="CreateOrder",
            service="order-service",
            action="create_order",
            compensation="cancel_order"
        ),
        SagaStep(
            step_id="step-2", 
            name="ReserveInventory",
            service="inventory-service",
            action="reserve_inventory",
            compensation="release_inventory"
        ),
        SagaStep(
            step_id="step-3",
            name="ProcessPayment", 
            service="payment-service",
            action="charge_payment",
            compensation="refund_payment"
        ),
        SagaStep(
            step_id="step-4",
            name="ConfirmOrder",
            service="order-service",
            action="confirm_order"
        )
    ]
    
    orchestrator.define_saga("order-processing", order_saga)
    
    # 启动Saga
    saga_id = orchestrator.start_saga("order-processing", {
        "customer_id": "CUST-001",
        "items": [{"sku": "SKU-001", "qty": 2, "price": 99.99}],
        "total": 199.98
    })
    
    print(f"Saga已启动: {saga_id}")
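
编排器之外,Saga"正向顺序执行、失败后逆序补偿"的核心控制流可以剥离Kafka与数据库,用一个纯内存的极简草图验证(步骤名与lambda均为演示假设,并非上文编排器的实际接口):

```python
from typing import Callable, Dict, List, Tuple

# 步骤三元组:(名称, 正向操作, 补偿操作)
Step = Tuple[str, Callable[[Dict], None], Callable[[Dict], None]]

def run_saga(steps: List[Step], context: Dict) -> bool:
    """顺序执行各步骤;任一步骤抛异常时,逆序补偿所有已成功的步骤"""
    done: List[Step] = []  # 已成功步骤,供补偿时逆序遍历
    for name, action, compensation in steps:
        try:
            action(context)
            done.append((name, action, compensation))
        except Exception:
            for _, _, comp in reversed(done):  # 逆序补偿
                comp(context)
            return False
    return True

# 演示:支付步骤失败,触发库存与订单的逆序补偿
log: List[str] = []

def fail(_ctx: Dict) -> None:
    raise RuntimeError("支付失败")

steps: List[Step] = [
    ("create_order", lambda c: log.append("order+"), lambda c: log.append("order-")),
    ("reserve_stock", lambda c: log.append("stock+"), lambda c: log.append("stock-")),
    ("charge_payment", fail, lambda c: log.append("pay-")),
]
print(run_saga(steps, {}), log)  # False ['order+', 'stock+', 'stock-', 'order-']
```

注意补偿顺序与执行顺序相反,且失败步骤自身不补偿——这与上文 `_handle_step_failure` 从 `failed_step_index - 1` 开始逆序遍历的逻辑一致。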

2.3.2 补偿事务设计

#!/usr/bin/env python3
"""
【2.3.2】补偿事务设计:库存回滚逻辑与订单状态机逆向转换
内容:实现补偿操作幂等性、状态机逆向、补偿日志
依赖:sqlalchemy>=2.0, redis>=4.5
"""

from __future__ import annotations
import uuid
from typing import Dict, Optional, List, Callable, Any
from dataclasses import dataclass
from datetime import datetime
from enum import Enum
from abc import ABC, abstractmethod

from sqlalchemy import create_engine, Column, String, JSON, DateTime
from sqlalchemy.orm import declarative_base, sessionmaker


Base = declarative_base()


class CompensationStatus(Enum):
    """补偿状态"""
    PENDING = "PENDING"
    EXECUTING = "EXECUTING"
    SUCCEEDED = "SUCCEEDED"
    FAILED = "FAILED"
    SKIPPED = "SKIPPED"


@dataclass
class CompensationLogEntry:
    """补偿日志条目"""
    compensation_id: str
    saga_id: str
    step_name: str
    original_action: str
    compensation_action: str
    status: CompensationStatus
    input_data: Dict[str, Any]
    output_data: Dict[str, Any]
    error_message: Optional[str]
    created_at: datetime
    completed_at: Optional[datetime] = None


class CompensationLog(Base):
    """补偿日志持久化"""
    __tablename__ = "compensation_logs"
    
    id = Column(String(36), primary_key=True)
    saga_id = Column(String(36), nullable=False, index=True)
    step_name = Column(String(100), nullable=False)
    original_action = Column(String(100))
    compensation_action = Column(String(100))
    status = Column(String(20), nullable=False)
    input_data = Column(JSON)
    output_data = Column(JSON)
    error_message = Column(String(500))
    created_at = Column(DateTime, default=datetime.now)
    completed_at = Column(DateTime, nullable=True)


class CompensatableAction(ABC):
    """
    可补偿动作抽象
    每个正向操作必须定义对应的补偿操作
    """
    
    @abstractmethod
    def execute(self, context: Dict[str, Any]) -> Dict[str, Any]:
        """执行正向操作"""
        raise NotImplementedError
    
    @abstractmethod
    def compensate(self, execution_result: Dict[str, Any], 
                   context: Dict[str, Any]) -> bool:
        """
        执行补偿
        必须幂等:多次执行结果一致
        """
        raise NotImplementedError
    
    @abstractmethod
    def get_action_name(self) -> str:
        raise NotImplementedError


class OrderCreationAction(CompensatableAction):
    """订单创建动作与补偿"""
    
    def __init__(self, db_session):
        self._session = db_session
    
    def execute(self, context: Dict[str, Any]) -> Dict[str, Any]:
        order_id = str(uuid.uuid4())
        # 创建订单逻辑...
        return {"order_id": order_id, "status": "CREATED"}
    
    def compensate(self, execution_result: Dict[str, Any], 
                   context: Dict[str, Any]) -> bool:
        """补偿:取消订单(幂等操作)"""
        order_id = execution_result.get("order_id")
        
        # 一次查询同时完成幂等性检查与取消(SQLAlchemy 2.0风格主键查询)
        order = self._session.get(Order, order_id)
        if order is None:
            return True  # 订单不存在,视为无需补偿
        if order.status == "CANCELLED":
            return True  # 已补偿过,幂等返回
        
        # 执行取消
        order.status = "CANCELLED"
        order.cancelled_at = datetime.now()
        order.cancellation_reason = "Saga compensation"
        self._session.commit()
        
        return True
    
    def get_action_name(self) -> str:
        return "CREATE_ORDER"


class InventoryReservationAction(CompensatableAction):
    """库存预留与补偿"""
    
    def __init__(self, redis_client):
        self._redis = redis_client
    
    def execute(self, context: Dict[str, Any]) -> Dict[str, Any]:
        sku = context["sku"]
        qty = context["quantity"]
        reservation_id = str(uuid.uuid4())
        
        # 扣减库存
        self._redis.hincrby(f"inventory:{sku}", "reserved", qty)
        self._redis.hset(f"reservation:{reservation_id}", mapping={
            "sku": sku,
            "qty": qty,
            "status": "ACTIVE"
        })
        
        return {"reservation_id": reservation_id, "sku": sku, "qty": qty}
    
    def compensate(self, execution_result: Dict[str, Any], 
                   context: Dict[str, Any]) -> bool:
        """
        补偿:释放库存(幂等)
        """
        reservation_id = execution_result.get("reservation_id")
        
        # 检查是否已释放(幂等性)
        status = self._redis.hget(f"reservation:{reservation_id}", "status")
        if status == b"RELEASED":
            return True
        
        # 释放库存
        sku = execution_result["sku"]
        qty = execution_result["qty"]
        
        self._redis.hincrby(f"inventory:{sku}", "reserved", -qty)
        self._redis.hset(f"reservation:{reservation_id}", "status", "RELEASED")
        
        return True
    
    def get_action_name(self) -> str:
        return "RESERVE_INVENTORY"


class CompensationManager:
    """
    补偿事务管理器
    协调补偿操作执行与日志记录
    """
    
    def __init__(self, db_url: str):
        self._engine = create_engine(db_url)
        Base.metadata.create_all(self._engine)
        self._session_factory = sessionmaker(bind=self._engine)
        self._actions: Dict[str, CompensatableAction] = {}
    
    def register_action(self, action: CompensatableAction) -> None:
        """注册可补偿动作"""
        self._actions[action.get_action_name()] = action
    
    def execute_compensation(self, saga_id: str, action_name: str,
                            original_result: Dict[str, Any],
                            context: Dict[str, Any]) -> bool:
        """
        执行补偿操作
        包含幂等性检查与日志记录
        """
        action = self._actions.get(action_name)
        if not action:
            raise ValueError(f"未找到动作: {action_name}")
        
        session = self._session_factory()
        
        # 检查是否已补偿(幂等性)
        existing = session.query(CompensationLog).filter_by(
            saga_id=saga_id,
            step_name=action_name,
            status=CompensationStatus.SUCCEEDED.value
        ).first()
        
        if existing:
            print(f"补偿已执行过,跳过: {action_name}")
            return True
        
        # 创建补偿日志
        comp_id = str(uuid.uuid4())
        log_entry = CompensationLog(
            id=comp_id,
            saga_id=saga_id,
            step_name=action_name,
            original_action=action_name,
            compensation_action=f"COMPENSATE_{action_name}",
            status=CompensationStatus.EXECUTING.value,
            input_data=context,
            output_data=original_result
        )
        session.add(log_entry)
        session.commit()
        
        try:
            # 执行补偿
            success = action.compensate(original_result, context)
            
            if success:
                log_entry.status = CompensationStatus.SUCCEEDED.value
                log_entry.completed_at = datetime.now()
            else:
                log_entry.status = CompensationStatus.FAILED.value
                log_entry.error_message = "补偿操作返回失败"
            
            session.commit()
            return success
            
        except Exception as e:
            log_entry.status = CompensationStatus.FAILED.value
            log_entry.error_message = str(e)
            session.commit()
            raise
        finally:
            session.close()
    
    def get_compensation_status(self, saga_id: str) -> List[CompensationLogEntry]:
        """获取Saga的补偿状态"""
        session = self._session_factory()
        logs = session.query(CompensationLog).filter_by(saga_id=saga_id).all()
        
        return [
            CompensationLogEntry(
                compensation_id=log.id,
                saga_id=log.saga_id,
                step_name=log.step_name,
                original_action=log.original_action,
                compensation_action=log.compensation_action,
                status=CompensationStatus(log.status),
                input_data=log.input_data,
                output_data=log.output_data,
                error_message=log.error_message,
                created_at=log.created_at,
                completed_at=log.completed_at
            )
            for log in logs
        ]


# Order模型(简化)
class Order(Base):
    __tablename__ = "orders"
    
    id = Column(String(36), primary_key=True)
    status = Column(String(20))
    cancelled_at = Column(DateTime, nullable=True)
    cancellation_reason = Column(String(200), nullable=True)


if __name__ == "__main__":
    import redis
    
    # 初始化
    manager = CompensationManager("sqlite:///compensation.db")
    redis_client = redis.Redis()
    
    # 注册动作
    manager.register_action(InventoryReservationAction(redis_client))
    
    # 模拟补偿
    result = manager.execute_compensation(
        saga_id="saga-001",
        action_name="RESERVE_INVENTORY",
        original_result={"reservation_id": "res-123", "sku": "SKU-001", "qty": 10},
        context={"sku": "SKU-001", "quantity": 10}
    )
    
    print(f"补偿执行结果: {'成功' if result else '失败'}")
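
补偿幂等性的关键是"先查状态、再变更":同一补偿被重复投递时,副作用只发生一次。下面用一个纯内存的订单取消草图说明这一点(Order结构与字段为演示假设):

```python
from dataclasses import dataclass

@dataclass
class Order:
    id: str
    status: str = "CREATED"
    cancel_count: int = 0  # 记录真正执行取消的次数,仅用于演示副作用

def cancel_order(order: Order) -> bool:
    """幂等补偿:先查状态,已取消则直接返回成功,不重复产生副作用"""
    if order.status == "CANCELLED":
        return True  # 已补偿过,跳过
    order.status = "CANCELLED"
    order.cancel_count += 1
    return True

order = Order(id="ORD-1")
assert cancel_order(order) and cancel_order(order)  # 模拟补偿被重复投递
print(order.status, order.cancel_count)  # CANCELLED 1 —— 副作用只发生一次
```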

2.3.3 幂等消费者

#!/usr/bin/env python3
"""
【2.3.3】幂等消费者:基于唯一键去重与Redis幂等缓存
内容:实现幂等键生成、去重窗口、去重存储策略
依赖:redis>=4.5(布隆过滤器部分需Redis服务端加载RedisBloom模块)
"""

from __future__ import annotations
import hashlib
import json
from typing import Dict, Optional, Set, Callable, Any
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from abc import ABC, abstractmethod

import redis


class IdempotencyStrategy(Enum):
    """幂等策略"""
    EXACTLY_ONCE = "exactly_once"  # 恰好一次(严格幂等)
    AT_LEAST_ONCE = "at_least_once"  # 至少一次(宽松)


@dataclass(frozen=True)
class IdempotencyKey:
    """幂等键"""
    message_id: str
    consumer_group: str
    
    def to_string(self) -> str:
        return f"{self.consumer_group}:{self.message_id}"


class IdempotencyStore(ABC):
    """幂等存储抽象"""
    
    @abstractmethod
    def exists(self, key: IdempotencyKey) -> bool:
        """检查键是否存在"""
        raise NotImplementedError
    
    @abstractmethod
    def save(self, key: IdempotencyKey, result: Any, ttl: int = 86400) -> bool:
        """保存处理结果"""
        raise NotImplementedError
    
    @abstractmethod
    def get_result(self, key: IdempotencyKey) -> Optional[Any]:
        """获取已处理结果"""
        raise NotImplementedError


class RedisIdempotencyStore(IdempotencyStore):
    """
    Redis幂等存储实现
    使用SETNX实现分布式去重
    """
    
    def __init__(self, redis_client: redis.Redis, key_prefix: str = "idempotency"):
        self._redis = redis_client
        self._prefix = key_prefix
    
    def _build_key(self, key: IdempotencyKey) -> str:
        return f"{self._prefix}:{key.to_string()}"
    
    def exists(self, key: IdempotencyKey) -> bool:
        """检查是否已处理(SETNX原子操作)"""
        redis_key = self._build_key(key)
        # 尝试设置NX(仅不存在时设置)
        result = self._redis.set(redis_key, "PROCESSING", nx=True, ex=86400)
        return result is None  # None表示键已存在
    
    def save(self, key: IdempotencyKey, result: Any, ttl: int = 86400) -> bool:
        """保存处理结果"""
        redis_key = self._build_key(key)
        value = json.dumps({
            "status": "COMPLETED",
            "result": result,
            "processed_at": datetime.now().isoformat()
        })
        return self._redis.set(redis_key, value, ex=ttl)
    
    def get_result(self, key: IdempotencyKey) -> Optional[Any]:
        """获取已处理结果"""
        redis_key = self._build_key(key)
        value = self._redis.get(redis_key)
        if value:
            data = json.loads(value)
            return data.get("result")
        return None
    
    def mark_failed(self, key: IdempotencyKey, error: str, ttl: int = 3600) -> None:
        """标记处理失败(允许重试)"""
        redis_key = self._build_key(key)
        value = json.dumps({
            "status": "FAILED",
            "error": error,
            "failed_at": datetime.now().isoformat()
        })
        self._redis.set(redis_key, value, ex=ttl)


class BloomFilterIdempotencyStore(IdempotencyStore):
    """
    布隆过滤器幂等存储
    内存高效,允许少量误判(可接受场景)
    """
    
    def __init__(self, redis_client: redis.Redis, 
                 expected_items: int = 1000000, 
                 false_positive_rate: float = 0.001):
        self._redis = redis_client
        self._key = "idempotency:bloomfilter"
        self._expected_items = expected_items
        self._fp_rate = false_positive_rate
        # 初始化布隆过滤器(需RedisBloom模块;过滤器已存在时BF.RESERVE会报错)
        try:
            self._redis.execute_command(
                "BF.RESERVE", self._key, self._fp_rate, self._expected_items
            )
        except redis.ResponseError:
            pass  # 已初始化过,忽略
    
    def exists(self, key: IdempotencyKey) -> bool:
        """布隆过滤器检查(可能误判为已存在;BF.EXISTS返回0/1,转为bool)"""
        return bool(self._redis.execute_command(
            "BF.EXISTS",
            self._key,
            key.to_string()
        ))
    
    def save(self, key: IdempotencyKey, result: Any, ttl: int = 86400) -> bool:
        """添加到布隆过滤器"""
        self._redis.execute_command("BF.ADD", self._key, key.to_string())
        return True
    
    def get_result(self, key: IdempotencyKey) -> Optional[Any]:
        """布隆过滤器不存储结果,需配合其他存储"""
        return None


class IdempotentConsumer:
    """
    幂等消费者装饰器
    包装消息处理器,提供自动去重
    """
    
    def __init__(self, 
                 store: IdempotencyStore, 
                 strategy: IdempotencyStrategy = IdempotencyStrategy.EXACTLY_ONCE,
                 id_extractor: Optional[Callable[[Any], str]] = None):
        self._store = store
        self._strategy = strategy
        self._id_extractor = id_extractor or self._default_id_extractor
        self._consumer_group = "default-group"
    
    def _default_id_extractor(self, message: Any) -> str:
        """默认ID提取:从消息中提取唯一标识"""
        if isinstance(message, dict):
            # 优先使用业务ID
            return message.get("message_id") or \
                   message.get("event_id") or \
                   message.get("id") or \
                   self._hash_message(message)
        return self._hash_message(message)
    
    def _hash_message(self, message: Any) -> str:
        """计算消息哈希"""
        content = json.dumps(message, sort_keys=True).encode()
        return hashlib.sha256(content).hexdigest()[:16]
    
    def set_consumer_group(self, group: str) -> None:
        """设置消费者组"""
        self._consumer_group = group
    
    def process(self, message: Any, handler: Callable[[Any], Any]) -> Any:
        """
        处理消息(幂等包装)
        """
        msg_id = self._id_extractor(message)
        key = IdempotencyKey(msg_id, self._consumer_group)
        
        # 检查是否已处理
        if self._store.exists(key):
            print(f"消息已处理,跳过: {msg_id}")
            
            if self._strategy == IdempotencyStrategy.EXACTLY_ONCE:
                # 返回之前的结果(缓存)
                cached = self._store.get_result(key)
                if cached is not None:
                    return cached
                else:
                    # 无缓存结果,需重新处理或等待
                    raise Exception(f"消息处理中或结果丢失: {msg_id}")
            else:
                # AT_LEAST_ONCE策略:直接跳过
                return None
        
        try:
            # 执行处理
            result = handler(message)
            
            # 保存结果
            self._store.save(key, result)
            
            return result
            
        except Exception as e:
            # 标记失败(允许重试);mark_failed仅Redis实现提供,非抽象接口的一部分
            if hasattr(self._store, "mark_failed"):
                self._store.mark_failed(key, str(e))
            raise


class IdempotentKafkaConsumer:
    """
    Kafka幂等消费者
    结合消费者组与幂等键
    """
    
    def __init__(self, store: IdempotencyStore, consumer_group: str):
        self._store = store
        self._consumer_group = consumer_group
        self._idempotent_wrapper = IdempotentConsumer(store)
        self._idempotent_wrapper.set_consumer_group(consumer_group)
        self._handlers: Dict[str, Callable] = {}
    
    def register_handler(self, event_type: str, handler: Callable[[Any], Any]) -> None:
        """注册事件处理器"""
        self._handlers[event_type] = handler
    
    def handle_message(self, message: Dict[str, Any]) -> Any:
        """
        处理消息入口
        提取幂等键并去重
        """
        event_type = message.get("event_type")
        handler = self._handlers.get(event_type)
        
        if not handler:
            print(f"未找到处理器: {event_type}")
            return None
        
        # 幂等处理
        return self._idempotent_wrapper.process(message, handler)


# 使用示例
if __name__ == "__main__":
    redis_client = redis.Redis()
    store = RedisIdempotencyStore(redis_client)
    
    consumer = IdempotentKafkaConsumer(store, "order-service-consumers")
    
    def handle_inventory_reserved(message: Dict) -> Dict:
        print(f"处理库存预留: {message}")
        return {"status": "SUCCESS", "order_id": message.get("order_id")}
    
    consumer.register_handler("InventoryReserved", handle_inventory_reserved)
    
    # 模拟重复消息
    message = {
        "event_type": "InventoryReserved",
        "message_id": "msg-001",
        "order_id": "ORD-123",
        "sku": "SKU-001",
        "qty": 10
    }
    
    # 第一次处理
    result1 = consumer.handle_message(message)
    print(f"第一次处理结果: {result1}")
    
    # 重复处理(应被去重)
    result2 = consumer.handle_message(message)
    print(f"重复处理结果: {result2}")
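
RedisIdempotencyStore 的"SETNX占位 → 处理 → 写回结果"协议,可以用一个字典后端的最小实现单独验证其去重语义(存储后端为演示假设,接口与上文一致):

```python
import json
from typing import Any, Dict, Optional

class InMemoryIdempotencyStore:
    """与上文RedisIdempotencyStore同协议:首次claim成功,重复claim被拒"""
    def __init__(self) -> None:
        self._data: Dict[str, str] = {}

    def exists(self, key: str) -> bool:
        # 模拟 SET key PROCESSING NX:仅当键不存在时写入占位符
        if key in self._data:
            return True
        self._data[key] = "PROCESSING"
        return False

    def save(self, key: str, result: Any) -> None:
        self._data[key] = json.dumps({"status": "COMPLETED", "result": result})

    def get_result(self, key: str) -> Optional[Any]:
        value = self._data.get(key)
        if value and value != "PROCESSING":
            return json.loads(value)["result"]
        return None

store = InMemoryIdempotencyStore()
calls = []

def handle(msg: str) -> Dict:
    calls.append(msg)  # 记录业务处理实际执行的次数
    return {"ok": True}

for _ in range(2):  # 模拟同一条消息被重复投递
    key = "grp:msg-001"
    if store.exists(key):
        result = store.get_result(key)  # 命中幂等缓存,直接复用旧结果
    else:
        result = handle("payload")
        store.save(key, result)

print(len(calls))  # 1 —— 业务处理只执行了一次
```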

2.3.4 死信队列(DLQ)

#!/usr/bin/env python3
"""
【2.3.4】死信队列(DLQ):消费失败重试机制与人工介入告警
内容:实现重试策略、指数退避、死信处理、告警通知
依赖:tenacity>=8.0, kafka-python>=2.0
"""

from __future__ import annotations
import json
from typing import Dict, Any, Optional, Callable, List
from dataclasses import dataclass, field
from datetime import datetime, timedelta
from enum import Enum

from tenacity import retry, stop_after_attempt, wait_exponential, retry_if_exception_type
from kafka import KafkaProducer, KafkaConsumer


class RetryPolicy(Enum):
    """重试策略"""
    IMMEDIATE = "immediate"  # 立即重试
    FIXED_DELAY = "fixed_delay"  # 固定延迟
    EXPONENTIAL_BACKOFF = "exponential_backoff"  # 指数退避
    NO_RETRY = "no_retry"  # 不重试


@dataclass
class FailedMessage:
    """失败消息"""
    original_topic: str
    partition: int
    offset: int
    key: Optional[str]
    value: Dict[str, Any]
    exception_type: str
    exception_message: str
    retry_count: int = 0
    failed_at: datetime = field(default_factory=datetime.now)
    last_retry_at: Optional[datetime] = None
    next_retry_at: Optional[datetime] = None


class DeadLetterQueue:
    """
    死信队列管理器
    管理重试队列与死信队列
    """
    
    RETRY_TOPICS = ["retry-1", "retry-2", "retry-3"]  # 三级重试
    DLQ_TOPIC = "dead-letter-queue"
    
    def __init__(self, bootstrap_servers: str):
        self._producer = KafkaProducer(
            bootstrap_servers=bootstrap_servers,
            value_serializer=lambda v: json.dumps(v).encode()
        )
        self._retry_delays = [60, 300, 1800]  # 1分钟、5分钟、30分钟
    
    def schedule_retry(self, message: FailedMessage) -> bool:
        """
        调度重试
        根据重试次数选择延迟级别
        """
        if message.retry_count >= len(self.RETRY_TOPICS):
            # 超过最大重试次数,进入DLQ
            self._send_to_dlq(message)
            return False
        
        retry_topic = self.RETRY_TOPICS[message.retry_count]
        delay_seconds = self._retry_delays[message.retry_count]
        
        # 计算下次重试时间
        message.retry_count += 1
        message.last_retry_at = datetime.now()
        message.next_retry_at = datetime.now() + timedelta(seconds=delay_seconds)
        
        # 发送到重试主题(实际应使用延迟队列或调度器)
        # 简化实现:直接发送,消费者根据next_retry_at过滤
        self._producer.send(
            retry_topic,
            key=message.key.encode() if message.key else None,
            value=self._serialize_failed_message(message)
        )
        
        print(f"消息调度至重试队列 {retry_topic}, 延迟 {delay_seconds} 秒")
        return True
    
    def _send_to_dlq(self, message: FailedMessage) -> None:
        """发送到死信队列"""
        self._producer.send(
            self.DLQ_TOPIC,
            key=message.key.encode() if message.key else None,
            value=self._serialize_failed_message(message)
        )
        
        # 触发告警
        self._alert_ops(message)
        
        print(f"消息进入死信队列: {message.exception_type}")
    
    def _serialize_failed_message(self, message: FailedMessage) -> Dict:
        """序列化失败消息"""
        return {
            "original_topic": message.original_topic,
            "partition": message.partition,
            "offset": message.offset,
            "key": message.key,
            "value": message.value,
            "exception": {
                "type": message.exception_type,
                "message": message.exception_message
            },
            "retry_count": message.retry_count,
            "failed_at": message.failed_at.isoformat(),
            "last_retry_at": message.last_retry_at.isoformat() if message.last_retry_at else None
        }
    
    def _alert_ops(self, message: FailedMessage) -> None:
        """运维告警"""
        alert = {
            "severity": "HIGH",
            "type": "MESSAGE_DEAD_LETTER",
            "message": f"消息进入死信队列: {message.exception_type}",
            "details": {
                "original_topic": message.original_topic,
                "retry_count": message.retry_count,
                "error": message.exception_message
            },
            "timestamp": datetime.now().isoformat()
        }
        # 实际应发送到PagerDuty、Slack等
        print(f"[ALERT] {json.dumps(alert, indent=2)}")


class RetryableConsumer:
    """
    可重试消费者
    集成重试逻辑与死信队列
    """
    
    def __init__(self, dlq: DeadLetterQueue, max_retries: int = 3):
        self._dlq = dlq
        self._max_retries = max_retries
        self._handlers: Dict[str, Callable] = {}
    
    def register_handler(self, event_type: str, 
                         handler: Callable[[Dict], Any],
                         retry_policy: RetryPolicy = RetryPolicy.EXPONENTIAL_BACKOFF) -> None:
        """注册处理器"""
        self._handlers[event_type] = {
            "handler": handler,
            "policy": retry_policy
        }
    
    def process_with_retry(self, msg) -> bool:
        """
        处理消息(带重试)
        返回True表示成功,False表示最终失败(已入DLQ)
        """
        value = json.loads(msg.value.decode())
        event_type = value.get("event_type")
        
        config = self._handlers.get(event_type)
        if not config:
            print(f"未找到处理器: {event_type}")
            return True  # 跳过
        
        handler = config["handler"]
        policy = config["policy"]
        
        # 构建tenacity重试装饰器
        if policy == RetryPolicy.EXPONENTIAL_BACKOFF:
            @retry(
                stop=stop_after_attempt(self._max_retries),
                wait=wait_exponential(multiplier=1, min=4, max=60),
                retry=retry_if_exception_type(Exception),
                reraise=True,  # 重试耗尽后抛出原始异常,而非tenacity的RetryError
                before_sleep=lambda retry_state: print(
                    f"重试 {retry_state.attempt_number}/{self._max_retries}, "
                    f"下次等待 {retry_state.next_action.sleep} 秒"
                )
            )
            def execute():
                return handler(value)
            
            try:
                result = execute()
                return True
            except Exception as e:
                # 最终失败,进入DLQ
                failed_msg = FailedMessage(
                    original_topic=msg.topic,
                    partition=msg.partition,
                    offset=msg.offset,
                    key=msg.key.decode() if msg.key else None,
                    value=value,
                    exception_type=type(e).__name__,
                    exception_message=str(e)
                )
                self._dlq.schedule_retry(failed_msg)
                return False
        else:
            # 其他策略(IMMEDIATE/FIXED_DELAY/NO_RETRY)从略:此处不重试,仅执行一次
            try:
                handler(value)
                return True
            except Exception as e:
                self._dlq.schedule_retry(FailedMessage(
                    original_topic=msg.topic,
                    partition=msg.partition,
                    offset=msg.offset,
                    key=msg.key.decode() if msg.key else None,
                    value=value,
                    exception_type=type(e).__name__,
                    exception_message=str(e)
                ))
                return False


class DLQReplayer:
    """
    DLQ重放工具
    支持人工干预后重新处理
    """
    
    def __init__(self, dlq_topic: str, bootstrap_servers: str):
        self._consumer = KafkaConsumer(
            dlq_topic,
            bootstrap_servers=bootstrap_servers,
            auto_offset_reset="earliest",
            enable_auto_commit=False,  # 人工干预场景不自动提交偏移量
            consumer_timeout_ms=5000,  # 无新消息时迭代自动结束,避免永久阻塞
            value_deserializer=lambda m: json.loads(m.decode())
        )
        self._producer = KafkaProducer(
            bootstrap_servers=bootstrap_servers,
            value_serializer=lambda v: json.dumps(v).encode()
        )
    
    def list_dead_messages(self, limit: int = 100) -> List[Dict]:
        """列出死信消息"""
        messages = []
        for msg in self._consumer:
            messages.append(msg.value)
            if len(messages) >= limit:
                break
        return messages
    
    def replay_message(self, message: Dict, target_topic: str) -> bool:
        """
        重放消息
        人工修复后重新发送到原始主题
        """
        # 标记为重放
        message["_replayed"] = True
        message["_replayed_at"] = datetime.now().isoformat()
        
        self._producer.send(target_topic, value=message)
        return True
    
    def skip_message(self, message: Dict, reason: str) -> None:
        """跳过消息(记录审计)"""
        # 记录到审计日志
        audit = {
            "action": "SKIP",
            "message": message,
            "reason": reason,
            "timestamp": datetime.now().isoformat()
        }
        print(f"消息已跳过: {audit}")


if __name__ == "__main__":
    dlq = DeadLetterQueue("localhost:9092")
    
    def risky_handler(message: Dict) -> Any:
        if message.get("should_fail"):
            raise Exception("模拟处理失败")
        return {"status": "SUCCESS"}
    
    consumer = RetryableConsumer(dlq)
    consumer.register_handler("RiskyEvent", risky_handler)
    
    # 模拟消息处理
    class FakeMsg:
        def __init__(self, value, topic="test", partition=0, offset=0, key=None):
            self.value = json.dumps(value).encode()
            self.topic = topic
            self.partition = partition
            self.offset = offset
            self.key = key.encode() if key else None
    
    msg = FakeMsg({
        "event_type": "RiskyEvent",
        "should_fail": True,
        "data": "test"
    }, key="msg-001")
    
    result = consumer.process_with_retry(msg)
    print(f"处理结果: {'成功' if result else '已进入DLQ'}")

2.4.1 Outbox模式与CDC

#!/usr/bin/env python3
"""
【2.4.1】Outbox模式:事务性发件箱表设计与CDC(Debezium)捕获
内容:实现Outbox表、事务内写入、Debezium变更数据捕获
依赖:sqlalchemy>=2.0, psycopg2-binary, pydantic>=2.0
"""

from __future__ import annotations
import json
import uuid
from typing import Dict, Any, Optional, List
from dataclasses import dataclass, field
from datetime import datetime
from contextlib import contextmanager

from sqlalchemy import create_engine, Column, String, JSON, DateTime, Boolean, event
from sqlalchemy.orm import declarative_base, sessionmaker, Session


Base = declarative_base()


class OutboxRecord(Base):
    """
    Outbox表
    事务性存储待发布事件
    """
    __tablename__ = "outbox"
    
    id = Column(String(36), primary_key=True, default=lambda: str(uuid.uuid4()))
    aggregate_type = Column(String(100), nullable=False, index=True)  # 聚合类型
    aggregate_id = Column(String(36), nullable=False, index=True)     # 聚合ID
    event_type = Column(String(100), nullable=False)                   # 事件类型
    payload = Column(JSON, nullable=False)                             # 事件载荷
    # 注意:metadata是SQLAlchemy Declarative的保留属性名(Base.metadata),
    # 故Python属性改名为event_metadata,数据库列名仍为"metadata"
    event_metadata = Column("metadata", JSON, default=dict)            # 元数据
    created_at = Column(DateTime, default=datetime.now, index=True)
    published = Column(Boolean, default=False, index=True)             # 发布状态
    published_at = Column(DateTime, nullable=True)


@dataclass
class DomainEvent:
    """领域事件"""
    event_type: str
    aggregate_type: str
    aggregate_id: str
    payload: Dict[str, Any]
    metadata: Dict[str, Any] = field(default_factory=dict)


class OutboxPublisher:
    """
    Outbox发布器
    将领域事件写入Outbox表(与业务数据同一事务)
    """
    
    def __init__(self, session: Session):
        self._session = session
    
    def publish(self, event: DomainEvent) -> str:
        """
        发布事件到Outbox
        必须在数据库事务内调用
        """
        outbox_record = OutboxRecord(
            aggregate_type=event.aggregate_type,
            aggregate_id=event.aggregate_id,
            event_type=event.event_type,
            payload=event.payload,
            event_metadata={  # ORM属性名避开SQLAlchemy保留字metadata
                **event.metadata,
                "trace_id": str(uuid.uuid4()),
                "timestamp": datetime.now().isoformat()
            }
        )
        
        self._session.add(outbox_record)
        return outbox_record.id


class TransactionalOutboxUnitOfWork:
    """
    事务性工作单元(Outbox模式)
    确保业务数据与Outbox记录原子性写入
    """
    
    def __init__(self, db_url: str):
        self._engine = create_engine(db_url)
        Base.metadata.create_all(self._engine)
        self._session_factory = sessionmaker(bind=self._engine)
    
    @contextmanager
    def transaction(self):
        """
        事务上下文
        自动处理Outbox写入
        """
        session = self._session_factory()
        outbox_publisher = OutboxPublisher(session)
        
        try:
            yield session, outbox_publisher
            session.commit()
        except Exception:
            session.rollback()
            raise
        finally:
            session.close()


class DebeziumChangeEvent:
    """
    Debezium变更事件
    解析CDC捕获的变更数据
    """
    
    def __init__(self, debezium_payload: Dict):
        self._payload = debezium_payload
    
    @property
    def op(self) -> str:
        """操作类型 c=create, u=update, d=delete"""
        return self._payload.get("op")
    
    @property
    def before(self) -> Optional[Dict]:
        """变更前数据"""
        return self._payload.get("before")
    
    @property
    def after(self) -> Optional[Dict]:
        """变更后数据"""
        return self._payload.get("after")
    
    @property
    def source(self) -> Dict:
        """元数据源"""
        return self._payload.get("source", {})
    
    def is_outbox_event(self) -> bool:
        """判断是否为Outbox表变更"""
        table = self.source.get("table")
        return table == "outbox"
    
    def to_domain_event(self) -> Optional[DomainEvent]:
        """转换为领域事件"""
        if not self.is_outbox_event() or self.op != "c":
            return None
        
        after = self.after
        if not after or after.get("published"):
            return None
        
        return DomainEvent(
            event_type=after["event_type"],
            aggregate_type=after["aggregate_type"],
            aggregate_id=after["aggregate_id"],
            payload=after["payload"],
            metadata=after.get("metadata", {})
        )


class OutboxRelay:
    """
    Outbox中继器
    轮询Outbox表并将未发布事件发送到消息代理
    (模拟Debezium行为,实际生产使用Debezium Connector)
    """
    
    def __init__(self, db_url: str, message_publisher: Any):
        self._engine = create_engine(db_url)
        self._Session = sessionmaker(bind=self._engine)
        self._publisher = message_publisher
        self._running = False
    
    def start(self, poll_interval: int = 5) -> None:
        """启动中继循环"""
        self._running = True
        
        while self._running:
            session = self._Session()
            try:
                # 查询未发布事件
                pending = session.query(OutboxRecord).filter_by(
                    published=False
                ).order_by(OutboxRecord.created_at).limit(100).all()
                
                for record in pending:
                    # 发布到消息代理
                    success = self._publish_event(record)
                    
                    if success:
                        # 标记为已发布
                        record.published = True
                        record.published_at = datetime.now()
                        session.commit()
                        
            except Exception as e:
                print(f"中继错误: {e}")
                session.rollback()
            finally:
                session.close()
            
            import time
            time.sleep(poll_interval)
    
    def _publish_event(self, record: OutboxRecord) -> bool:
        """发布事件到消息代理"""
        event = {
            "event_type": record.event_type,
            "aggregate_type": record.aggregate_type,
            "aggregate_id": record.aggregate_id,
            "payload": record.payload,
            "metadata": record.metadata,
            "outbox_id": record.id
        }
        
        # 实际应发送到Kafka
        print(f"[Relay] 发布事件: {event['event_type']}")
        return True
    
    def stop(self) -> None:
        self._running = False


# Debezium连接器配置(JSON)
DEBEZIUM_CONNECTOR_CONFIG = {
    "name": "order-outbox-connector",
    "config": {
        "connector.class": "io.debezium.connector.postgresql.PostgresConnector",
        "database.hostname": "postgres",
        "database.port": "5432",
        "database.user": "debezium",
        "database.password": "dbz",
        "database.dbname": "order_db",
        "database.server.name": "order-db-server",
        "table.include.list": "public.outbox",
        "transforms": "outbox",
        "transforms.outbox.type": "io.debezium.transforms.outbox.EventRouter",
        "transforms.outbox.table.field.event.id": "id",
        "transforms.outbox.table.field.event.key": "aggregate_id",
        "transforms.outbox.table.field.event.type": "event_type",
        "transforms.outbox.table.field.event.payload": "payload",
        "transforms.outbox.route.by.name": "outbox.event.${r.source}",
        "poll.interval.ms": "1000"
    }
}


if __name__ == "__main__":
    # 示例:使用Outbox模式
    uow = TransactionalOutboxUnitOfWork("postgresql://user:pass@localhost/order_db")
    
    with uow.transaction() as (session, outbox):
        # 业务操作(Order为业务ORM模型,此处假设已在别处定义)
        order = Order(id="ORD-001", customer_id="CUST-001", status="CREATED")
        session.add(order)
        
        # 同时写入Outbox(同一事务)
        event_id = outbox.publish(DomainEvent(
            event_type="OrderCreated",
            aggregate_type="Order",
            aggregate_id="ORD-001",
            payload={"customer_id": "CUST-001", "total": 199.99}
        ))
        
        print(f"事件已写入Outbox: {event_id}")
    
    # 启动中继(实际应由Debezium替代)
    relay = OutboxRelay("postgresql://user:pass@localhost/order_db", None)
    # relay.start()
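上面的工作单元之所以可靠,关键在于业务行与Outbox行在同一个数据库事务里一起提交、一起回滚。下面用标准库sqlite3做一个脱离SQLAlchemy的最小演示(示意,表结构为简化版):

```python
import sqlite3, json, uuid

# 最小演示:业务表与outbox表在同一事务中写入,失败时一起回滚
conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (order_id TEXT PRIMARY KEY, status TEXT)")
conn.execute("CREATE TABLE outbox (id TEXT PRIMARY KEY, event_type TEXT, payload TEXT)")

def create_order_with_event(order_id: str, ok: bool) -> bool:
    try:
        with conn:  # with块即一个事务:正常退出commit,抛异常rollback
            conn.execute("INSERT INTO orders VALUES (?, ?)", (order_id, "CREATED"))
            conn.execute("INSERT INTO outbox VALUES (?, ?, ?)",
                         (str(uuid.uuid4()), "OrderCreated",
                          json.dumps({"order_id": order_id})))
            if not ok:
                raise RuntimeError("模拟业务失败")
        return True
    except RuntimeError:
        return False

create_order_with_event("ORD-001", ok=True)   # 两张表各写入一行
create_order_with_event("ORD-002", ok=False)  # 两张表都不会留下记录
orders = conn.execute("SELECT COUNT(*) FROM orders").fetchone()[0]
events = conn.execute("SELECT COUNT(*) FROM outbox").fetchone()[0]
print(orders, events)  # → 1 1
```

只要事件写入与业务写入共用一个事务,就不会出现"订单已建但事件丢失"或"事件已发但订单回滚"的窗口。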

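Debezium输出的变更信封结构大致如下例(手工构造的示意数据,实际字段以Debezium输出为准);上文DebeziumChangeEvent.to_domain_event的过滤条件可以浓缩成一个小函数:

```python
# 手工构造的Debezium风格变更信封(字段为示意数据)
sample = {
    "op": "c",            # c=create, u=update, d=delete
    "before": None,
    "after": {
        "event_type": "OrderCreated",
        "aggregate_type": "Order",
        "aggregate_id": "ORD-001",
        "payload": {"total": 199.99},
        "published": False,
    },
    "source": {"table": "outbox", "db": "order_db"},
}

def is_new_outbox_event(envelope: dict) -> bool:
    """与上文to_domain_event的过滤条件一致:outbox表的新建且未发布记录"""
    after = envelope.get("after") or {}
    return (
        envelope.get("source", {}).get("table") == "outbox"
        and envelope.get("op") == "c"
        and not after.get("published")
    )

print(is_new_outbox_event(sample))                 # → True
print(is_new_outbox_event({**sample, "op": "u"}))  # → False:更新事件不重复发布
```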
2.4.2 读取模型最终一致

#!/usr/bin/env python3
"""
【2.4.2】读取模型最终一致:订单查询服务物化视图异步构建
内容:实现投影处理器、物化视图构建、查询优化
依赖:sqlalchemy>=2.0, elasticsearch>=8.0, redis>=4.5
"""

from __future__ import annotations
import json
from typing import Dict, Any, List, Optional
from dataclasses import dataclass
from datetime import datetime
from abc import ABC, abstractmethod

from sqlalchemy import create_engine, Column, String, Float, Integer, DateTime, JSON, Index
from sqlalchemy.orm import declarative_base, sessionmaker
from elasticsearch import Elasticsearch


Base = declarative_base()


class OrderMaterializedView(Base):
    """
    订单物化视图
    反规范化设计,优化查询性能
    """
    __tablename__ = "mv_orders"
    
    order_id = Column(String(36), primary_key=True)
    customer_id = Column(String(36), index=True)
    customer_name = Column(String(100))  # 反规范化:冗余客户名
    total_amount = Column(Float)
    discount_amount = Column(Float, default=0)
    final_amount = Column(Float)
    status = Column(String(20), index=True)
    item_count = Column(Integer)  # 预计算
    items_snapshot = Column(JSON)  # 订单项快照
    shipping_address = Column(String(500))
    payment_method = Column(String(50))
    created_at = Column(DateTime, index=True)
    updated_at = Column(DateTime, index=True)
    last_event_id = Column(String(36))  # 用于幂等性
    
    # 复合索引
    __table_args__ = (
        Index('idx_customer_status', 'customer_id', 'status'),
        Index('idx_created_status', 'created_at', 'status'),
    )


class ProjectionHandler(ABC):
    """投影处理器接口"""
    
    @abstractmethod
    def project(self, event: Dict[str, Any]) -> None:
        """处理事件并更新视图"""
        raise NotImplementedError
    
    @abstractmethod
    def can_handle(self, event_type: str) -> bool:
        raise NotImplementedError


class OrderProjectionHandler(ProjectionHandler):
    """
    订单投影处理器
    将领域事件投影到物化视图
    """
    
    def __init__(self, db_url: str):
        self._engine = create_engine(db_url)
        Base.metadata.create_all(self._engine)
        self._Session = sessionmaker(bind=self._engine)
    
    def can_handle(self, event_type: str) -> bool:
        return event_type.startswith("Order")
    
    def project(self, event: Dict[str, Any]) -> None:
        event_type = event.get("event_type") or ""
        handler = getattr(self, f"_on_{event_type.lower()}", None)
        
        if handler:
            handler(event)
    
    def _on_ordercreated(self, event: Dict) -> None:
        """处理订单创建事件"""
        session = self._Session()
        try:
            payload = event["payload"]
            
            # 检查幂等性
            existing = session.get(OrderMaterializedView, payload["order_id"])
            if existing and existing.last_event_id == event.get("event_id"):
                return  # 已处理
            
            mv = OrderMaterializedView(
                order_id=payload["order_id"],
                customer_id=payload["customer_id"],
                customer_name=payload.get("customer_name", "Unknown"),  # 反规范化
                total_amount=payload["total"],
                discount_amount=payload.get("discount", 0),
                final_amount=payload["total"] - payload.get("discount", 0),
                status="CREATED",
                item_count=len(payload.get("items", [])),
                items_snapshot=payload.get("items", []),
                shipping_address=payload.get("shipping_address", ""),
                payment_method=payload.get("payment_method", ""),
                created_at=datetime.now(),
                updated_at=datetime.now(),
                last_event_id=event.get("event_id")
            )
            
            session.merge(mv)  # 使用merge实现UPSERT
            session.commit()
            
        finally:
            session.close()
    
    def _on_orderpaid(self, event: Dict) -> None:
        """处理订单支付事件"""
        session = self._Session()
        try:
            payload = event["payload"]
            
            mv = session.get(OrderMaterializedView, payload["order_id"])
            if mv:
                mv.status = "PAID"
                mv.updated_at = datetime.now()
                mv.last_event_id = event.get("event_id")
                session.commit()
        finally:
            session.close()


class ElasticsearchProjection:
    """
    Elasticsearch投影
    用于全文搜索和复杂查询
    """
    
    def __init__(self, es_host: str):
        self._es = Elasticsearch([es_host])
        self._index = "orders"
    
    def create_index(self) -> None:
        """创建索引(包含映射)"""
        mapping = {
            "mappings": {
                "properties": {
                    "order_id": {"type": "keyword"},
                    "customer_id": {"type": "keyword"},
                    "customer_name": {
                        "type": "text",
                        "fields": {
                            "keyword": {"type": "keyword"}
                        }
                    },
                    "status": {"type": "keyword"},
                    "total_amount": {"type": "float"},
                    "items": {
                        "type": "nested",
                        "properties": {
                            "sku": {"type": "keyword"},
                            "name": {"type": "text"},
                            "quantity": {"type": "integer"}
                        }
                    },
                    "created_at": {"type": "date"}
                }
            }
        }
        
        if not self._es.indices.exists(index=self._index):
            # elasticsearch-py 8.x使用显式关键字参数代替已废弃的body
            self._es.indices.create(index=self._index, mappings=mapping["mappings"])
    
    def project(self, event: Dict[str, Any]) -> None:
        """投影事件到ES"""
        event_type = event.get("event_type")
        
        if event_type == "OrderCreated":
            self._es.index(
                index=self._index,
                id=event["payload"]["order_id"],
                document={  # 8.x客户端参数名为document(body已废弃)
                    "order_id": event["payload"]["order_id"],
                    "customer_id": event["payload"]["customer_id"],
                    "customer_name": event["payload"].get("customer_name", ""),
                    "status": "CREATED",
                    "total_amount": event["payload"]["total"],
                    "items": event["payload"].get("items", []),
                    "created_at": datetime.now().isoformat()
                }
            )


class ReadModelQueryService:
    """
    查询服务
    提供优化的读取接口
    """
    
    def __init__(self, db_url: str, redis_client=None):
        self._engine = create_engine(db_url)
        self._Session = sessionmaker(bind=self._engine)
        self._redis = redis_client
    
    def get_order_by_id(self, order_id: str, use_cache: bool = True) -> Optional[Dict]:
        """根据ID查询订单(点查优化)"""
        cache_key = f"order:{order_id}"
        
        # 缓存检查
        if use_cache and self._redis:
            cached = self._redis.get(cache_key)
            if cached:
                return json.loads(cached)
        
        session = self._Session()
        try:
            mv = session.get(OrderMaterializedView, order_id)
            if not mv:
                return None
            
            result = {
                "order_id": mv.order_id,
                "customer_id": mv.customer_id,
                "customer_name": mv.customer_name,
                "total_amount": mv.total_amount,
                "status": mv.status,
                "item_count": mv.item_count,
                "items": mv.items_snapshot
            }
            
            # 写入缓存(5分钟)
            if use_cache and self._redis:
                self._redis.setex(cache_key, 300, json.dumps(result))
            
            return result
        finally:
            session.close()
    
    def list_customer_orders(self, customer_id: str, status: Optional[str] = None,
                            page: int = 1, page_size: int = 20) -> Dict:
        """
        查询客户订单列表
        利用复合索引(customer_id, status)优化
        """
        session = self._Session()
        try:
            query = session.query(OrderMaterializedView).filter_by(customer_id=customer_id)
            
            if status:
                query = query.filter_by(status=status)
            
            total = query.count()
            results = query.order_by(
                OrderMaterializedView.created_at.desc()
            ).offset((page-1)*page_size).limit(page_size).all()
            
            return {
                "orders": [
                    {
                        "order_id": o.order_id,
                        "status": o.status,
                        "total": o.final_amount,
                        "item_count": o.item_count,
                        "created_at": o.created_at.isoformat()
                    }
                    for o in results
                ],
                "total": total,
                "page": page,
                "page_size": page_size
            }
        finally:
            session.close()


if __name__ == "__main__":
    import redis
    
    # 初始化
    handler = OrderProjectionHandler("postgresql://localhost/read_db")
    query_service = ReadModelQueryService("postgresql://localhost/read_db", redis.Redis())
    
    # 模拟处理事件
    event = {
        "event_type": "OrderCreated",
        "event_id": "evt-001",
        "payload": {
            "order_id": "ORD-123",
            "customer_id": "CUST-001",
            "customer_name": "张三",
            "total": 199.99,
            "items": [{"sku": "SKU-001", "name": "商品A", "qty": 2}]
        }
    }
    
    handler.project(event)
    print("事件已投影到物化视图")
    
    # 查询
    order = query_service.get_order_by_id("ORD-123", use_cache=False)
    print(f"查询结果: {order}")

2.4.3 分布式缓存同步

#!/usr/bin/env python3
"""
【2.4.3】分布式缓存同步:库存扣减缓存失效广播
内容:实现缓存失效策略、本地缓存一致性、广播同步
依赖:redis>=4.5, cachetools>=5.3
"""

from __future__ import annotations
import json
import threading
import time
from typing import Dict, Any, Optional, Callable, Set
from dataclasses import dataclass
from abc import ABC, abstractmethod

import redis
from cachetools import TTLCache


class CacheInvalidator(ABC):
    """缓存失效器接口"""
    
    @abstractmethod
    def invalidate(self, key: str) -> None:
        raise NotImplementedError
    
    @abstractmethod
    def invalidate_pattern(self, pattern: str) -> None:
        raise NotImplementedError


class RedisCacheInvalidator(CacheInvalidator):
    """
    Redis缓存失效广播
    使用Pub/Sub通知所有节点失效缓存
    """
    
    INVALIDATION_CHANNEL = "cache:invalidation"
    
    def __init__(self, redis_client: redis.Redis):
        self._redis = redis_client
        self._local_handlers: Set[Callable[[str], None]] = set()
        self._listener_thread = None
        self._running = False
    
    def start_listening(self) -> None:
        """启动失效消息监听"""
        self._running = True
        self._listener_thread = threading.Thread(target=self._listen, daemon=True)
        self._listener_thread.start()
    
    def _listen(self) -> None:
        """监听失效消息"""
        pubsub = self._redis.pubsub()
        pubsub.subscribe(self.INVALIDATION_CHANNEL)
        
        for message in pubsub.listen():
            if not self._running:
                break
            
            if message["type"] == "message":
                data = json.loads(message["data"])
                key = data.get("key")
                pattern = data.get("pattern")
                
                # 触发本地缓存失效
                for handler in self._local_handlers:
                    if pattern:
                        handler(pattern)  # 模式失效
                    else:
                        handler(key)
    
    def invalidate(self, key: str) -> None:
        """广播失效消息"""
        message = json.dumps({"key": key, "timestamp": time.time()})
        self._redis.publish(self.INVALIDATION_CHANNEL, message)
    
    def invalidate_pattern(self, pattern: str) -> None:
        """广播模式失效消息"""
        message = json.dumps({"pattern": pattern, "timestamp": time.time()})
        self._redis.publish(self.INVALIDATION_CHANNEL, message)
    
    def register_local_handler(self, handler: Callable[[str], None]) -> None:
        """注册本地缓存失效处理器"""
        self._local_handlers.add(handler)
    
    def stop(self) -> None:
        self._running = False


class LocalCache:
    """
    本地缓存(带分布式同步)
    两级缓存:本地进程内TTLCache(类似Caffeine)+ Redis分布式缓存
    """
    
    def __init__(self, name: str, ttl: int = 300, maxsize: int = 10000,
                 redis_client: Optional[redis.Redis] = None,
                 invalidator: Optional[RedisCacheInvalidator] = None):
        self._name = name
        self._local = TTLCache(maxsize=maxsize, ttl=ttl)
        self._redis = redis_client
        self._invalidator = invalidator
        
        if invalidator:
            invalidator.register_local_handler(self._handle_remote_invalidation)
    
    def get(self, key: str) -> Optional[Any]:
        """获取缓存"""
        # 先查本地
        if key in self._local:
            return self._local[key]
        
        # 再查Redis
        if self._redis:
            value = self._redis.get(f"{self._name}:{key}")
            if value:
                data = json.loads(value)
                self._local[key] = data  # 回填本地
                return data
        
        return None
    
    def set(self, key: str, value: Any, ttl: Optional[int] = None) -> None:
        """设置缓存"""
        self._local[key] = value
        
        if self._redis:
            serialized = json.dumps(value)
            if ttl:
                self._redis.setex(f"{self._name}:{key}", ttl, serialized)
            else:
                self._redis.set(f"{self._name}:{key}", serialized)
    
    def delete(self, key: str) -> None:
        """删除缓存并广播"""
        self._local.pop(key, None)
        
        if self._redis:
            self._redis.delete(f"{self._name}:{key}")
        
        if self._invalidator:
            self._invalidator.invalidate(f"{self._name}:{key}")
    
    def _handle_remote_invalidation(self, key_or_pattern: str) -> None:
        """处理远程失效通知"""
        if "*" in key_or_pattern:
            # 模式匹配失效
            keys_to_remove = [k for k in self._local.keys() if key_or_pattern.replace("*", "") in k]
            for k in keys_to_remove:
                del self._local[k]
        else:
            self._local.pop(key_or_pattern, None)


class InventoryCacheService:
    """
    库存缓存服务
    处理库存扣减的缓存一致性
    """
    
    def __init__(self, redis_client: redis.Redis, local_cache: LocalCache):
        self._redis = redis_client
        self._local = local_cache
    
    def get_inventory(self, sku: str) -> Dict[str, int]:
        """
        获取库存
        使用本地缓存加速读取
        """
        cached = self._local.get(f"inv:{sku}")
        if cached:
            return cached
        
        # 从Redis获取
        data = self._redis.hgetall(f"inventory:{sku}")
        if data:
            result = {
                "total": int(data.get(b"total", 0)),
                "reserved": int(data.get(b"reserved", 0)),
                "available": int(data.get(b"total", 0)) - int(data.get(b"reserved", 0))
            }
            self._local.set(f"inv:{sku}", result, ttl=60)
            return result
        
        return {"total": 0, "reserved": 0, "available": 0}
    
    def reserve_inventory(self, sku: str, quantity: int) -> bool:
        """
        预留库存
        使用Redis Lua脚本保证"检查+预留"原子执行
        """
        # 注意:哈希中只维护total与reserved两个字段,
        # 可用量 = total - reserved,与get_inventory的计算口径保持一致
        script = """
        local total = tonumber(redis.call('hget', KEYS[1], 'total') or 0)
        local reserved = tonumber(redis.call('hget', KEYS[1], 'reserved') or 0)
        local qty = tonumber(ARGV[1])

        if total - reserved >= qty then
            redis.call('hincrby', KEYS[1], 'reserved', qty)
            return 1
        else
            return 0
        end
        """
        
        result = self._redis.eval(script, 1, f"inventory:{sku}", quantity)
        
        if result == 1:
            # 使缓存失效(广播通知其他节点)
            self._local.delete(f"inv:{sku}")
            return True
        return False
    
    def release_reservation(self, sku: str, quantity: int) -> None:
        """释放预留(补偿操作):只需回减reserved,可用量随之恢复"""
        self._redis.hincrby(f"inventory:{sku}", "reserved", -quantity)
        
        # 广播失效
        self._local.delete(f"inv:{sku}")


if __name__ == "__main__":
    redis_client = redis.Redis()
    invalidator = RedisCacheInvalidator(redis_client)
    invalidator.start_listening()
    
    cache = LocalCache("inventory", redis_client=redis_client, invalidator=invalidator)
    inventory_service = InventoryCacheService(redis_client, cache)
    
    # 模拟库存操作
    inventory_service.reserve_inventory("SKU-001", 10)
    print("库存预留成功,缓存已失效广播")
    
    time.sleep(1)
    invalidator.stop()
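失效广播的链路(删除缓存 → 发布消息 → 各节点本地缓存失效)可以用一个进程内的迷你Pub/Sub替身演示(示意,真实部署中该总线由Redis Pub/Sub承担):

```python
# 进程内Pub/Sub替身:演示"删除缓存 -> 广播 -> 各节点本地缓存失效"
class InMemoryBus:
    def __init__(self):
        self._subscribers = []
    def subscribe(self, handler):
        self._subscribers.append(handler)
    def publish(self, key: str):
        for handler in self._subscribers:
            handler(key)

class NodeCache:
    """模拟一个服务节点的本地缓存"""
    def __init__(self, bus: InMemoryBus):
        self.data = {}
        bus.subscribe(self._on_invalidation)
    def _on_invalidation(self, key: str):
        self.data.pop(key, None)

bus = InMemoryBus()
node_a, node_b = NodeCache(bus), NodeCache(bus)
node_a.data["inv:SKU-001"] = {"available": 100}
node_b.data["inv:SKU-001"] = {"available": 100}

bus.publish("inv:SKU-001")  # 节点A扣减库存后广播失效
print("inv:SKU-001" in node_a.data, "inv:SKU-001" in node_b.data)  # → False False
```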

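Redis以单线程执行Lua脚本,"检查可用量+预留"不会与其他客户端交错。其原子性语义等价于下面这段加锁的纯Python逻辑(仅作理解用的示意,不是生产实现):

```python
import threading

class InMemoryInventory:
    """用进程内锁模拟Lua脚本的原子"检查+预留"语义(示意)"""
    def __init__(self, total: int):
        self._lock = threading.Lock()
        self.total = total
        self.reserved = 0
    def reserve(self, qty: int) -> bool:
        with self._lock:               # 对应Redis单线程执行Lua脚本
            if self.total - self.reserved >= qty:
                self.reserved += qty
                return True
            return False
    def release(self, qty: int) -> None:
        with self._lock:
            self.reserved -= qty

inv = InMemoryInventory(total=10)
print(inv.reserve(8))   # → True
print(inv.reserve(5))   # → False:可用量不足
inv.release(8)
print(inv.reserve(5))   # → True
```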
2.4.4 一致性校验

#!/usr/bin/env python3
"""
【2.4.4】一致性校验:对账服务定时扫描状态不一致订单
内容:实现对账算法、差异检测、自动修复、告警通知
依赖:sqlalchemy>=2.0, pandas>=2.0, apscheduler>=3.10
"""

from __future__ import annotations
import hashlib
from typing import Dict, List, Optional, Tuple, Set
from dataclasses import dataclass
from datetime import datetime, timedelta
from enum import Enum
from abc import ABC, abstractmethod

import pandas as pd
from sqlalchemy import create_engine, text
from apscheduler.schedulers.background import BackgroundScheduler


class ReconciliationStatus(Enum):
    """对账状态"""
    MATCHED = "MATCHED"           # 一致
    MISMATCH = "MISMATCH"         # 不一致
    MISSING_LEFT = "MISSING_LEFT"  # 左侧缺失
    MISSING_RIGHT = "MISSING_RIGHT"  # 右侧缺失
    PENDING_REPAIR = "PENDING_REPAIR"  # 待修复


@dataclass
class ReconciliationDifference:
    """对账差异记录"""
    id: str
    order_id: str
    left_status: Optional[str]
    right_status: Optional[str]
    left_amount: Optional[float]
    right_amount: Optional[float]
    difference_type: ReconciliationStatus
    detected_at: datetime
    auto_repaired: bool = False
    repair_action: Optional[str] = None


class DataSource(ABC):
    """数据源抽象"""
    
    @abstractmethod
    def fetch_records(self, start_time: datetime, end_time: datetime) -> pd.DataFrame:
        """获取记录"""
        raise NotImplementedError
    
    @abstractmethod
    def get_record_by_id(self, order_id: str) -> Optional[Dict]:
        raise NotImplementedError


class OrderServiceDataSource(DataSource):
    """订单服务数据源"""
    
    def __init__(self, db_url: str):
        self._engine = create_engine(db_url)
    
    def fetch_records(self, start_time: datetime, end_time: datetime) -> pd.DataFrame:
        query = """
        SELECT order_id, status, total_amount, created_at, updated_at
        FROM orders
        WHERE created_at BETWEEN :start AND :end
        """
        return pd.read_sql(text(query), self._engine, 
                          params={"start": start_time, "end": end_time})
    
    def get_record_by_id(self, order_id: str) -> Optional[Dict]:
        with self._engine.connect() as conn:
            result = conn.execute(
                text("SELECT * FROM orders WHERE order_id = :id"),
                {"id": order_id}
            ).fetchone()
            return dict(result._mapping) if result else None


class PaymentServiceDataSource(DataSource):
    """支付服务数据源"""
    
    def __init__(self, db_url: str):
        self._engine = create_engine(db_url)
    
    def fetch_records(self, start_time: datetime, end_time: datetime) -> pd.DataFrame:
        query = """
        SELECT order_id, payment_status, paid_amount, transaction_id, paid_at
        FROM payments
        WHERE created_at BETWEEN :start AND :end
        """
        return pd.read_sql(text(query), self._engine,
                          params={"start": start_time, "end": end_time})
    
    def get_record_by_id(self, order_id: str) -> Optional[Dict]:
        with self._engine.connect() as conn:
            result = conn.execute(
                text("SELECT * FROM payments WHERE order_id = :id"),
                {"id": order_id}
            ).fetchone()
            return dict(result._mapping) if result else None


class ReconciliationEngine:
    """
    对账引擎
    实现双边对账算法
    """
    
    def __init__(self):
        self._differences: List[ReconciliationDifference] = []
    
    def reconcile(self, left_df: pd.DataFrame, right_df: pd.DataFrame,
                  left_key: str = "order_id", right_key: str = "order_id") -> pd.DataFrame:
        """
        执行对账
        基于订单ID外连接,对比关键字段
        """
        # 合并数据集
        merged = pd.merge(
            left_df, right_df,
            left_on=left_key,
            right_on=right_key,
            how="outer",
            indicator=True,
            suffixes=("_left", "_right")
        )
        
        results = []
        
        for _, row in merged.iterrows():
            order_id = row[left_key] if pd.notna(row[left_key]) else row[right_key]
            
            if row["_merge"] == "left_only":
                # 仅存在于左侧
                diff = ReconciliationDifference(
                    id=self._generate_id(order_id),
                    order_id=order_id,
                    # 两侧没有同名的非键列,merge不会添加后缀,直接用原列名
                    left_status=row.get("status"),
                    right_status=None,
                    left_amount=row.get("total_amount"),
                    right_amount=None,
                    difference_type=ReconciliationStatus.MISSING_RIGHT,
                    detected_at=datetime.now()
                )
                results.append(diff)
                
            elif row["_merge"] == "right_only":
                # 仅存在于右侧
                diff = ReconciliationDifference(
                    id=self._generate_id(order_id),
                    order_id=order_id,
                    left_status=None,
                    right_status=row.get("payment_status"),
                    left_amount=None,
                    right_amount=row.get("paid_amount"),
                    difference_type=ReconciliationStatus.MISSING_LEFT,
                    detected_at=datetime.now()
                )
                results.append(diff)
                
            else:
                # 双边存在,检查字段一致性
                amount_match = abs(row.get("total_amount", 0) - row.get("paid_amount", 0)) < 0.01
                status_consistent = self._check_status_consistency(
                    row.get("status"), row.get("payment_status")
                )
                
                if not amount_match or not status_consistent:
                    diff = ReconciliationDifference(
                        id=self._generate_id(order_id),
                        order_id=order_id,
                        left_status=row.get("status_left"),
                        right_status=row.get("payment_status"),
                        left_amount=row.get("total_amount"),
                        right_amount=row.get("paid_amount"),
                        difference_type=ReconciliationStatus.MISMATCH,
                        detected_at=datetime.now()
                    )
                    results.append(diff)
        
        self._differences.extend(results)
        return pd.DataFrame([vars(d) for d in results])
    
    def _generate_id(self, order_id: str) -> str:
        """生成差异记录ID"""
        return hashlib.md5(f"{order_id}:{datetime.now().isoformat()}".encode()).hexdigest()[:16]
    
    def _check_status_consistency(self, order_status: str, payment_status: str) -> bool:
        """检查状态一致性"""
        # 状态映射表
        status_map = {
            "PAID": "SUCCESS",
            "PENDING": "PENDING",
            "CANCELLED": "REFUNDED"
        }
        expected_payment = status_map.get(order_status)
        return expected_payment == payment_status if expected_payment else True


class AutoRepairStrategy(ABC):
    """自动修复策略"""
    
    @abstractmethod
    def can_repair(self, difference: ReconciliationDifference) -> bool:
        raise NotImplementedError
    
    @abstractmethod
    def repair(self, difference: ReconciliationDifference) -> bool:
        raise NotImplementedError


class PaymentStatusSyncStrategy(AutoRepairStrategy):
    """
    支付状态同步策略
    当订单为PAID但支付记录缺失时,发起查询
    """
    
    def can_repair(self, difference: ReconciliationDifference) -> bool:
        return (
            difference.difference_type == ReconciliationStatus.MISSING_RIGHT and
            difference.left_status == "PAID"
        )
    
    def repair(self, difference: ReconciliationDifference) -> bool:
        # 调用支付网关查询实际状态
        print(f"查询支付网关状态: {difference.order_id}")
        # 如果实际已支付,修复支付记录
        return True


class ReconciliationService:
    """
    对账服务
    定时执行对账与修复
    """
    
    def __init__(self, order_source: DataSource, payment_source: DataSource,
                 engine: ReconciliationEngine):
        self._order_source = order_source
        self._payment_source = payment_source
        self._engine = engine
        self._repair_strategies: List[AutoRepairStrategy] = []
        self._scheduler = BackgroundScheduler()
    
    def add_repair_strategy(self, strategy: AutoRepairStrategy) -> None:
        """添加修复策略"""
        self._repair_strategies.append(strategy)
    
    def run_reconciliation(self, check_window_hours: int = 24) -> pd.DataFrame:
        """
        执行对账
        检查最近N小时的订单
        """
        end_time = datetime.now()
        start_time = end_time - timedelta(hours=check_window_hours)
        
        # 获取双方数据
        order_data = self._order_source.fetch_records(start_time, end_time)
        payment_data = self._payment_source.fetch_records(start_time, end_time)
        
        print(f"获取订单记录: {len(order_data)} 条")
        print(f"获取支付记录: {len(payment_data)} 条")
        
        # 执行对账
        differences = self._engine.reconcile(order_data, payment_data)
        
        print(f"发现差异: {len(differences)} 条")
        
        # 尝试自动修复
        for _, diff_row in differences.iterrows():
            diff = ReconciliationDifference(**diff_row.to_dict())
            self._attempt_repair(diff)
        
        return differences
    
    def _attempt_repair(self, difference: ReconciliationDifference) -> None:
        """尝试自动修复"""
        for strategy in self._repair_strategies:
            if strategy.can_repair(difference):
                success = strategy.repair(difference)
                if success:
                    difference.auto_repaired = True
                    difference.repair_action = type(strategy).__name__
                    print(f"自动修复成功: {difference.order_id}")
                break
    
    def start_scheduler(self, interval_minutes: int = 30) -> None:
        """启动定时对账"""
        self._scheduler.add_job(
            self.run_reconciliation,
            'interval',
            minutes=interval_minutes,
            id='reconciliation_job'
        )
        self._scheduler.start()
    
    def generate_report(self, output_path: str) -> None:
        """生成对账报告"""
        df = pd.DataFrame([vars(d) for d in self._engine._differences])
        df.to_csv(output_path, index=False)
        print(f"报告已生成: {output_path}")


if __name__ == "__main__":
    # 初始化
    order_db = OrderServiceDataSource("postgresql://localhost/order_db")
    payment_db = PaymentServiceDataSource("postgresql://localhost/payment_db")
    engine = ReconciliationEngine()
    
    service = ReconciliationService(order_db, payment_db, engine)
    service.add_repair_strategy(PaymentStatusSyncStrategy())
    
    # 执行对账
    diffs = service.run_reconciliation(check_window_hours=24)
    print(f"对账完成,发现 {len(diffs)} 条差异")
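上述对账引擎的核心是 pandas 的 outer join 加 `indicator` 列。下面给出一个可独立运行的最小示例(数据与字段名为演示用假设),演示如何用 `_merge` 列定位"单边缺失"与"金额不一致"三类差异:

```python
import pandas as pd

# 最小对账示例:outer join + indicator 定位差异(演示性质)
def find_differences(orders: pd.DataFrame, payments: pd.DataFrame) -> dict:
    merged = pd.merge(orders, payments, on="order_id", how="outer", indicator=True)
    # left_only:订单存在但支付记录缺失;right_only:支付存在但订单缺失
    missing_payment = merged[merged["_merge"] == "left_only"]["order_id"].tolist()
    missing_order = merged[merged["_merge"] == "right_only"]["order_id"].tolist()
    # 双边都有的记录,比较金额(容差0.01,与上文引擎一致)
    both = merged[merged["_merge"] == "both"]
    mismatched = both[
        (both["total_amount"] - both["paid_amount"]).abs() >= 0.01
    ]["order_id"].tolist()
    return {
        "missing_payment": missing_payment,
        "missing_order": missing_order,
        "amount_mismatch": mismatched,
    }

orders = pd.DataFrame({
    "order_id": ["O1", "O2", "O3"],
    "total_amount": [100.0, 50.0, 80.0],
})
payments = pd.DataFrame({
    "order_id": ["O1", "O3", "O4"],
    "paid_amount": [100.0, 79.0, 30.0],
})
diffs = find_differences(orders, payments)
print(diffs)  # O2缺支付、O4缺订单、O3金额不一致
```

`indicator=True` 生成的 `_merge` 列取值为 `left_only`、`right_only`、`both`,正是三类差异的天然分类依据。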

2.5.1 分布式追踪

#!/usr/bin/env python3
"""
【2.5.1】分布式追踪:OpenTelemetry跨服务Trace上下文传递(Jaeger集成)
内容:实现Trace传播、Span创建、Baggage传递、Jaeger导出
依赖:opentelemetry-api>=1.20, opentelemetry-sdk>=1.20, opentelemetry-exporter-jaeger>=1.20, opentelemetry-propagator-b3>=1.20, opentelemetry-instrumentation-fastapi>=0.41
"""

from __future__ import annotations
import time
from typing import Dict, Optional, Callable
from contextlib import contextmanager
from functools import wraps

from opentelemetry import trace
from opentelemetry.exporter.jaeger.thrift import JaegerExporter
from opentelemetry.sdk.trace import TracerProvider
from opentelemetry.sdk.trace.export import BatchSpanProcessor
from opentelemetry.sdk.resources import Resource, SERVICE_NAME
from opentelemetry.propagate import extract, inject, set_global_textmap
from opentelemetry.propagators.b3 import B3Format
from opentelemetry.trace import Status, StatusCode, SpanKind

from fastapi import FastAPI, Request
from fastapi.responses import JSONResponse


# 配置Tracer
resource = Resource(attributes={SERVICE_NAME: "order-service"})
provider = TracerProvider(resource=resource)

jaeger_exporter = JaegerExporter(
    agent_host_name="localhost",
    agent_port=6831,
)

provider.add_span_processor(BatchSpanProcessor(jaeger_exporter))
trace.set_tracer_provider(provider)
tracer = trace.get_tracer("order-service")

# 设置B3传播(支持Header传播)
set_global_textmap(B3Format())


class TracingMiddleware:
    """
    FastAPI追踪中间件
    自动创建Span并传播上下文
    """
    
    def __init__(self, app: FastAPI):
        self.app = app
    
    async def __call__(self, scope, receive, send):
        if scope["type"] != "http":
            await self.app(scope, receive, send)
            return
        
        request = Request(scope, receive)
        
        # 从请求中提取父上下文
        context = extract(request.headers)
        
        # 创建Span
        with tracer.start_as_current_span(
            name=f"{request.method} {request.url.path}",
            context=context,
            kind=SpanKind.SERVER
        ) as span:
            # 添加属性
            span.set_attribute("http.method", request.method)
            span.set_attribute("http.url", str(request.url))
            span.set_attribute("http.target", request.url.path)
            span.set_attribute("http.host", request.headers.get("host", ""))
            span.set_attribute("http.user_agent", request.headers.get("user-agent", ""))
            
            # 演示将当前Trace上下文注入Header:下游HTTP调用时
            # 携带这些Header即可延续同一条Trace
            downstream_headers: Dict[str, str] = {}
            inject(downstream_headers)
            
            try:
                await self.app(scope, receive, send)
                span.set_status(Status(StatusCode.OK))
            except Exception as e:
                span.set_status(Status(StatusCode.ERROR, str(e)))
                span.record_exception(e)
                raise


class TracedClient:
    """
    带追踪的HTTP客户端
    自动创建CLIENT Span并传播上下文
    """
    
    def __init__(self, service_name: str):
        self.service_name = service_name
        self.tracer = trace.get_tracer(service_name)
    
    @contextmanager
    def trace_call(self, method: str, url: str, headers: Dict):
        """追踪HTTP调用"""
        with self.tracer.start_as_current_span(
            name=f"HTTP {method}",
            kind=SpanKind.CLIENT
        ) as span:
            span.set_attribute("http.method", method)
            span.set_attribute("http.url", url)
            span.set_attribute("peer.service", self.service_name)
            
            # 注入追踪上下文到Header
            inject(headers)
            
            try:
                yield span
                span.set_status(Status(StatusCode.OK))
            except Exception as e:
                span.set_status(Status(StatusCode.ERROR, str(e)))
                span.record_exception(e)
                raise
    
    def get(self, url: str, headers: Optional[Dict] = None) -> Dict:
        """GET请求(带追踪)"""
        headers = headers or {}
        with self.trace_call("GET", url, headers):
            # 实际HTTP调用
            import requests
            response = requests.get(url, headers=headers)
            return response.json()


def traced_function(span_name: Optional[str] = None):
    """
    函数追踪装饰器
    自动创建Span包装函数调用
    """
    def decorator(func: Callable):
        @wraps(func)
        def wrapper(*args, **kwargs):
            name = span_name or func.__name__
            
            with tracer.start_as_current_span(name) as span:
                span.set_attribute("function.name", func.__name__)
                span.set_attribute("function.args_count", len(args))
                
                # 记录关键参数(脱敏)
                safe_kwargs = {k: v for k, v in kwargs.items() if k not in ["password", "token"]}
                span.set_attribute("function.kwargs", str(safe_kwargs))
                
                try:
                    result = func(*args, **kwargs)
                    span.set_attribute("function.result_type", type(result).__name__)
                    return result
                except Exception as e:
                    span.record_exception(e)
                    raise
        return wrapper
    return decorator


class SagaTracing:
    """
    Saga分布式追踪
    追踪跨服务的Saga流程
    """
    
    def __init__(self):
        self.tracer = trace.get_tracer("saga-orchestrator")
    
    def trace_saga_step(self, saga_id: str, step_name: str, service: str):
        """追踪Saga步骤"""
        def decorator(func: Callable):
            @wraps(func)
            def wrapper(*args, **kwargs):
                with self.tracer.start_as_current_span(
                    name=f"SagaStep: {step_name}",
                    kind=SpanKind.PRODUCER
                ) as span:
                    span.set_attribute("saga.id", saga_id)
                    span.set_attribute("saga.step", step_name)
                    span.set_attribute("saga.target_service", service)
                    
                    # 创建子Span表示远程调用
                    with self.tracer.start_as_current_span(
                        name=f"Call {service}",
                        kind=SpanKind.CLIENT
                    ) as child_span:
                        child_span.set_attribute("rpc.service", service)
                        child_span.set_attribute("rpc.method", step_name)
                        
                        return func(*args, **kwargs)
            return wrapper
        return decorator


# FastAPI集成示例
app = FastAPI()

@app.middleware("http")
async def tracing_middleware(request: Request, call_next):
    """追踪中间件"""
    context = extract(request.headers)
    
    with tracer.start_as_current_span(
        f"{request.method} {request.url.path}",
        context=context,
        kind=SpanKind.SERVER
    ) as span:
        response = await call_next(request)
        span.set_attribute("http.status_code", response.status_code)
        return response


@app.post("/orders")
@traced_function("create_order")
def create_order(order_data: Dict):
    """创建订单(自动追踪)"""
    # 模拟数据库操作
    with tracer.start_as_current_span("db.insert") as span:
        span.set_attribute("db.system", "postgresql")
        span.set_attribute("db.operation", "INSERT")
        time.sleep(0.1)
    
    # 调用库存服务
    client = TracedClient("inventory-service")
    client.get("http://inventory-service:8000/inventory/SKU-001")
    
    return {"order_id": "ORD-123"}


if __name__ == "__main__":
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
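`extract`/`inject` 在服务间传递的本质是几个约定格式的HTTP Header。下面用纯标准库写一个最小示例,演示W3C traceparent Header(`version-trace_id-span_id-flags`)的构造与解析,帮助理解传播器注入/提取的内容;函数名与示例ID均为演示用假设,生产中应直接使用OpenTelemetry的propagator:

```python
import re

# W3C traceparent:2位版本-32位trace_id-16位span_id-2位flags(均为小写十六进制)
TRACEPARENT_RE = re.compile(
    r"^(?P<version>[0-9a-f]{2})-(?P<trace_id>[0-9a-f]{32})"
    r"-(?P<span_id>[0-9a-f]{16})-(?P<flags>[0-9a-f]{2})$"
)

def parse_traceparent(header: str) -> dict:
    """模拟extract():解析traceparent Header,非法格式返回空dict"""
    m = TRACEPARENT_RE.match(header.strip())
    return m.groupdict() if m else {}

def build_traceparent(trace_id: str, span_id: str, sampled: bool = True) -> dict:
    """模拟inject():构造携带到下游请求的Header"""
    flags = "01" if sampled else "00"  # 01表示采样
    return {"traceparent": f"00-{trace_id}-{span_id}-{flags}"}

headers = build_traceparent("4bf92f3577b34da6a3ce929d0e0e4736", "00f067aa0ba902b7")
ctx = parse_traceparent(headers["traceparent"])
print(ctx["trace_id"], ctx["flags"])
```

下游服务解析出相同的 `trace_id` 后,以 `span_id` 为父Span创建新Span,整条调用链即可在Jaeger中串联展示。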

2.5.2 结构化日志

#!/usr/bin/env python3
"""
【2.5.2】结构化日志:JSON格式统一与ELK Stack聚合分析
内容:实现JSON日志格式化、字段标准化、上下文传递、ELK集成
依赖:structlog>=23.0, python-json-logger>=2.0
"""

from __future__ import annotations
import json
import logging
import sys
import uuid
from typing import Dict, Any, Optional
from contextvars import ContextVar
from datetime import datetime
from functools import wraps

import structlog
from pythonjsonlogger import jsonlogger


# 上下文变量(存储Trace ID等)
context_data: ContextVar[Dict[str, Any]] = ContextVar("log_context", default={})


class ELKJsonFormatter(jsonlogger.JsonFormatter):
    """
    ELK兼容的JSON格式化器
    符合Elastic Common Schema (ECS)
    """
    
    def add_fields(self, log_record: Dict[str, Any], record: logging.LogRecord, 
                   message_dict: Dict[str, Any]) -> None:
        super(ELKJsonFormatter, self).add_fields(log_record, record, message_dict)
        
        # ECS字段映射
        log_record["@timestamp"] = datetime.utcnow().isoformat()
        log_record["log.level"] = record.levelname
        log_record["log.logger"] = record.name
        log_record["message"] = record.getMessage()
        
        # 服务信息
        log_record["service.name"] = "order-service"
        log_record["service.version"] = "1.0.0"
        log_record["service.environment"] = "production"
        
        # 添加上下文
        ctx = context_data.get()
        log_record.update(ctx)
        
        # 错误信息
        if record.exc_info:
            log_record["error.type"] = record.exc_info[0].__name__
            log_record["error.message"] = str(record.exc_info[1])
            log_record["error.stack_trace"] = self.formatException(record.exc_info)


def setup_logging(log_level: str = "INFO"):
    """配置结构化日志"""
    logHandler = logging.StreamHandler(sys.stdout)
    formatter = ELKJsonFormatter(
        "%(timestamp)s %(level)s %(name)s %(message)s"
    )
    logHandler.setFormatter(formatter)
    
    root_logger = logging.getLogger()
    root_logger.addHandler(logHandler)
    root_logger.setLevel(log_level)
    
    # 配置structlog
    structlog.configure(
        processors=[
            structlog.stdlib.filter_by_level,
            structlog.stdlib.add_logger_name,
            structlog.stdlib.add_log_level,
            structlog.stdlib.PositionalArgumentsFormatter(),
            structlog.processors.TimeStamper(fmt="iso"),
            structlog.processors.StackInfoRenderer(),
            structlog.processors.format_exc_info,
            structlog.processors.UnicodeDecoder(),
            structlog.processors.JSONRenderer()
        ],
        context_class=dict,
        logger_factory=structlog.stdlib.LoggerFactory(),
        wrapper_class=structlog.stdlib.BoundLogger,
        cache_logger_on_first_use=True,
    )


class LoggerMixin:
    """日志混入类"""
    
    def __init__(self):
        self.logger = structlog.get_logger(self.__class__.__name__)


class ContextualLogger:
    """
    上下文日志器
    自动注入Trace ID、Span ID等上下文
    """
    
    def __init__(self, logger: structlog.stdlib.BoundLogger):
        self._logger = logger
    
    def bind(self, **kwargs) -> "ContextualLogger":
        """绑定上下文"""
        return ContextualLogger(self._logger.bind(**kwargs))
    
    def info(self, event: str, **kwargs):
        self._logger.info(event, **kwargs)
    
    def error(self, event: str, **kwargs):
        self._logger.error(event, **kwargs)
    
    def debug(self, event: str, **kwargs):
        self._logger.debug(event, **kwargs)
    
    def warning(self, event: str, **kwargs):
        self._logger.warning(event, **kwargs)


def with_log_context(**context_vars):
    """
    上下文装饰器
    为函数调用添加日志上下文
    """
    def decorator(func):
        @wraps(func)
        def wrapper(*args, **kwargs):
            token = context_data.set({
                **context_data.get(),
                **context_vars,
                "function": func.__name__
            })
            try:
                result = func(*args, **kwargs)
                return result
            finally:
                context_data.reset(token)
        return wrapper
    return decorator


class RequestIdFilter(logging.Filter):
    """请求ID过滤器"""
    
    def filter(self, record: logging.LogRecord) -> bool:
        ctx = context_data.get()
        record.request_id = ctx.get("request_id", "unknown")
        record.trace_id = ctx.get("trace_id", "unknown")
        return True


# 业务日志示例
class OrderServiceLogger(LoggerMixin):
    """订单服务专用日志器"""
    
    def log_order_created(self, order_id: str, customer_id: str, amount: float):
        self.logger.info(
            "order_created",
            order_id=order_id,
            customer_id=customer_id,
            amount=amount,
            event_type="business"
        )
    
    def log_payment_failed(self, order_id: str, reason: str, error_code: str):
        self.logger.error(
            "payment_failed",
            order_id=order_id,
            failure_reason=reason,
            error_code=error_code,
            event_type="error"
        )
    
    def log_saga_compensation(self, saga_id: str, step: str, reason: str):
        self.logger.warning(
            "saga_compensation_triggered",
            saga_id=saga_id,
            failed_step=step,
            compensation_reason=reason,
            event_type="saga"
        )


# FastAPI中间件集成
from fastapi import Request
from starlette.middleware.base import BaseHTTPMiddleware

class LoggingMiddleware(BaseHTTPMiddleware):
    """日志中间件"""
    
    async def dispatch(self, request: Request, call_next):
        request_id = str(uuid.uuid4())
        start_time = datetime.now()
        
        # 设置上下文
        token = context_data.set({
            "request_id": request_id,
            "trace_id": request.headers.get("x-trace-id", request_id),
            "client_ip": request.client.host if request.client else None,
            "user_agent": request.headers.get("user-agent")
        })
        
        logger = structlog.get_logger()
        logger.info("request_started", path=request.url.path, method=request.method)
        
        try:
            response = await call_next(request)
            logger.info(
                "request_completed",
                status_code=response.status_code,
                duration_ms=round((datetime.now() - start_time).total_seconds() * 1000, 2)
            )
            return response
        except Exception as e:
            logger.error("request_failed", error=str(e), error_type=type(e).__name__)
            raise
        finally:
            context_data.reset(token)


if __name__ == "__main__":
    setup_logging("INFO")
    
    logger = OrderServiceLogger()
    logger.log_order_created("ORD-001", "CUST-001", 199.99)
    logger.log_payment_failed("ORD-001", "Insufficient funds", "PAY-001")
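"每条日志一行JSON"是ELK按字段检索的前提。下面是一个不依赖第三方库的最小示例(演示性质,字段为假设的简化集合),展示Formatter如何把LogRecord连同`extra`携带的业务字段序列化为单行JSON:

```python
import io
import json
import logging

class MiniJsonFormatter(logging.Formatter):
    """把LogRecord渲染为单行JSON(简化版,仅演示字段展开思路)"""
    def format(self, record: logging.LogRecord) -> str:
        return json.dumps({
            "level": record.levelname,
            "logger": record.name,
            "message": record.getMessage(),
            # logging的extra参数会把键值直接挂到record属性上
            **getattr(record, "extra_fields", {}),
        }, ensure_ascii=False)

buf = io.StringIO()
handler = logging.StreamHandler(buf)
handler.setFormatter(MiniJsonFormatter())
log = logging.getLogger("json_demo")
log.addHandler(handler)
log.setLevel(logging.INFO)

log.info("order_created", extra={"extra_fields": {"order_id": "ORD-001"}})

line = json.loads(buf.getvalue().strip())
print(line)
```

Filebeat/Logstash按行切分后,这样的输出无需grok解析即可直接写入Elasticsearch,`order_id`等业务字段天然可聚合、可检索。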

2.5.3 健康检查端点

#!/usr/bin/env python3
"""
【2.5.3】健康检查端点:Kubernetes Liveness/Readiness探针实现
内容:实现健康检查接口、依赖项检查、渐进式就绪、自定义检查
依赖:fastapi>=0.100, psutil>=5.9, sqlalchemy>=2.0
"""

from __future__ import annotations
import asyncio
import time
from typing import Dict, List, Optional, Callable
from dataclasses import dataclass, field
from enum import Enum
from datetime import datetime

from fastapi import FastAPI, HTTPException, status
from fastapi.responses import JSONResponse
import psutil
import sqlalchemy as sa
from sqlalchemy.exc import SQLAlchemyError


class HealthStatus(Enum):
    """健康状态"""
    HEALTHY = "healthy"
    DEGRADED = "degraded"  # 降级(部分功能不可用)
    UNHEALTHY = "unhealthy"


@dataclass
class HealthCheckResult:
    """健康检查结果"""
    name: str
    status: HealthStatus
    response_time_ms: float
    metadata: Dict = field(default_factory=dict)
    error: Optional[str] = None


class HealthChecker:
    """
    健康检查器
    管理多个检查项
    """
    
    def __init__(self):
        self._checks: Dict[str, Callable[[], HealthCheckResult]] = {}
        self._startup_time = time.time()
        self._ready = False
    
    def add_check(self, name: str, check_func: Callable[[], HealthCheckResult]) -> None:
        """添加检查项"""
        self._checks[name] = check_func
    
    def mark_ready(self) -> None:
        """标记服务就绪(启动完成后调用)"""
        self._ready = True
    
    async def check_liveness(self) -> Dict:
        """
        Liveness探针
        检查进程是否存活(简单检查)
        """
        # 检查内存使用
        memory = psutil.virtual_memory()
        if memory.percent > 95:
            raise HTTPException(
                status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
                detail="Memory critical"
            )
        
        return {
            "status": "alive",
            "pid": psutil.Process().pid,
            "uptime_seconds": time.time() - self._startup_time
        }
    
    async def check_readiness(self) -> JSONResponse:
        """
        Readiness探针
        检查服务是否可接收流量
        """
        if not self._ready:
            raise HTTPException(
                status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
                detail="Service not ready"
            )
        
        results: List[HealthCheckResult] = []
        overall_status = HealthStatus.HEALTHY
        
        for name, check_func in self._checks.items():
            start = time.time()
            try:
                result = check_func()
                result.response_time_ms = (time.time() - start) * 1000
                results.append(result)
                
                if result.status == HealthStatus.UNHEALTHY:
                    overall_status = HealthStatus.UNHEALTHY
                elif result.status == HealthStatus.DEGRADED and overall_status == HealthStatus.HEALTHY:
                    overall_status = HealthStatus.DEGRADED
                    
            except Exception as e:
                results.append(HealthCheckResult(
                    name=name,
                    status=HealthStatus.UNHEALTHY,
                    response_time_ms=(time.time() - start) * 1000,
                    error=str(e)
                ))
                overall_status = HealthStatus.UNHEALTHY
        
        response = {
            "status": overall_status.value,
            "checks": [
                {
                    "name": r.name,
                    "status": r.status.value,
                    "response_time_ms": round(r.response_time_ms, 2),
                    "metadata": r.metadata,
                    "error": r.error
                }
                for r in results
            ],
            "timestamp": datetime.now().isoformat()
        }
        
        http_status = (status.HTTP_200_OK if overall_status == HealthStatus.HEALTHY 
                      else status.HTTP_503_SERVICE_UNAVAILABLE)
        
        return JSONResponse(content=response, status_code=http_status)
    
    async def check_startup(self) -> Dict:
        """
        Startup探针
        检查应用是否启动完成
        """
        # 检查关键依赖是否初始化
        if not self._checks:
            raise HTTPException(
                status_code=status.HTTP_503_SERVICE_UNAVAILABLE,
                detail="Checks not configured"
            )
        
        return {"status": "started"}


# 具体检查实现
class DatabaseHealthCheck:
    """数据库健康检查"""
    
    def __init__(self, engine: sa.engine.Engine):
        self._engine = engine
    
    def __call__(self) -> HealthCheckResult:
        start = time.time()
        try:
            with self._engine.connect() as conn:
                conn.execute(sa.text("SELECT 1"))
            
            return HealthCheckResult(
                name="database",
                status=HealthStatus.HEALTHY,
                response_time_ms=(time.time() - start) * 1000,
                metadata={"type": "postgresql"}
            )
        except SQLAlchemyError as e:
            return HealthCheckResult(
                name="database",
                status=HealthStatus.UNHEALTHY,
                response_time_ms=(time.time() - start) * 1000,
                error=str(e)
            )


class KafkaHealthCheck:
    """Kafka健康检查"""
    
    def __init__(self, bootstrap_servers: str):
        self._bootstrap = bootstrap_servers
    
    def __call__(self) -> HealthCheckResult:
        # 实际应检查Kafka连接
        return HealthCheckResult(
            name="kafka",
            status=HealthStatus.HEALTHY,
            response_time_ms=10,
            metadata={"bootstrap_servers": self._bootstrap}
        )


class DiskSpaceCheck:
    """磁盘空间检查"""
    
    def __init__(self, threshold_percent: float = 90):
        self._threshold = threshold_percent
    
    def __call__(self) -> HealthCheckResult:
        disk = psutil.disk_usage('/')
        # 命名避免遮蔽从fastapi导入的status模块
        check_status = HealthStatus.HEALTHY if disk.percent < self._threshold else HealthStatus.UNHEALTHY
        
        return HealthCheckResult(
            name="disk_space",
            status=check_status,
            response_time_ms=0,
            metadata={
                "total_gb": disk.total // (2**30),
                "used_gb": disk.used // (2**30),
                "free_gb": disk.free // (2**30),
                "percent_used": disk.percent
            }
        )


# FastAPI应用
app = FastAPI()
health_checker = HealthChecker()

@app.on_event("startup")
async def startup():
    """启动时配置检查"""
    # 配置检查项
    engine = sa.create_engine("postgresql://localhost/db")
    health_checker.add_check("database", DatabaseHealthCheck(engine))
    health_checker.add_check("disk", DiskSpaceCheck())
    health_checker.add_check("kafka", KafkaHealthCheck("localhost:9092"))
    
    # 模拟启动延迟
    await asyncio.sleep(5)
    health_checker.mark_ready()

@app.get("/healthz")
async def liveness():
    """Liveness探针"""
    return await health_checker.check_liveness()

@app.get("/readyz")
async def readiness():
    """Readiness探针"""
    return await health_checker.check_readiness()

@app.get("/startupz")
async def startup_probe():
    """Startup探针"""
    return await health_checker.check_startup()


if __name__ == "__main__":
    import asyncio
    import uvicorn
    uvicorn.run(app, host="0.0.0.0", port=8000)
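上文 `check_readiness` 的聚合规则可以抽成一个纯函数单独验证:任一检查项UNHEALTHY则整体UNHEALTHY,否则只要有DEGRADED就整体DEGRADED。下面是一个独立的最小演示(枚举与函数名为演示用假设):

```python
from enum import Enum
from typing import List

class Health(Enum):
    HEALTHY = "healthy"
    DEGRADED = "degraded"
    UNHEALTHY = "unhealthy"

def aggregate(check_results: List[Health]) -> Health:
    """按最差状态聚合:UNHEALTHY > DEGRADED > HEALTHY"""
    overall = Health.HEALTHY
    for s in check_results:
        if s == Health.UNHEALTHY:
            return Health.UNHEALTHY  # 短路:任一失败即整体失败
        if s == Health.DEGRADED:
            overall = Health.DEGRADED  # 记录降级,但继续看有无更差的
    return overall

print(aggregate([Health.HEALTHY, Health.DEGRADED]).value)
```

把聚合逻辑独立成纯函数后,Readiness探针的判定规则可以脱离FastAPI和真实依赖做单元测试,这正是K8s返回503前最需要保证正确的一段代码。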

2.5.4 混沌测试

#!/usr/bin/env python3
"""
【2.5.4】混沌测试:Chaos Monkey随机杀死服务容器验证容错
内容:实现故障注入、随机杀死策略、自动恢复测试、金丝雀分析
依赖:docker>=6.0, kubernetes>=28.0, scipy>=1.11
"""

from __future__ import annotations
import random
import time
import logging
from typing import List, Optional, Dict, Callable
from dataclasses import dataclass
from enum import Enum
from abc import ABC, abstractmethod

import docker
from kubernetes import client, config


logger = logging.getLogger(__name__)


class FailureType(Enum):
    """故障类型"""
    POD_KILL = "pod_kill"                    # 杀死Pod
    NETWORK_LATENCY = "network_latency"      # 网络延迟
    CPU_STRESS = "cpu_stress"               # CPU压力
    MEMORY_STRESS = "memory_stress"         # 内存压力
    DISK_FILL = "disk_fill"                 # 磁盘填满


@dataclass
class ChaosExperiment:
    """混沌实验定义"""
    name: str
    failure_type: FailureType
    target_service: str
    duration_seconds: int
    intensity: float  # 0-1,影响范围百分比


class ChaosMonkey(ABC):
    """
    混沌猴子抽象
    """
    
    @abstractmethod
    def inject_failure(self, experiment: ChaosExperiment) -> bool:
        raise NotImplementedError
    
    @abstractmethod
    def recover(self, experiment: ChaosExperiment) -> bool:
        raise NotImplementedError


class KubernetesChaosMonkey(ChaosMonkey):
    """
    Kubernetes混沌猴子
    使用K8s API进行故障注入
    """
    
    def __init__(self, namespace: str = "default"):
        config.load_kube_config()
        self._v1 = client.CoreV1Api()
        self._apps_v1 = client.AppsV1Api()
        self._namespace = namespace
    
    def inject_failure(self, experiment: ChaosExperiment) -> bool:
        """注入故障"""
        if experiment.failure_type == FailureType.POD_KILL:
            return self._kill_random_pod(experiment.target_service, experiment.intensity)
        elif experiment.failure_type == FailureType.NETWORK_LATENCY:
            return self._inject_network_latency(experiment)
        return False
    
    def _kill_random_pod(self, service_name: str, intensity: float) -> bool:
        """随机杀死Pod"""
        pods = self._v1.list_namespaced_pod(
            namespace=self._namespace,
            label_selector=f"app={service_name}"
        )
        
        if not pods.items:
            logger.warning(f"未找到Pod: {service_name}")
            return False
        
        # 根据强度选择Pod数量
        num_to_kill = max(1, int(len(pods.items) * intensity))
        victims = random.sample(pods.items, num_to_kill)
        
        for pod in victims:
            logger.info(f"Chaos Monkey杀死Pod: {pod.metadata.name}")
            self._v1.delete_namespaced_pod(
                name=pod.metadata.name,
                namespace=self._namespace
            )
        
        return True
    
    def _inject_network_latency(self, experiment: ChaosExperiment) -> bool:
        """注入网络延迟(使用NetworkChaos CRD,简化实现)"""
        # 实际应使用Litmus或Chaos Mesh
        logger.info(f"注入网络延迟: {experiment.duration_seconds}s")
        return True
    
    def recover(self, experiment: ChaosExperiment) -> bool:
        """恢复(K8s自动重建Pod,无需手动恢复)"""
        logger.info(f"等待K8s自动恢复: {experiment.target_service}")
        # 等待Deployment就绪
        time.sleep(10)
        return True


class DockerChaosMonkey(ChaosMonkey):
    """
    Docker混沌猴子(本地开发测试)
    """
    
    def __init__(self):
        self._client = docker.from_env()
        self._stopped_containers: List[str] = []
    
    def inject_failure(self, experiment: ChaosExperiment) -> bool:
        """停止容器"""
        containers = self._client.containers.list(
            filters={"label": f"service={experiment.target_service}"}
        )
        
        if not containers:
            return False
        
        victims = random.sample(
            containers, 
            max(1, int(len(containers) * experiment.intensity))
        )
        
        for container in victims:
            logger.info(f"停止容器: {container.name}")
            container.stop(timeout=10)
            self._stopped_containers.append(container.id)
        
        return True
    
    def recover(self, experiment: ChaosExperiment) -> bool:
        """恢复容器"""
        for container_id in self._stopped_containers:
            container = self._client.containers.get(container_id)
            container.start()
            logger.info(f"恢复容器: {container.name}")
        
        self._stopped_containers.clear()
        return True


class ChaosExperimentRunner:
    """
    混沌实验运行器
    执行实验并监控恢复
    """
    
    def __init__(self, monkey: ChaosMonkey, health_check: Callable[[], bool]):
        self._monkey = monkey
        self._health_check = health_check
        self._results: List[Dict] = []
    
    def run_experiment(self, experiment: ChaosExperiment) -> Dict:
        """
        运行单次实验
        """
        logger.info(f"开始实验: {experiment.name}")
        
        # 记录基线
        baseline_healthy = self._health_check()
        start_time = time.time()
        
        # 注入故障
        success = self._monkey.inject_failure(experiment)
        if not success:
            return {"status": "failed", "reason": "injection_failed"}
        
        # 监控恢复
        recovery_time = None
        for i in range(experiment.duration_seconds):
            time.sleep(1)
            if self._health_check() and not recovery_time:
                recovery_time = time.time() - start_time
                logger.info(f"服务已恢复,耗时: {recovery_time:.2f}s")
                break
        
        # 自动恢复
        self._monkey.recover(experiment)
        
        result = {
            "experiment": experiment.name,
            "baseline_healthy": baseline_healthy,
            "injection_success": success,
            "recovery_time_seconds": recovery_time,
            "total_duration": time.time() - start_time,
            "timestamp": time.time()
        }
        
        self._results.append(result)
        return result
    
    def run_suite(self, experiments: List[ChaosExperiment]) -> List[Dict]:
        """运行实验套件,实验之间留出间隔,等待系统完全稳定"""
        results = []
        for i, exp in enumerate(experiments):
            results.append(self.run_experiment(exp))
            if i < len(experiments) - 1:
                time.sleep(30)  # 实验间隔,最后一次实验后无需等待
        return results


class ResilienceScoreCalculator:
    """
    弹性评分计算器
    基于实验结果计算系统弹性分数
    """
    
    def calculate_score(self, results: List[Dict]) -> Dict:
        """
        计算弹性分数
        考虑恢复时间、成功率等
        """
        if not results:
            return {"score": 0, "grade": "F"}
        
        total = len(results)
        # 显式判断 is not None,避免 0 秒恢复被当作"未恢复"
        recovered = [
            r["recovery_time_seconds"]
            for r in results
            if r.get("recovery_time_seconds") is not None
        ]
        successful_recoveries = len(recovered)
        avg_recovery_time = (
            sum(recovered) / successful_recoveries
            if successful_recoveries > 0 else float('inf')
        )
        
        # 评分算法
        recovery_rate = successful_recoveries / total
        speed_score = max(0, 1 - (avg_recovery_time / 60))  # 60秒内恢复得满分
        
        score = (recovery_rate * 0.7 + speed_score * 0.3) * 100
        
        grade = "A" if score >= 90 else "B" if score >= 80 else "C" if score >= 70 else "D" if score >= 60 else "F"
        
        return {
            "score": round(score, 2),
            "grade": grade,
            "recovery_rate": round(recovery_rate, 2),
            "avg_recovery_time": round(avg_recovery_time, 2) if avg_recovery_time != float('inf') else None,
            "total_experiments": total
        }
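评分公式可以用一组假设的实验结果手工验算一遍(实验名与恢复耗时均为虚构示例数据),帮助理解 70% 恢复率权重与 30% 恢复速度权重的组合方式:

```python
# 假设两次实验均在限时内恢复,恢复耗时分别为 12 秒和 24 秒
results = [
    {"experiment": "kill_order_service", "recovery_time_seconds": 12.0},
    {"experiment": "kill_inventory_service", "recovery_time_seconds": 24.0},
]

recovered = [r["recovery_time_seconds"] for r in results
             if r.get("recovery_time_seconds") is not None]
recovery_rate = len(recovered) / len(results)        # 1.0:两次都恢复了
avg_recovery = sum(recovered) / len(recovered)       # (12 + 24) / 2 = 18.0 秒
speed_score = max(0, 1 - avg_recovery / 60)          # 1 - 18/60 = 0.7
score = (recovery_rate * 0.7 + speed_score * 0.3) * 100  # (0.7 + 0.21) * 100

print(f"score={score:.2f}")  # score=91.00,按阈值对应等级 A(>= 90)
```

可见恢复率是主导项:哪怕平均恢复耗时逼近 60 秒上限,只要全部实验都能恢复,分数仍不会低于 70。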


# 使用示例
if __name__ == "__main__":
    # 初始化
    monkey = DockerChaosMonkey()
    
    def simple_health_check() -> bool:
        # 实际应检查服务健康
        return True
    
    runner = ChaosExperimentRunner(monkey, simple_health_check)
    
    # 定义实验套件
    experiments = [
        ChaosExperiment(
            name="kill_order_service",
            failure_type=FailureType.POD_KILL,
            target_service="order-service",
            duration_seconds=60,
            intensity=0.5  # 杀死50%实例
        ),
        ChaosExperiment(
            name="kill_inventory_service",
            failure_type=FailureType.POD_KILL,
            target_service="inventory-service",
            duration_seconds=60,
            intensity=0.3
        )
    ]
    
    # 运行实验
    results = runner.run_suite(experiments)
    
    # 计算弹性分数
    calculator = ResilienceScoreCalculator()
    score = calculator.calculate_score(results)
    
    print(f"弹性评分: {score['score']}/100 (等级: {score['grade']})")
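示例中的 simple_health_check 只是占位。若被测服务暴露了 HTTP 健康端点(下面的端口与路径 /healthz 均为假设,需按实际服务调整),可以用标准库 urllib 写一个最小实现:

```python
import urllib.request


def http_health_check(url: str = "http://localhost:8000/healthz",
                      timeout: float = 2.0) -> bool:
    """请求健康端点,2xx 视为健康;连接失败或超时一律视为不健康"""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False
```

之后用 `ChaosExperimentRunner(monkey, http_health_check)` 替换占位函数即可;注意 timeout 应明显小于轮询间隔(1 秒),否则健康检查本身会拖慢恢复时间的测量。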

以上20个脚本完整覆盖了项目二的所有五级目录节点,包括服务拆分、异步通信、分布式事务、数据一致性和可观测性等微服务核心领域。每个脚本均可独立运行,包含完整的类型提示、异常处理和日志记录。
