深度剖析 Fantasy 框架的多协议网络通信(一):TCP 可靠传输与连接管控
分布式系统中,网络通信的核心诉求之一是 TCP 式的可靠传输,它能为数据一致性场景(如账号登录、交易确认)提供无丢失、按序到达的底层保障。
作为分布式原生框架,Fantasy 原生集成 TCP 协议栈,开发者无需为适配 TCP 单独搭建通信链路,可直接基于框架封装的接口实现可靠传输。
以下将基于源码拆解,围绕协议抽象接口与 TCP 实现细节,解析框架如何落地 TCP 的可靠性与连接全生命周期管理。
框架设计概览:多协议通信的整体架构
Fantasy 框架多协议通信采用 "抽象统一 - 实现差异" 分层架构,通过 "接口 + 抽象基类 + 具体实现" 模式,实现协议特性与业务逻辑解耦,兼顾通信一致性与各协议独特优势。
核心组件关系图谱
从代码结构看,TCP 通信机制的核心组件涵盖连接建立到消息处理全流程,可归纳为五类:
协议抽象层:以 `INetworkChannel` 为核心定义网络通道的通用交互接口;以 `ANetwork` 为核心提供网络管理基础能力;以 `AClientNetwork` 为核心封装客户端网络的连接、断开等标准行为。
协议实现层:基于抽象层的具体 TCP 协议实现 `TCPServerNetwork`/`TCPClientNetwork`/`TCPServerNetworkChannel`,负责流式传输的粘包拆包与可靠连接管理。
连接管理层:以 `Session` 为核心封装连接上下文(如定位客户端的 `RemoteEndPoint`、关联底层传输的 `Channel`、用于心跳检测的 `LastReceiveTime` 等),同时提供 `Send`(消息发送)、`Receive`(消息接收)接口作为业务层与协议层的交互载体。其派生类 `ProcessSession` 用于服务器内部节点通信,强化了路由消息处理能力。`Session` 的生命周期由网络通道(如 `ANetworkServerChannel`)管理,在通道初始化时创建,断开时销毁。
数据处理层:以数据的 "编解码 - 调度 - 分发" 全链路处理为核心,封装协议数据与业务消息的转换逻辑:
- 编解码:`PacketParserFactory` 创建不同类型的解析器(如 `ReadOnlyMemoryPacketParser`、`BufferPacketParser`),实现数据包的封装(`Pack`)与解包(`UnPack`),适配不同协议的字节流格式;
- 消息调度:`InnerMessageScheduler`(内部消息)、`ClientMessageScheduler`(客户端消息)等实现消息的异步调度与分发,将解析后的消息路由至对应业务处理器;
- 消息处理:`MessageDispatcherComponent` 维护消息类型与处理器的映射,通过 `IMessageHandler`、`IRouteMessageHandler` 接口规范业务逻辑实现,确保消息按预设规则被处理。
辅助层:`NetworkType`/`NetworkTarget` 枚举定义网络角色(客户端/服务器)与通信范围(对内/对外),`NetworkProtocolType` 枚举定义协议类型(TCP/KCP 等);`MemoryStreamBufferPool` 实现内存流缓冲区复用;`Scene.ThreadSynchronizationContext` 确保消息在场景线程中安全处理。
这些组件的协作流程可细化为 "发送 - 接收 - 处理" 全链路,各环节与核心组件精准对应,可概括为:
业务层触发连接/发送需求
→ NetworkProtocolFactory 创建 TCP/KCP 协议实例
→ Session 封装连接上下文并绑定协议通道
→ 协议实现层处理传输特性(TCP 粘包/KCP 重传)
→ 数据通过 Socket 收发
→ 解析器解码消息
→ 消息调度器转发至业务处理器
→ 业务逻辑执行
协议抽象:多协议通信的 "通用契约"
Fantasy 多协议适配的核心是 "同构接口,异构实现":通过抽象接口与抽象类体系定义所有协议的行为契约,并封装其间的共性逻辑,减少重复编码。
INetworkChannel接口:网络通道的统一交互标准
`INetworkChannel` 是框架所有网络通道的顶层交互标准,旨在剥离底层协议差异,为 TCP、KCP、WebSocket 等所有通道制定统一的交互方式,确保上层业务都能通过该接口无差别操作连接。其源码位于 `Runtime/Core/Network/Protocol/Interface/INetworkChannel.cs`:
csharp
using System;
using System.IO;
using Fantasy.Serialize;
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
namespace Fantasy.Network.Interface
{
public interface INetworkChannel : IDisposable
{
public Session Session { get;}
public bool IsDisposed { get;}
public void Send(uint rpcId, long routeId, MemoryStreamBuffer memoryStream, IMessage message);
}
}
INetworkChannel 的核心价值:从协议差异到统一交互
`INetworkChannel` 通过三个核心成员,构建 "底层异构、上层同构" 的交互体系,覆盖客户端与服务器端的共性需求:
连接上下文的统一入口:`Session Session { get; }` 强制所有通道关联 `Session` 对象,封装连接的核心上下文。无论是客户端 "一对一" 连接(如 `TCPClientNetwork`),还是服务器端 "一对多" 连接(如 `TCPServerNetworkChannel`),上层业务都能通过 `channel.Session` 获取一致的连接信息,无需区分协议类型(TCP/KCP)或网络角色(客户端/服务器)。
生命周期状态的标准化标识:`bool IsDisposed { get; }` 配合 `IDisposable` 接口,定义统一的连接释放状态。无论协议特性如何(TCP 需关闭 Socket、KCP 需清理会话等),业务层都能通过 `IsDisposed` 判断连接有效性,避免对已释放通道执行无效操作(如重复发送、销毁)。
消息发送的跨协议统一接口:`void Send(...)` 方法以固定参数列表(远程调用标识、路由标识、内存缓冲区、消息体)构建协议无关的发送契约。不论底层协议差异如何(TCP 的粘包处理、KCP 的重传机制、WebSocket 的帧封装等),业务层只需调用 `Send`,即可适配所有协议。
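上述 "底层异构、上层同构" 的设计,可用一个脱离框架的最小 C# 小例示意(`IChannel`、`TcpLikeChannel`、`BusinessLayer` 等名称均为假设,仅复现接口契约的形态,非框架源码):

```csharp
using System;
using System.Collections.Generic;

// 示意:业务层只依赖统一接口,协议差异封装在实现内部
public interface IChannel : IDisposable
{
    bool IsDisposed { get; }
    void Send(string message);
}

public sealed class TcpLikeChannel : IChannel
{
    public bool IsDisposed { get; private set; }
    public List<string> SentLog { get; } = new List<string>();

    // 协议特有细节(粘包处理、重传等)全部隐藏在 Send 的实现中
    public void Send(string message) => SentLog.Add("tcp:" + message);
    public void Dispose() => IsDisposed = true;
}

public static class BusinessLayer
{
    // 业务层面向接口编程:先用 IsDisposed 判断有效性,再统一调用 Send
    public static bool TrySend(IChannel channel, string message)
    {
        if (channel.IsDisposed) return false;
        channel.Send(message);
        return true;
    }
}
```

无论底层换成哪种协议实现,`BusinessLayer.TrySend` 的调用方式都不变,这正是 `INetworkChannel` 三个成员共同构成的契约价值。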
ANetworkServerChannel:服务器端网络通道的 "特化抽象骨架"
`ANetworkServerChannel` 是服务器端所有网络通道的基类,实现 `INetworkChannel` 接口以对齐通用交互标准,并针对服务器端多连接管理、连接生命周期与全局联动特性,封装专属属性与基础逻辑,为 `TCPServerNetworkChannel`、`KCPServerNetworkChannel` 等协议实现提供统一骨架,是通用接口契约与服务器场景需求的关键衔接层。其源码位于 `Runtime/Core/Network/Protocol/ANetworkServerChannel.cs`:
csharp
#if FANTASY_NET
using System.IO;
using System.Net;
using Fantasy.Serialize;
#pragma warning disable CS8618 // Non-nullable field must contain a non-null value when exiting constructor. Consider declaring as nullable.
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
namespace Fantasy.Network.Interface
{
public abstract class ANetworkServerChannel : INetworkChannel
{
/// <summary>
/// 获取通道的唯一标识 ID。
/// </summary>
public readonly uint Id;
/// <summary>
/// 获取通道的远程终端点。
/// </summary>
public readonly EndPoint RemoteEndPoint;
/// <summary>
/// 获取或设置通道所属的场景。
/// </summary>
public Scene Scene { get; protected set; }
/// <summary>
/// 获取或设置通道所属的会话。
/// </summary>
public Session Session { get; protected set; }
/// <summary>
/// 获取通道是否已经被释放。
/// </summary>
public bool IsDisposed { get; protected set; }
protected ANetworkServerChannel(ANetwork network, uint id, EndPoint remoteEndPoint)
{
Id = id;
Scene = network.Scene;
RemoteEndPoint = remoteEndPoint;
Session = Session.Create(network.NetworkMessageScheduler, this, network.NetworkTarget);
}
public virtual void Dispose()
{
IsDisposed = true;
if (!Session.IsDisposed)
{
Session.Dispose();
}
}
public abstract void Send(uint rpcId, long routeId, MemoryStreamBuffer memoryStream, IMessage message);
}
}
#endif
ANetworkServerChannel的核心价值:从通用契约到服务器端特化管理
连接基础信息的体系化承载:通过构造逻辑与只读属性,为服务器端单连接构建 "可定位、可溯源、可关联" 的基础信息链,直接支撑多连接并发管理:
- `uint Id`(连接定位的唯一标识):由全局网络管理类(如 `TCPServerNetwork`)生成带协议特征的唯一值(如 TCP 协议用 `0xC0000000 | 随机数`),构造时传入并固化为只读属性;作为 `_connectionChannel` 字典的键,是全局查找、定向销毁连接的 "定位锚点",确保多连接环境下无标识冲突。
- `EndPoint RemoteEndPoint`(连接溯源依据):从 `Socket` 连接的远程端点(如 `SocketAsyncEventArgs.AcceptSocket.RemoteEndPoint`)获取,构造时传入并固化,存储客户端 IP 与端口信息,为服务器端日志监控(如 "某 IP 连接异常")、连接权限校验(如黑名单拦截)提供溯源依据。
- `Scene`(连接与场景的绑定桥梁):继承自全局网络实例(`network.Scene`),将通道与框架的 "场景" 概念强绑定,确保后续消息处理、资源操作均在对应场景线程中执行,避免多线程操作引发的数据冲突。
- `Session`(业务交互的统一入口):构造时通过 `Session.Create(network.NetworkMessageScheduler, this, network.NetworkTarget)` 自动初始化,关联全局消息调度器与通信范围(对内/对外)。无论底层是什么协议,上层业务均通过 `Session` 收发消息,实现 "业务与协议解耦"。
协议扩展的平衡设计:通过 "基础逻辑封装 + 虚/抽象方法预留",兼容不同协议的实现差异。虚方法 `Dispose` 允许派生类补充协议特有的资源释放逻辑,例如 `TCPServerNetworkChannel` 重写时会关闭 `Socket`、清空发送缓冲区,`KCPServerNetworkChannel` 则清理重传队列与会话状态,兼容不同协议的底层资源特性;抽象方法 `Send` 强制派生类实现协议特有的发送逻辑(如 TCP 的粘包处理、KCP 的帧封装与重传控制)。
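这种 "虚方法补充释放 + 抽象方法强制发送" 的扩展模式,可用如下独立小例示意(`ServerChannelBase`、`FakeTcpChannel` 均为假设名称,非框架源码,仅演示扩展点的调用顺序):

```csharp
using System;
using System.Collections.Generic;

// 示意:"基础逻辑封装 + 虚/抽象方法预留"的扩展骨架
public abstract class ServerChannelBase : IDisposable
{
    public bool IsDisposed { get; protected set; }

    // 虚方法:基类只处理通用状态,派生类可补充协议特有释放逻辑
    public virtual void Dispose() => IsDisposed = true;

    // 抽象方法:强制派生类实现协议特有的发送细节
    public abstract void Send(byte[] data);
}

public sealed class FakeTcpChannel : ServerChannelBase
{
    public bool SocketClosed { get; private set; }
    public List<byte[]> Sent { get; } = new List<byte[]>();

    public override void Dispose()
    {
        SocketClosed = true; // 模拟关闭 Socket、清空发送缓冲等 TCP 特有清理
        base.Dispose();      // 再执行基类的通用释放(置位 IsDisposed 等)
    }

    public override void Send(byte[] data) => Sent.Add(data);
}
```

派生类先做协议特有清理、再调用 `base.Dispose()` 的顺序,与框架中 `TCPServerNetworkChannel.Dispose` 的写法一致。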
ANetwork与AClientNetwork抽象类:共性能力封装与分层约束
网络角色标识:场景划分的核心维度
`NetworkType`(客户端/服务器角色)、`NetworkTarget`(对内/对外范围)、`NetworkProtocolType`(TCP/KCP 等协议类型)是 `ANetwork` 体系中定义的核心标识。其源码位于 `Runtime/Core/Network/Protocol/NetworkProtocolType.cs`:
csharp
namespace Fantasy.Network
{
/// <summary>
/// 网络服务器类型
/// </summary>
public enum NetworkType
{
/// <summary>
/// 默认
/// </summary>
None = 0,
/// <summary>
/// 客户端网络
/// </summary>
Client = 1,
#if FANTASY_NET
/// <summary>
/// 服务器网络
/// </summary>
Server = 2
#endif
}
/// <summary>
/// 网络服务的目标
/// </summary>
public enum NetworkTarget
{
/// <summary>
/// 默认
/// </summary>
None = 0,
/// <summary>
/// 对外
/// </summary>
Outer = 1,
#if FANTASY_NET
/// <summary>
/// 对内
/// </summary>
Inner = 2
#endif
}
/// <summary>
/// 支持的网络协议
/// </summary>
public enum NetworkProtocolType
{
/// <summary>
/// 默认
/// </summary>
None = 0,
/// <summary>
/// KCP
/// </summary>
KCP = 1,
/// <summary>
/// TCP
/// </summary>
TCP = 2,
/// <summary>
/// WebSocket
/// </summary>
WebSocket = 3,
/// <summary>
/// HTTP
/// </summary>
HTTP = 4,
}
}
ANetwork:网络能力的基础骨架与共性封装
`ANetwork` 作为网络核心管理类的基类,封装了网络模块的核心属性与基础能力,通过 "抽象定义 + 子类实现" 统一网络管理骨架(如客户端/服务端网络逻辑的共性)。其源码位于 `Runtime/Core/Network/Protocol/Interface/ANetwork.cs`:
csharp
using System;
using System.Collections.Concurrent;
using System.Collections.Generic;
using System.IO;
using Fantasy.Entitas;
using Fantasy.PacketParser;
using Fantasy.Scheduler;
using Fantasy.Serialize;
#pragma warning disable CS8618 // Non-nullable field must contain a non-null value when exiting constructor. Consider declaring as nullable.
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
namespace Fantasy.Network.Interface
{
/// <summary>
/// 抽象网络基类。
/// </summary>
public abstract class ANetwork : Entity
{
private long _outerPackInfoId;
private Queue<OuterPackInfo> _outerPackInfoPool;
public readonly MemoryStreamBufferPool MemoryStreamBufferPool = new MemoryStreamBufferPool();
public NetworkType NetworkType { get; private set; }
public NetworkTarget NetworkTarget { get; private set; }
public NetworkProtocolType NetworkProtocolType { get; private set; }
public ANetworkMessageScheduler NetworkMessageScheduler { get; private set; }
protected void Initialize(NetworkType networkType, NetworkProtocolType networkProtocolType, NetworkTarget networkTarget)
{
NetworkType = networkType;
NetworkTarget = networkTarget;
NetworkProtocolType = networkProtocolType;
#if FANTASY_NET
if (networkProtocolType == NetworkProtocolType.HTTP)
{
return;
}
if (networkTarget == NetworkTarget.Inner)
{
_innerPackInfoPool = new Queue<InnerPackInfo>();
NetworkMessageScheduler = new InnerMessageScheduler(Scene);
return;
}
#endif
switch (networkType)
{
case NetworkType.Client:
{
_outerPackInfoPool = new Queue<OuterPackInfo>();
NetworkMessageScheduler = new ClientMessageScheduler(Scene);
break;
}
#if FANTASY_NET
case NetworkType.Server:
{
_outerPackInfoPool = new Queue<OuterPackInfo>();
NetworkMessageScheduler = new OuterMessageScheduler(Scene);
break;
}
#endif
}
}
public abstract void RemoveChannel(uint channelId);
public OuterPackInfo RentOuterPackInfo()
{
if (_outerPackInfoPool.Count == 0)
{
return new OuterPackInfo()
{
PackInfoId = ++_outerPackInfoId
};
}
if (!_outerPackInfoPool.TryDequeue(out var outerPackInfo))
{
return new OuterPackInfo()
{
PackInfoId = ++_outerPackInfoId
};
}
outerPackInfo.PackInfoId = ++_outerPackInfoId;
return outerPackInfo;
}
public void ReturnOuterPackInfo(OuterPackInfo outerPackInfo)
{
if (_outerPackInfoPool.Count > 512)
{
// 池子里最多缓存512个,超出的直接丢弃交给GC。
// 上限设置得越大,内存占用越多。
return;
}
_outerPackInfoPool.Enqueue(outerPackInfo);
}
#if FANTASY_NET
private long _innerPackInfoId;
private Queue<InnerPackInfo> _innerPackInfoPool;
public InnerPackInfo RentInnerPackInfo()
{
if (_innerPackInfoPool.Count == 0)
{
return new InnerPackInfo()
{
PackInfoId = ++_innerPackInfoId
};
}
if (!_innerPackInfoPool.TryDequeue(out var innerPackInfo))
{
return new InnerPackInfo()
{
PackInfoId = ++_innerPackInfoId
};
}
innerPackInfo.PackInfoId = ++_innerPackInfoId;
return innerPackInfo;
}
public void ReturnInnerPackInfo(InnerPackInfo innerPackInfo)
{
if (_innerPackInfoPool.Count > 256)
{
// 池子里最多缓存256个,超出的直接丢弃交给GC。
// 上限设置得越大,内存占用越多。
return;
}
_innerPackInfoPool.Enqueue(innerPackInfo);
}
#endif
public override void Dispose()
{
NetworkType = NetworkType.None;
NetworkTarget = NetworkTarget.None;
NetworkProtocolType = NetworkProtocolType.None;
MemoryStreamBufferPool.Dispose();
_outerPackInfoPool?.Clear();
#if FANTASY_NET
_innerPackInfoPool?.Clear();
#endif
base.Dispose();
}
}
}
类的核心价值:从结构统一到多协议协同
网络标识的标准化定义:通过 `NetworkType`、`NetworkTarget`、`NetworkProtocolType` 三个核心属性,构建网络实例的 "身份标识体系";框架通过该标识精准区分网络场景(如 "客户端对外的 TCP 连接"、"服务器对内的 KCP 通信"),为后续差异化处理(如消息调度、数据包格式)提供判断依据。
通用能力的封装与复用:作为所有网络实现的基类,封装多协议共需的基础能力:
- 数据包池化管理:通过 `RentOuterPackInfo`/`ReturnOuterPackInfo`(对外通信)和 `RentInnerPackInfo`/`ReturnInnerPackInfo`(对内通信,仅 Net 环境)实现 `OuterPackInfo`/`InnerPackInfo` 对象的复用。池大小限制(对外 512、对内 256)平衡了内存占用与对象创建开销,减少高频通信场景的 GC。
- 内存资源复用:内置 `MemoryStreamBufferPool` 管理网络传输缓冲区,发送/接收数据时从池租赁 `MemoryStreamBuffer`,使用后归还,避免 `MemoryStream` 频繁创建销毁导致的性能损耗。
- 消息调度器适配:在 `Initialize` 方法中根据网络标识自动绑定对应调度器(如客户端绑定 `ClientMessageScheduler`,内部通信绑定 `InnerMessageScheduler`),统一消息分发的线程模型。
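其中 "队列复用 + 上限截断" 的池化模式,可抽象为如下独立小例(`SimplePool` 为假设名称,非框架源码,容量判断方式与 `ReturnOuterPackInfo` 一致):

```csharp
using System.Collections.Generic;

// 示意:Rent 优先复用队列中的旧对象,Return 超过上限则直接丢弃交给 GC
public sealed class SimplePool<T> where T : class, new()
{
    private readonly Queue<T> _pool = new Queue<T>();
    private readonly int _capacity;

    public int Count => _pool.Count;
    public SimplePool(int capacity) => _capacity = capacity;

    // 池空时新建,否则取出复用,避免高频 new 带来的 GC 压力
    public T Rent() => _pool.Count > 0 ? _pool.Dequeue() : new T();

    public void Return(T item)
    {
        if (_pool.Count > _capacity) return; // 与框架一致:超过上限不再缓存
        _pool.Enqueue(item);
    }
}
```

高频通信下,绝大多数 `Rent` 命中队列中的旧对象,只有峰值流量才触发少量新建,这就是框架对外 512/对内 256 上限取舍的出发点。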
生命周期的规范化管理:通过 `Initialize` 与 `Dispose` 方法定义网络实例的完整生命周期:`Initialize` 根据网络类型、协议类型和通信范围初始化资源(如创建数据包池、绑定调度器),确保不同场景的网络实例启动时状态一致;`Dispose` 统一释放资源:重置标识属性、销毁内存流池、清空数据包池,避免资源泄露。子类可在此基础上扩展协议特有资源的释放(如关闭 Socket 连接)。
扩展点的预留与约束:`RemoveChannel` 抽象方法强制子类实现通道移除逻辑,保证上层调用的统一性(所有协议均通过该方法移除通道),同时允许子类根据协议特性定制实现。
AClientNetwork:客户端网络的行为特化与分层约束
`AClientNetwork` 作为客户端网络的基类,继承 `ANetwork` 复用共性资源能力,实现 `INetworkChannel` 对齐交互标准,并通过抽象方法约束客户端特有逻辑(主动连接、协议专属发送),统一客户端网络管理骨架。其源码位于 `Runtime/Core/Network/Protocol/Interface/AClientNetwork.cs`:
csharp
using System;
using System.IO;
using Fantasy.Serialize;
// ReSharper disable ConditionIsAlwaysTrueOrFalseAccordingToNullableAPIContract
#pragma warning disable CS8625 // Cannot convert null literal to non-nullable reference type.
#pragma warning disable CS8618 // Non-nullable field must contain a non-null value when exiting constructor. Consider declaring as nullable.
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
namespace Fantasy.Network.Interface
{
/// <summary>
/// 抽象客户端网络基类。
/// </summary>
public abstract class AClientNetwork : ANetwork, INetworkChannel
{
protected bool IsInit;
public Session Session { get; protected set; }
public abstract Session Connect(string remoteAddress, Action onConnectComplete, Action onConnectFail, Action onConnectDisconnect, bool isHttps, int connectTimeout = 5000);
public abstract void Send(uint rpcId, long routeId, MemoryStreamBuffer memoryStream, IMessage message);
public override void Dispose()
{
IsInit = false;
if (Session != null)
{
if (!Session.IsDisposed)
{
Session.Dispose();
}
Session = null;
}
base.Dispose();
}
}
}
类的核心价值:客户端网络的抽象规范与约束框架
连接上下文的标准载体设计:通过 `INetworkChannel` 的 `Session` 属性(`public Session Session { get; protected set; }`)定义统一的连接信息载体,对外提供只读访问(确保上层获取连接信息的一致性),对内开放 `protected set`(为子类预留赋值入口),为所有客户端协议(TCP/KCP 等)预设 "连接上下文存储" 的标准;`IsInit` 作为状态标记,规范初始化状态的判断依据,避免子类重复设计状态管理逻辑。
核心行为的接口形式约束:以 `Connect`(含远程地址、回调、超时等参数)和 `Send`(对齐 `INetworkChannel` 参数)两个抽象方法,仅定义客户端 "连接" 与 "发送" 的接口形式(参数列表、返回值),不包含任何协议特有实现逻辑。这强制子类实现具体协议的连接/发送细节(如 TCP 握手、KCP 会话建立),同时确保上层调用方式完全统一,实现业务与协议的解耦。
核心协议的特化实现拆解:从抽象到具象的落地
前文基于 `INetworkChannel` 接口与 `ANetwork`/`AClientNetwork` 抽象类分析了多协议通信的 "统一骨架",但不同协议的核心价值(如 TCP 的可靠、KCP 的低延迟等)需通过特化实现落地。因此以下开始拆解 TCP、KCP 等的具体实现,填补 "抽象约定" 到 "实际通信" 的空白。
TCPServerNetwork:TCP 服务器的 "全局连接管控中枢"
作为 `ANetwork` 的 TCP 服务器特化实现,负责 TCP 端口监听与异步接受客户端连接,通过 `_connectionChannel` 字典管理所有 `TCPServerNetworkChannel` 通道,依托 `SocketAsyncEventArgs` 优化性能,并通过场景线程同步保障安全,销毁时清理所有连接资源,是 TCP 服务器高并发连接管理的核心载体。其源码位于 `Runtime/Core/Network/Protocol/TCP/Server/TCPServerNetwork.cs`:
csharp
#if FANTASY_NET
using System.Net;
using System.Net.Sockets;
using Fantasy.Helper;
using Fantasy.Network.Interface;
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
// ReSharper disable GCSuppressFinalizeForTypeWithoutDestructor
#pragma warning disable CS8622 // Nullability of reference types in type of parameter doesn't match the target delegate (possibly because of nullability attributes).
#pragma warning disable CS8625 // Cannot convert null literal to non-nullable reference type.
#pragma warning disable CS8618 // Non-nullable field must contain a non-null value when exiting constructor. Consider declaring as nullable.
namespace Fantasy.Network.TCP
{
public sealed class TCPServerNetwork : ANetwork
{
private Random _random;
private Socket _socket;
private SocketAsyncEventArgs _acceptAsync;
private readonly Dictionary<uint, INetworkChannel> _connectionChannel = new Dictionary<uint, INetworkChannel>();
public void Initialize(NetworkTarget networkTarget, IPEndPoint address)
{
base.Initialize(NetworkType.Server, NetworkProtocolType.TCP, networkTarget);
_random = new Random();
_acceptAsync = new SocketAsyncEventArgs();
_socket = new Socket(address.AddressFamily, SocketType.Stream, ProtocolType.Tcp);
_socket.SetSocketOption(SocketOptionLevel.Socket, SocketOptionName.ReuseAddress, false);
if (address.AddressFamily == AddressFamily.InterNetworkV6)
{
_socket.SetSocketOption(SocketOptionLevel.IPv6, SocketOptionName.IPv6Only, false);
}
_socket.Bind(address);
_socket.Listen(int.MaxValue);
_socket.SetSocketBufferToOsLimit();
Log.Info($"SceneConfigId = {Scene.SceneConfigId} networkTarget = {networkTarget.ToString()} TCPServer Listen {address}");
_acceptAsync.Completed += OnCompleted;
AcceptAsync();
}
public override void Dispose()
{
if (IsDisposed)
{
return;
}
try
{
foreach (var networkChannel in _connectionChannel.Values.ToArray())
{
networkChannel.Dispose();
}
_connectionChannel.Clear();
_random = null;
_socket.Dispose();
_socket = null;
_acceptAsync.Dispose();
_acceptAsync = null;
GC.SuppressFinalize(this);
}
catch (Exception e)
{
Log.Error(e);
}
finally
{
base.Dispose();
}
}
private void AcceptAsync()
{
_acceptAsync.AcceptSocket = null;
if (_socket.AcceptAsync(_acceptAsync))
{
return;
}
OnAcceptComplete(_acceptAsync);
}
private void OnAcceptComplete(SocketAsyncEventArgs asyncEventArgs)
{
if (asyncEventArgs.AcceptSocket == null)
{
return;
}
if (asyncEventArgs.SocketError != SocketError.Success)
{
Log.Error($"Socket Accept Error: {_acceptAsync.SocketError}");
return;
}
try
{
uint channelId;
do
{
channelId = 0xC0000000 | (uint)_random.Next();
} while (_connectionChannel.ContainsKey(channelId));
_connectionChannel.Add(channelId, new TCPServerNetworkChannel(this, asyncEventArgs.AcceptSocket, channelId));
}
catch (Exception e)
{
Log.Error(e);
}
finally
{
AcceptAsync();
}
}
public override void RemoveChannel(uint channelId)
{
if (IsDisposed || !_connectionChannel.Remove(channelId, out var channel))
{
return;
}
if (channel.IsDisposed)
{
return;
}
channel.Dispose();
}
#region 网络线程(由Socket底层产生的线程)
private void OnCompleted(object sender, SocketAsyncEventArgs asyncEventArgs)
{
switch (asyncEventArgs.LastOperation)
{
case SocketAsyncOperation.Accept:
{
Scene.ThreadSynchronizationContext.Post(() =>
{
OnAcceptComplete(asyncEventArgs);
});
break;
}
default:
{
throw new Exception($"Socket Accept Error: {asyncEventArgs.LastOperation}");
}
}
}
#endregion
}
}
#endif
类的核心价值:从服务器特化到 TCP 连接全局管控
多连接上下文的标识与管理体系:通过 `NetworkType.Server` 与 `NetworkProtocolType.TCP` 锚定服务器角色,依托 `_connectionChannel` 字典(`Dictionary<uint, INetworkChannel>`)构建 "`channelId` - 通道" 映射关系,支撑多客户端并发连接场景。`channelId` 采用 `0xC0000000` 高位标识,确保与其他协议通道的唯一区分,为上层业务定位特定 TCP 连接提供精准依据。
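`channelId` 的生成逻辑可单独抽出如下小例(生成表达式与 `OnAcceptComplete` 中的写法一致,`ChannelIdGenerator` 为示意名称):

```csharp
using System;
using System.Collections.Generic;

// 示意:高 2 位恒为 1(0xC0000000)作为 TCP 服务器通道的协议特征位,
// 与已有连接冲突时重新生成,保证字典键唯一
public static class ChannelIdGenerator
{
    public static uint Next(Random random, ICollection<uint> existing)
    {
        uint channelId;
        do
        {
            channelId = 0xC0000000 | (uint)random.Next(); // random.Next() 落在 [0, int.MaxValue)
        } while (existing.Contains(channelId));           // 冲突则重新生成
        return channelId;
    }
}
```

由于 `random.Next()` 的结果不超过 31 位,按位或之后最高两位必然保留 `0xC0000000` 的特征,上层据此即可区分 TCP 服务器通道与其他协议的 ID 空间。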
逻辑的封装与复用:TCP 服务器的核心实现聚焦多连接管理的共性能力封装:
- 连接全流程管控:`Initialize` 完成 Socket 初始化(绑定地址、设置缓冲区、启动监听),其中 `_socket.SetSocketOption` 的两次关键配置为跨环境兼容奠定基础:`SocketOptionName.ReuseAddress = false` 禁用地址复用,避免多进程/实例绑定同一端口导致的连接路由冲突,确保服务器端口独占;若为 IPv6 地址族,`SocketOptionName.IPv6Only = false` 允许服务器同时处理 IPv4 和 IPv6 客户端连接,无需单独启动双栈监听实例,降低跨网络环境部署复杂度。结合 `AcceptAsync` 开启的异步连接接受循环与 `OnAcceptComplete` 的通道创建逻辑,形成 "监听 - 接入 - 维护" 的完整链路。
- 异步事件的线程安全调度:`OnCompleted` 方法作为 `SocketAsyncEventArgs` 的完成事件回调,承担 "网络线程与业务线程桥接" 的核心作用。当 `_socket.AcceptAsync` 未能同步完成(返回 `true`)时,操作完成后底层 Socket 会在 IO 线程触发 `Completed` 事件;此时 `OnCompleted` 检查操作类型(仅处理 `SocketAsyncOperation.Accept`),并通过 `Scene.ThreadSynchronizationContext.Post` 将 `OnAcceptComplete` 调度至场景线程(业务线程)执行,避免多线程直接操作 `_connectionChannel`(共享资源)导致的数据冲突,是高并发场景下数据一致性的关键保障。
- 基类资源复用:继承 `ANetwork` 的 `MemoryStreamBufferPool` 和 `OuterPackInfo` 池化能力,通道创建与数据传输时可直接租赁资源,降低高频连接场景的对象创建与 GC 开销,无需单独实现资源池管理。
生命周期的服务器特化管理:在 `ANetwork` 生命周期基础上,`Dispose` 调用 `base.Dispose()` 清理基类资源,并通过三重操作实现彻底释放:
- 遍历销毁 `_connectionChannel` 中所有活跃通道,确保客户端连接资源(Socket、缓冲区)被释放;
- 关闭监听 `Socket`、释放 `SocketAsyncEventArgs` 等底层资源,避免端口占用或句柄泄漏;
- 调用 `GC.SuppressFinalize(this)` 通知垃圾回收器当前对象已通过 `Dispose` 主动释放所有资源,无需再执行终结器(Finalizer),减少 GC 对对象的二次扫描,加速资源回收,避免服务器端因终结器延迟导致的资源冗余。
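"重复释放保护 + `GC.SuppressFinalize`" 的 Dispose 模式可用独立小例示意(`ResourceHolder` 为假设名称,非框架源码):

```csharp
using System;

// 示意:Dispose 幂等 + SuppressFinalize,避免终结器带来的二次回收开销
public sealed class ResourceHolder : IDisposable
{
    public bool IsDisposed { get; private set; }
    public int DisposeCount { get; private set; }

    public void Dispose()
    {
        if (IsDisposed) return;      // 重复调用保护,对应框架的 IsDisposed/_isInnerDispose 双重校验
        IsDisposed = true;
        DisposeCount++;
        // ...此处释放 Socket、事件参数等资源...
        GC.SuppressFinalize(this);   // 告知 GC:已主动释放,无需再执行终结器
    }
}
```

无论上层调用多少次 `Dispose`,实际清理只执行一次,与 `TCPServerNetwork.Dispose` 开头的 `IsDisposed` 判断语义一致。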
扩展点的约束与统一:通过实现 `ANetwork` 的抽象方法,强制遵循服务器端连接管理规范;同时由 `TCPServerNetworkChannel` 实现 TCP 特化的收发逻辑(如粘包处理),既保持服务器全局管控逻辑的统一性,又为协议细节扩展预留灵活空间。
TCPServerNetworkChannel:TCP 服务器的 "单连接处理单元"
作为 `ANetworkServerChannel` 的密封(sealed)TCP 协议特化实现,承接单个客户端 Socket 连接的全生命周期,核心通过 "Pipe 流处理 + 协议解析 + 异步发送队列" 实现 TCP 字节流的可靠交互,是 TCP 服务器 "全局管控 → 单连接落地" 的核心执行载体。其源码位于 `Runtime/Core/Network/Protocol/TCP/Server/TCPServerNetworkChannel.cs`:
csharp
#if FANTASY_NET
using System.Buffers;
using System.IO.Pipelines;
using System.Net.Sockets;
using Fantasy.Async;
using Fantasy.Network.Interface;
using Fantasy.PacketParser;
using Fantasy.Serialize;
// ReSharper disable ConditionIsAlwaysTrueOrFalseAccordingToNullableAPIContract
#pragma warning disable CS8625 // Cannot convert null literal to non-nullable reference type.
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
#pragma warning disable CS8600 // Converting null literal or possible null value to non-nullable type.
#pragma warning disable CS8602 // Dereference of a possibly null reference.
#pragma warning disable CS8604 // Possible null reference argument.
#pragma warning disable CS8622 // Nullability of reference types in type of parameter doesn't match the target delegate (possibly because of nullability attributes).
namespace Fantasy.Network.TCP
{
public sealed class TCPServerNetworkChannel : ANetworkServerChannel
{
private bool _isSending;
private bool _isInnerDispose;
private readonly Socket _socket;
private readonly ANetwork _network;
private readonly Pipe _pipe = new Pipe();
private readonly SocketAsyncEventArgs _sendArgs;
private readonly ReadOnlyMemoryPacketParser _packetParser;
private readonly Queue<MemoryStreamBuffer> _sendBuffers = new Queue<MemoryStreamBuffer>();
private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
public TCPServerNetworkChannel(ANetwork network, Socket socket, uint id) : base(network, id, socket.RemoteEndPoint)
{
_socket = socket;
_network = network;
_socket.NoDelay = true;
_sendArgs = new SocketAsyncEventArgs();
_sendArgs.Completed += OnSendCompletedHandler;
_packetParser = PacketParserFactory.CreateServerReadOnlyMemoryPacket(network);
ReadPipeDataAsync().Coroutine();
ReceiveSocketAsync().Coroutine();
}
public override void Dispose()
{
if (IsDisposed || _isInnerDispose)
{
return;
}
_isInnerDispose = true;
_network.RemoveChannel(Id);
if (!_cancellationTokenSource.IsCancellationRequested)
{
try
{
_cancellationTokenSource.Cancel();
}
catch (OperationCanceledException)
{
// 通常情况下,此处的异常可以忽略
}
}
if (_socket != null)
{
_socket.Shutdown(SocketShutdown.Both);
_socket.Close();
}
_sendBuffers.Clear();
_packetParser.Dispose();
_isSending = false;
base.Dispose();
}
#region ReceiveSocket
private async FTask ReceiveSocketAsync()
{
while (!_cancellationTokenSource.IsCancellationRequested)
{
try
{
var memory = _pipe.Writer.GetMemory(8192);
var count = await _socket.ReceiveAsync(memory, SocketFlags.None, _cancellationTokenSource.Token);
if (count == 0)
{
Dispose();
return;
}
_pipe.Writer.Advance(count);
await _pipe.Writer.FlushAsync();
}
catch (SocketException)
{
Dispose();
break;
}
catch (OperationCanceledException)
{
break;
}
catch (ObjectDisposedException)
{
Dispose();
break;
}
catch (Exception ex)
{
Log.Error($"Unexpected exception: {ex.Message}");
}
}
await _pipe.Writer.CompleteAsync();
}
#endregion
#region ReceivePipeData
private async FTask ReadPipeDataAsync()
{
var pipeReader = _pipe.Reader;
while (!_cancellationTokenSource.IsCancellationRequested)
{
ReadResult result = default;
try
{
result = await pipeReader.ReadAsync(_cancellationTokenSource.Token);
}
catch (OperationCanceledException)
{
// 出现这个异常表示取消了_cancellationTokenSource。一般Channel断开会取消。
break;
}
var buffer = result.Buffer;
var consumed = buffer.Start;
var examined = buffer.End;
while (TryReadMessage(ref buffer, out var message))
{
ReceiveData(ref message);
consumed = buffer.Start;
}
if (result.IsCompleted)
{
break;
}
pipeReader.AdvanceTo(consumed, examined);
}
await pipeReader.CompleteAsync();
}
private bool TryReadMessage(ref ReadOnlySequence<byte> buffer, out ReadOnlyMemory<byte> message)
{
if (buffer.Length == 0)
{
message = default;
return false;
}
message = buffer.First;
if (message.Length == 0)
{
message = default;
return false;
}
buffer = buffer.Slice(message.Length);
return true;
}
private void ReceiveData(ref ReadOnlyMemory<byte> buffer)
{
try
{
while (_packetParser.UnPack(ref buffer, out var packInfo))
{
if (_cancellationTokenSource.IsCancellationRequested)
{
return;
}
Session.Receive(packInfo);
}
}
catch (ScanException e)
{
Log.Warning($"RemoteAddress:{RemoteEndPoint} \n{e}");
Dispose();
}
catch (Exception e)
{
Log.Error($"RemoteAddress:{RemoteEndPoint} \n{e}");
Dispose();
}
}
#endregion
#region Send
public override void Send(uint rpcId, long routeId, MemoryStreamBuffer memoryStream, IMessage message)
{
_sendBuffers.Enqueue(_packetParser.Pack(ref rpcId, ref routeId, memoryStream, message));
if (!_isSending)
{
Send();
}
}
private void Send()
{
if (_isSending || IsDisposed)
{
return;
}
_isSending = true;
while (_sendBuffers.Count > 0)
{
var memoryStreamBuffer = _sendBuffers.Dequeue();
_sendArgs.UserToken = memoryStreamBuffer;
_sendArgs.SetBuffer(new ArraySegment<byte>(memoryStreamBuffer.GetBuffer(), 0, (int)memoryStreamBuffer.Position));
try
{
if (_socket.SendAsync(_sendArgs))
{
break;
}
ReturnMemoryStream(memoryStreamBuffer);
}
catch
{
_isSending = false;
return;
}
}
_isSending = false;
}
private void ReturnMemoryStream(MemoryStreamBuffer memoryStream)
{
if (memoryStream.MemoryStreamBufferSource == MemoryStreamBufferSource.Pack)
{
_network.MemoryStreamBufferPool.ReturnMemoryStream(memoryStream);
}
}
private void OnSendCompletedHandler(object sender, SocketAsyncEventArgs asyncEventArgs)
{
if (asyncEventArgs.SocketError != SocketError.Success || asyncEventArgs.BytesTransferred == 0)
{
_isSending = false;
return;
}
var memoryStreamBuffer = (MemoryStreamBuffer)asyncEventArgs.UserToken;
Scene.ThreadSynchronizationContext.Post(() =>
{
ReturnMemoryStream(memoryStreamBuffer);
if (_sendBuffers.Count > 0)
{
Send();
}
else
{
_isSending = false;
}
});
}
#endregion
}
}
#endif
类的核心价值:TCP 服务器端单连接的专属交互中枢
基类标识的无缝承接:通过构造函数 `base(network, id, socket.RemoteEndPoint)`,直接沿用 `uint Id`(全局唯一连接 ID,可在 `_connectionChannel` 字典中精准定位该连接)、`EndPoint RemoteEndPoint`(客户端 IP 与端口,方便日志溯源和异常定位)、`Scene`(绑定业务线程,保障数据处理的线程安全)以及 `Session`(基类封装的业务交互入口),让单连接自然融入服务器的全局管理体系。
TCP 特有组件的设计:类内私有字段的针对性初始化解析:
- `bool _isSending`:TCP 发送流程的 "互斥锁",动态标记当前是否处于发送状态,避免并发调用 `Send` 导致字节流乱序;
- `bool _isInnerDispose`:私有销毁标识,标记类内是否已触发销毁逻辑,与基类 `IsDisposed` 配合形成双重校验,避免重复调用 `Dispose` 导致资源重复释放(如 `Socket` 重复关闭、`Pipe` 重复完成);
- `Socket _socket`:绑定客户端的 TCP 连接 Socket,设置 `_socket.NoDelay = true`(禁用 Nagle 算法)避免小数据包合并发送导致的延迟,适配实时通信场景;
- `Pipe _pipe`:TCP "字节流碎片化" 的核心解决方案。TCP 数据常拆段发送(如 "登录请求" 拆为 20B+30B)或合并到达(如 "登录 + 聊天" 消息粘包),`Pipe` 通过 "写端存数据(`ReceiveSocketAsync` 写入)、读端拆数据(`ReadPipeDataAsync` 读取)" 的双端模型,解耦 "接收" 与 "解析" 的异步流程,避免直接操作字节流的混乱;
- `SocketAsyncEventArgs _sendArgs`:复用的异步发送载体,绑定 `OnSendCompletedHandler` 回调,高频发送时无需重复创建对象;`Send` 方法中通过 `_sendArgs.SetBuffer` 绑定数据、`_sendArgs.UserToken` 关联内存缓冲,大幅降低 GC 压力;
- `ReadOnlyMemoryPacketParser _packetParser`:TCP 消息专用编解码器,通过 `PacketParserFactory.CreateServerReadOnlyMemoryPacket(network)` 从框架工厂获取,确保 TCP 消息 "包头 + 包体" 的编解码格式与其他协议一致,后续 `Send` 的 `Pack`、`ReceiveData` 的 `UnPack` 均依赖此组件;
- `Queue<MemoryStreamBuffer> _sendBuffers`:TCP 消息有序发送的 "缓冲队列",并发调用 `Send` 时先将编码后的 `MemoryStreamBuffer` 入队,再由私有 `Send` 方法串行消费,避免 TCP 字节流乱序;
- `CancellationTokenSource _cancellationTokenSource`:管理 "接收 - 解析" 异步协程生命周期的核心组件,构造函数中启动的 `ReceiveSocketAsync`、`ReadPipeDataAsync` 均通过 `_cancellationTokenSource.Token` 绑定取消信号,连接销毁时通过 `Cancel()` 一键终止所有异步流程,避免无效资源占用。
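其中 `_isSending` 与 `_sendBuffers` 配合的有序发送模式,可简化为如下同步小例(`OrderedSender` 为假设名称,非框架源码;真实实现中消费动作是异步的 `Socket.SendAsync`):

```csharp
using System.Collections.Generic;

// 示意:"先入队、互斥标记、串行消费"保证发送顺序
public sealed class OrderedSender
{
    private bool _isSending;
    private readonly Queue<string> _sendBuffers = new Queue<string>();
    public List<string> Sent { get; } = new List<string>();

    public void Send(string data)
    {
        _sendBuffers.Enqueue(data);  // 无论是否正在发送,先入队保证顺序
        if (!_isSending)
        {
            Flush();                 // 空闲时才启动消费,避免交叉写入字节流
        }
    }

    private void Flush()
    {
        _isSending = true;
        while (_sendBuffers.Count > 0)
        {
            Sent.Add(_sendBuffers.Dequeue()); // 真实实现中此处是异步 Socket 发送
        }
        _isSending = false;
    }
}
```

入队与消费分离后,即使 `Send` 被连续触发,数据也按入队顺序依次写出,这正是 TCP 字节流不乱序的前提。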
以 Pipe 为核心的 "Socket 接收 → 缓冲 → 解析" 流程:通过 `ReceiveSocketAsync`(Socket 读数据)、`ReadPipeDataAsync`(Pipe 读缓冲)、`TryReadMessage`(拆内存块)、`ReceiveData`(协议解析)四级协作,解决 TCP 粘包/拆包问题:
- `ReceiveSocketAsync`:Socket 数据写入 Pipe。异步循环从 `_socket` 读数据并写入 `Pipe` 写端:通过 `_pipe.Writer.GetMemory(8192)` 申请 8KB 内存块(平衡效率与内存占用),`await _socket.ReceiveAsync` 异步接收,无数据时挂起等待而不阻塞其他连接;接收长度为 0(客户端主动断开)或捕获 `SocketException`(网络断连)时,直接调用 `Dispose` 清理;接收成功后通过 `_pipe.Writer.Advance(count)` 标记有效数据长度,`await _pipe.Writer.FlushAsync()` 确认写入 Pipe。
- `ReadPipeDataAsync`:Pipe 缓冲读取与拆分调度。从 `Pipe` 读端循环取数据:`await pipeReader.ReadAsync` 获取 `ReadResult`(含 `buffer` 缓冲数据与 `IsCompleted` 状态),通过 `while (TryReadMessage(ref buffer, out var message))` 循环拆分内存块,每拆分一块调用 `ReceiveData` 解析,并以 `consumed = buffer.Start` 标记已处理位置;最终通过 `pipeReader.AdvanceTo(consumed, examined)` 通知 Pipe 清理已处理数据,避免内存堆积。
- `TryReadMessage`:物理层面拆分内存块。仅做 "内存段拆分",不涉及协议逻辑:从 `buffer`(`ReadOnlySequence<byte>` 多段内存集合)中取 `buffer.First`(第一块内存)作为 `message`,再通过 `buffer = buffer.Slice(message.Length)` 裁剪缓冲(如 `buffer` 为 `[20B,30B]`,第一次拆分后变为 `[30B]`),确保后续解析无重复。
- `ReceiveData`:协议解析与业务转发。调用 `_packetParser.UnPack(ref buffer, out var packInfo)` 按协议解析:若 `buffer` 数据不足(拆包场景),`UnPack` 返回 `false`,等待下次 Pipe 写入新数据;若数据足够(粘包场景),循环解析出多个 `packInfo`,通过 `Session.Receive` 转发;捕获 `ScanException`(数据格式错误)或其他异常时,记录客户端地址并调用 `Dispose`,防止非法连接占用资源。
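粘包/拆包的核心判断("数据不足则等待、数据足够则循环解包")可用一个长度前缀的独立小例复现(4 字节长度包头为假设格式,非框架真实协议格式,框架中由 `ReadOnlyMemoryPacketParser` 定义包结构):

```csharp
using System;
using System.Collections.Generic;

// 示意:增量解包器。Feed 模拟 TCP 分片到达(对应 Pipe 写端写入),
// TryUnpack 模拟 UnPack 的返回语义(false 表示数据不足、等待下次写入)
public sealed class LengthPrefixParser
{
    private readonly List<byte> _buffer = new List<byte>();

    public static byte[] Pack(byte[] body)
    {
        var packet = new byte[4 + body.Length];
        BitConverter.GetBytes(body.Length).CopyTo(packet, 0); // 4 字节长度包头
        body.CopyTo(packet, 4);
        return packet;
    }

    public void Feed(byte[] chunk) => _buffer.AddRange(chunk);

    public bool TryUnpack(out byte[] body)
    {
        body = null;
        if (_buffer.Count < 4) return false;                  // 包头都不完整(拆包)
        var len = BitConverter.ToInt32(_buffer.ToArray(), 0);
        if (_buffer.Count < 4 + len) return false;            // 包体未到齐(拆包)
        body = _buffer.GetRange(4, len).ToArray();
        _buffer.RemoveRange(0, 4 + len);                      // 剩余字节属于下一个包(粘包)
        return true;
    }
}
```

两条消息一次到达时,`TryUnpack` 会连续返回两次 `true`(粘包);消息被截断时返回 `false`,等待下一次 `Feed`(拆包),与 `ReceiveData` 中 `while (_packetParser.UnPack(...))` 的循环语义一致。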
TCPClientNetwork:TCP 客户端的 "单连接全流程交互载体"
作为 `AClientNetwork` 的密封(sealed)TCP 协议特化实现,专注客户端 "主动连接 TCP 服务器" 场景,复用服务器端核心设计(Pipe 流处理、`_packetParser` 粘包拆包、`_sendBuffers` 队列与 `_isSending` 保障发送有序、`SocketAsyncEventArgs` 优化异步性能),并补充主动连接流程与超时管控(`_connectTimeoutId`)、连接状态回调(`_onConnectComplete` 等)、兼容 Unity 2021 的 `ReceiveFromAsync` 适配,是 TCP 客户端 "连接发起 → 交互 → 断开清理" 的专属载体。其源码位于 `Runtime/Core/Network/Protocol/TCP/TCPClientNetwork.cs`:
csharp
#if !FANTASY_WEBGL
using System;
using System.Buffers;
using System.Collections.Generic;
using System.IO;
using System.IO.Pipelines;
using System.Net;
using System.Net.Sockets;
using System.Runtime.InteropServices;
using System.Threading;
using Fantasy.Async;
using Fantasy.Helper;
using Fantasy.Network.Interface;
using Fantasy.PacketParser;
using Fantasy.Serialize;
// ReSharper disable ConditionIsAlwaysTrueOrFalseAccordingToNullableAPIContract
#pragma warning disable CS8602 // Dereference of a possibly null reference.
#pragma warning disable CS8625 // Cannot convert null literal to non-nullable reference type.
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
#pragma warning disable CS8618 // Non-nullable field must contain a non-null value when exiting constructor. Consider declaring as nullable.
#pragma warning disable CS8604 // Possible null reference argument.
#pragma warning disable CS8600 // Converting null literal or possible null value to non-nullable type.
#pragma warning disable CS8622 // Nullability of reference types in type of parameter doesn't match the target delegate (possibly because of nullability attributes).
namespace Fantasy.Network.TCP
{
public sealed class TCPClientNetwork : AClientNetwork
{
private bool _isSending;
private bool _isInnerDispose;
private long _connectTimeoutId;
private Socket _socket;
private IPEndPoint _remoteEndPoint;
private SocketAsyncEventArgs _sendArgs;
private ReadOnlyMemoryPacketParser _packetParser;
private readonly Pipe _pipe = new Pipe();
private readonly Queue<MemoryStreamBuffer> _sendBuffers = new Queue<MemoryStreamBuffer>();
private readonly CancellationTokenSource _cancellationTokenSource = new CancellationTokenSource();
private Action _onConnectFail;
private Action _onConnectComplete;
private Action _onConnectDisconnect;
public uint ChannelId { get; private set; }
public void Initialize(NetworkTarget networkTarget)
{
base.Initialize(NetworkType.Client, NetworkProtocolType.TCP, networkTarget);
}
public override void Dispose()
{
if (IsDisposed || _isInnerDispose)
{
return;
}
try
{
_isSending = false;
_isInnerDispose = true;
ClearConnectTimeout();
if (!_cancellationTokenSource.IsCancellationRequested)
{
try
{
_cancellationTokenSource.Cancel();
}
catch (OperationCanceledException)
{
// 通常情况下,此处的异常可以忽略
}
}
_onConnectDisconnect?.Invoke();
if (_socket.Connected)
{
_socket.Close();
_socket = null;
}
_sendBuffers.Clear();
_packetParser?.Dispose();
ChannelId = 0;
_sendArgs = null;
}
catch (Exception e)
{
Log.Error(e);
}
finally
{
base.Dispose();
}
}
/// <summary>
/// 连接到远程服务器。
/// </summary>
/// <param name="remoteAddress">远程服务器的终端点。</param>
/// <param name="onConnectComplete">连接成功时的回调。</param>
/// <param name="onConnectFail">连接失败时的回调。</param>
/// <param name="onConnectDisconnect">连接断开时的回调。</param>
/// <param name="isHttps"></param>
/// <param name="connectTimeout">连接超时时间,单位:毫秒。</param>
/// <returns>连接的会话。</returns>
public override Session Connect(string remoteAddress, Action onConnectComplete, Action onConnectFail, Action onConnectDisconnect, bool isHttps, int connectTimeout = 5000)
{
// 如果已经初始化过一次,抛出异常,要求重新实例化
if (IsInit)
{
throw new NotSupportedException("TCPClientNetwork Has already been initialized. If you want to call Connect again, please re instantiate it.");
}
IsInit = true;
_isSending = false;
_onConnectFail = onConnectFail;
_onConnectComplete = onConnectComplete;
_onConnectDisconnect = onConnectDisconnect;
// 设置连接超时定时器
_connectTimeoutId = Scene.TimerComponent.Net.OnceTimer(connectTimeout, () =>
{
_onConnectFail?.Invoke();
Dispose();
});
_packetParser = PacketParserFactory.CreateClientReadOnlyMemoryPacket(this);
_remoteEndPoint = NetworkHelper.GetIPEndPoint(remoteAddress);
_socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
_socket.NoDelay = true;
_socket.SetSocketBufferToOsLimit();
_sendArgs = new SocketAsyncEventArgs();
_sendArgs.Completed += OnSendCompleted;
var outArgs = new SocketAsyncEventArgs
{
RemoteEndPoint = _remoteEndPoint
};
outArgs.Completed += OnConnectSocketCompleted;
if (!_socket.ConnectAsync(outArgs))
{
OnReceiveSocketComplete();
}
Session = Session.Create(this, _remoteEndPoint);
return Session;
}
private void OnConnectSocketCompleted(object sender, SocketAsyncEventArgs asyncEventArgs)
{
if (_cancellationTokenSource.IsCancellationRequested)
{
return;
}
if (asyncEventArgs.LastOperation == SocketAsyncOperation.Connect)
{
if (asyncEventArgs.SocketError == SocketError.Success)
{
Scene.ThreadSynchronizationContext.Post(OnReceiveSocketComplete);
}
else
{
Scene.ThreadSynchronizationContext.Post(() =>
{
_onConnectFail?.Invoke();
Dispose();
});
}
}
}
private void OnReceiveSocketComplete()
{
ClearConnectTimeout();
_onConnectComplete?.Invoke();
ReadPipeDataAsync().Coroutine();
ReceiveSocketAsync().Coroutine();
}
#region ReceiveSocket
private async FTask ReceiveSocketAsync()
{
while (!_cancellationTokenSource.IsCancellationRequested)
{
try
{
var memory = _pipe.Writer.GetMemory(8192);
#if UNITY_2021
// Unity 2021.3.14f has a nasty bug: ReceiveAsync fails to write into the Memory correctly,
// so we have to fall back to ReceiveFromAsync, which only exposes an ArraySegment-based overload.
MemoryMarshal.TryGetArray(memory, out ArraySegment<byte> arraySegment);
var result = await _socket.ReceiveFromAsync(arraySegment, SocketFlags.None, _remoteEndPoint);
_pipe.Writer.Advance(result.ReceivedBytes);
#else
var count = await _socket.ReceiveAsync(memory, SocketFlags.None, _cancellationTokenSource.Token);
_pipe.Writer.Advance(count);
#endif
await _pipe.Writer.FlushAsync();
}
catch (SocketException)
{
Dispose();
break;
}
catch (OperationCanceledException)
{
break;
}
catch (ObjectDisposedException)
{
Dispose();
break;
}
catch (Exception ex)
{
Log.Error($"Unexpected exception: {ex.Message}");
}
}
await _pipe.Writer.CompleteAsync();
}
#endregion
#region ReceivePipeData
private async FTask ReadPipeDataAsync()
{
var pipeReader = _pipe.Reader;
while (!_cancellationTokenSource.IsCancellationRequested)
{
ReadResult result = default;
try
{
result = await pipeReader.ReadAsync(_cancellationTokenSource.Token);
}
catch (OperationCanceledException)
{
// This exception means _cancellationTokenSource was cancelled, which normally happens when the channel disconnects.
break;
}
var buffer = result.Buffer;
var consumed = buffer.Start;
var examined = buffer.End;
while (TryReadMessage(ref buffer, out var message))
{
ReceiveData(ref message);
consumed = buffer.Start;
}
if (result.IsCompleted)
{
break;
}
pipeReader.AdvanceTo(consumed, examined);
}
await pipeReader.CompleteAsync();
}
private bool TryReadMessage(ref ReadOnlySequence<byte> buffer, out ReadOnlyMemory<byte> message)
{
if (buffer.Length == 0)
{
message = default;
return false;
}
message = buffer.First;
if (message.Length == 0)
{
message = default;
return false;
}
buffer = buffer.Slice(message.Length);
return true;
}
private void ReceiveData(ref ReadOnlyMemory<byte> buffer)
{
try
{
while (_packetParser.UnPack(ref buffer, out var packInfo))
{
if (_cancellationTokenSource.IsCancellationRequested)
{
return;
}
Session.Receive(packInfo);
}
}
catch (ScanException e)
{
Log.Warning(e.Message);
Dispose();
}
catch (Exception e)
{
Log.Error(e);
Dispose();
}
}
#endregion
#region Send
public override void Send(uint rpcId, long routeId, MemoryStreamBuffer memoryStream, IMessage message)
{
_sendBuffers.Enqueue(_packetParser.Pack(ref rpcId, ref routeId, memoryStream, message));
if (!_isSending)
{
Send();
}
}
private void Send()
{
if (_isSending || IsDisposed)
{
return;
}
_isSending = true;
while (_sendBuffers.Count > 0)
{
var memoryStreamBuffer = _sendBuffers.Dequeue();
_sendArgs.UserToken = memoryStreamBuffer;
_sendArgs.SetBuffer(new ArraySegment<byte>(memoryStreamBuffer.GetBuffer(), 0, (int)memoryStreamBuffer.Position));
try
{
if (_socket.SendAsync(_sendArgs))
{
break;
}
ReturnMemoryStream(memoryStreamBuffer);
}
catch
{
_isSending = false;
return;
}
}
_isSending = false;
}
private void ReturnMemoryStream(MemoryStreamBuffer memoryStream)
{
if (memoryStream.MemoryStreamBufferSource == MemoryStreamBufferSource.Pack)
{
MemoryStreamBufferPool.ReturnMemoryStream(memoryStream);
}
}
private void OnSendCompleted(object sender, SocketAsyncEventArgs asyncEventArgs)
{
if (asyncEventArgs.SocketError != SocketError.Success || asyncEventArgs.BytesTransferred == 0)
{
_isSending = false;
return;
}
var memoryStreamBuffer = (MemoryStreamBuffer)asyncEventArgs.UserToken;
Scene.ThreadSynchronizationContext.Post(() =>
{
ReturnMemoryStream(memoryStreamBuffer);
if (_sendBuffers.Count > 0)
{
Send();
}
else
{
_isSending = false;
}
});
}
#endregion
public override void RemoveChannel(uint channelId)
{
Dispose();
}
private void ClearConnectTimeout()
{
if (_connectTimeoutId == 0)
{
return;
}
Scene?.TimerComponent?.Net?.Remove(ref _connectTimeoutId);
}
}
}
#endif
Core value of the class: the interaction hub for the full lifecycle of client-initiated TCP connections.
Connection-timeout and state-callback components: they bound the connection attempt in time and decouple connection state from upper-layer business logic via callbacks:

- `_connectTimeoutId`: the unique ID of the connection-timeout timer. `Connect` registers it via `Scene.TimerComponent.Net.OnceTimer` as a one-shot task (5000 ms by default); if the connection has not succeeded when it fires, `_onConnectFail` is invoked and the connection is disposed. The companion `ClearConnectTimeout` method cancels the timer once the connection succeeds or is disposed, so a stale timeout callback can never fire.
- The `Action` callbacks (`_onConnectComplete`/`_onConnectFail`/`_onConnectDisconnect`): supplied by upper-layer business code for the "connected", "connect failed", and "disconnected" states, e.g. initializing the UI on success or showing a network prompt on failure. This decouples business logic from low-level connection state. The server tracks connection state in a global dictionary instead, so it needs no such client-level callbacks.
Client-created TCP connection and its identifier: the client actively creates and configures its Socket, and identifies the connection by `ChannelId`:

- `_socket`: the client explicitly creates `new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp)` and always configures `_socket.NoDelay = true` (disable the Nagle algorithm to cut latency for real-time traffic) and `_socket.SetSocketBufferToOsLimit()` (raise the socket buffers to the OS limit to accommodate large messages). This differs from the server side, whose passively accepted sockets come from `AcceptAsync`.
- `ChannelId`: starts as 0 (the `uint` default) and is only assigned once the server synchronizes it through a business message such as a connection acknowledgment; on disposal it is reset to 0 to mark it invalid. The server, in contrast, generates a globally unique `ChannelId` the moment a connection is accepted.
The client-initiated `Connect` flow: a standardized end-to-end procedure, from validation through result handling, that keeps the active connection reliable:

- Initialization check: `Connect` first tests `IsInit` and throws if the instance was already initialized, ensuring one instance maps to exactly one connection and ruling out reuse conflicts;
- Component and timeout setup: resolve the server address into `_remoteEndPoint`, create `_packetParser`, wire up the `_sendArgs` completion callback, and register the timeout task;
- Initiating the connection: call `_socket.ConnectAsync`; if it completes synchronously, `OnReceiveSocketComplete` runs directly, otherwise the `OnConnectSocketCompleted` callback is awaited;
- Result handling: on success, cancel the timeout, invoke `_onConnectComplete`, and start the receive coroutines; on failure, invoke `_onConnectFail` and dispose the connection.
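The timeout guard in the flow above can be reproduced outside the framework. Below is a minimal stand-alone sketch (the `TryConnectAsync` helper is hypothetical, not a Fantasy API) that races `Socket.ConnectAsync` against a delay, which is functionally what the `OnceTimer` registration accomplishes:

```csharp
using System;
using System.Net;
using System.Net.Sockets;
using System.Threading.Tasks;

// Demo target: a loopback listener, so the connect attempt is deterministic.
var listener = new TcpListener(IPAddress.Loopback, 0);
listener.Start();
var ok = await TryConnectAsync(listener.LocalEndpoint, 5000);
listener.Stop();
Console.WriteLine(ok);

// Hypothetical helper (not a Fantasy API): race an async connect against a
// timeout, which is what the OnceTimer registered in Connect achieves.
static async Task<bool> TryConnectAsync(EndPoint endPoint, int timeoutMs)
{
    using var socket = new Socket(AddressFamily.InterNetwork, SocketType.Stream, ProtocolType.Tcp);
    socket.NoDelay = true; // the same Nagle-disabling option the framework sets
    var connectTask = socket.ConnectAsync(endPoint);
    if (await Task.WhenAny(connectTask, Task.Delay(timeoutMs)) != connectTask)
    {
        // Timeout fired first; the framework would invoke _onConnectFail and Dispose here.
        return false;
    }
    await connectTask; // surface any connect exception
    return true;
}
```

The framework uses a timer callback instead of `Task.WhenAny`, but the invariant is the same: exactly one of the "connected" and "timed out" paths wins, and the loser must be cancelled.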
Unity-specific receive path: `ReceiveSocketAsync` uses `#if UNITY_2021` conditional compilation. On Unity 2021, where `ReceiveAsync` has a bug that prevents the target memory from being written correctly, it falls back to `ReceiveFromAsync`, using `MemoryMarshal.TryGetArray` to convert the `Memory<byte>` into the `ArraySegment<byte>` that API requires; on other versions it uses the regular `ReceiveAsync`. The server runs outside Unity and needs no such workaround.
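The received bytes then go through the packet parser's `Pack`/`UnPack` pair, which is not listed in this section, but the sticky-packet/partial-packet problem it solves is easy to demonstrate. The sketch below assumes a 4-byte little-endian length prefix purely for illustration; it is not Fantasy's actual wire format:

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using System.Text;

// Hypothetical framing, NOT Fantasy's actual wire format: each frame is a
// 4-byte little-endian length prefix followed by the UTF-8 payload.
static byte[] Pack(string message)
{
    var payload = Encoding.UTF8.GetBytes(message);
    var frame = new byte[4 + payload.Length];
    BitConverter.GetBytes(payload.Length).CopyTo(frame, 0);
    payload.CopyTo(frame, 4);
    return frame;
}

// Consume every complete frame in the buffer; a trailing partial frame stays
// behind, just as UnPack returning false leaves bytes for the next read.
static List<string> UnPack(ref ReadOnlyMemory<byte> buffer)
{
    var messages = new List<string>();
    while (buffer.Length >= 4)
    {
        var length = BitConverter.ToInt32(buffer.Span);
        if (buffer.Length < 4 + length) break; // incomplete frame: wait for more bytes
        messages.Add(Encoding.UTF8.GetString(buffer.Slice(4, length).Span));
        buffer = buffer.Slice(4 + length);
    }
    return messages;
}

// Two logical messages fused into one TCP read ("sticky packets"):
ReadOnlyMemory<byte> buffer = Pack("login").Concat(Pack("heartbeat")).ToArray();
var parsed = UnPack(ref buffer);
Console.WriteLine(string.Join(",", parsed)); // login,heartbeat
```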
Client-specific disposal flow: focused on releasing resources and synchronizing state on the client side:

- Cancel the timeout task: call `ClearConnectTimeout` to remove a not-yet-fired timeout timer, so the timeout callback cannot run after disposal;
- Disconnect notification: invoke the `_onConnectDisconnect` callback to tell upper-layer business code the connection is gone (e.g. to start reconnection logic); the server has no such client-level disconnect notification;
- Identifier reset: reset `ChannelId` to 0 to mark the server-assigned identifier as invalid, completing the teardown of the client connection.
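Because `Dispose` is reachable from the timeout callback, the receive loop, and the send path, the cleanup above must also tolerate concurrent double entry. A common guard for this, sketched generically here rather than copied from Fantasy, is an atomic flag:

```csharp
using System;
using System.Threading;
using System.Threading.Tasks;

var conn = new Connection();
var notified = 0;
conn.OnDisconnect += () => notified++;
// Simulate the timeout callback, receive loop, and send path all racing into Dispose.
Parallel.Invoke(conn.Dispose, conn.Dispose, conn.Dispose);
Console.WriteLine($"{conn.CleanupRuns} {notified}"); // cleanup and notification happen once

// Generic sketch, not Fantasy's exact code: an atomic flag makes Dispose
// idempotent no matter how many code paths reach it concurrently.
sealed class Connection : IDisposable
{
    private int _disposed;
    public int CleanupRuns;            // exposed only for the demo above
    public event Action? OnDisconnect; // plays the role of _onConnectDisconnect

    public void Dispose()
    {
        // Only the first caller passes this gate; later calls are no-ops.
        if (Interlocked.Exchange(ref _disposed, 1) == 1) return;
        CleanupRuns++;                 // cancel timers, close the socket, reset ChannelId, ...
        OnDisconnect?.Invoke();        // tell upper layers, e.g. to start reconnecting
    }
}
```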
Client-specific environment adaptation: the key callbacks of TCP connection and data transfer (connect success, connect failure, send completion) are all marshaled via `Scene.ThreadSynchronizationContext.Post`, because Unity business logic such as UI operations and asset loading must run on the main thread; this avoids cross-thread exceptions. A server typically has no UI, so its business logic can run on IO threads or a custom thread pool with no forced main-thread dispatch.
NetworkProtocolFactory: the unified entry point for multi-protocol instantiation
As a factory class, `NetworkProtocolFactory` creates the matching network instance for a given protocol type (TCP/KCP/WebSocket/HTTP) and network role (client/server), hiding per-protocol construction details behind a single creation API for the upper layers. The core source lives in `Runtime/Core/Network/Protocol/NetworkProtocolFactory.cs`:
csharp
using System;
using System.Net;
using Fantasy.Entitas;
using Fantasy.Helper;
using Fantasy.Network.Interface;
#if !FANTASY_WEBGL
using Fantasy.Network.TCP;
using Fantasy.Network.KCP;
#endif
#if FANTASY_NET
using Fantasy.Network.HTTP;
#endif
using Fantasy.Network.WebSocket;
#pragma warning disable CS1591 // Missing XML comment for publicly visible type or member
namespace Fantasy.Network
{
internal static class NetworkProtocolFactory
{
#if FANTASY_NET
public static ANetwork CreateServer(Scene scene, NetworkProtocolType protocolType, NetworkTarget networkTarget, string bindIp, int port)
{
switch (protocolType)
{
case NetworkProtocolType.TCP:
{
var network = Entity.Create<TCPServerNetwork>(scene, false, false);
var address = NetworkHelper.ToIPEndPoint(bindIp, port);
network.Initialize(networkTarget, address);
return network;
}
case NetworkProtocolType.KCP:
{
var network = Entity.Create<KCPServerNetwork>(scene, false, true);
var address = NetworkHelper.ToIPEndPoint(bindIp, port);
network.Initialize(networkTarget, address);
return network;
}
case NetworkProtocolType.WebSocket:
{
var network = Entity.Create<WebSocketServerNetwork>(scene, false, true);
network.Initialize(networkTarget, bindIp, port);
return network;
}
case NetworkProtocolType.HTTP:
{
var network = Entity.Create<HTTPServerNetwork>(scene, false, true);
network.Initialize(networkTarget, bindIp, port);
return network;
}
default:
{
throw new NotSupportedException($"Unsupported NetworkProtocolType:{protocolType}");
}
}
}
#endif
public static AClientNetwork CreateClient(Scene scene, NetworkProtocolType protocolType, NetworkTarget networkTarget)
{
#if !FANTASY_WEBGL
switch (protocolType)
{
case NetworkProtocolType.TCP:
{
var network = Entity.Create<TCPClientNetwork>(scene, false, false);
network.Initialize(networkTarget);
return network;
}
case NetworkProtocolType.KCP:
{
var network = Entity.Create<KCPClientNetwork>(scene, false, true);
network.Initialize(networkTarget);
return network;
}
case NetworkProtocolType.WebSocket:
{
var network = Entity.Create<WebSocketClientNetwork>(scene, false, true);
network.Initialize(networkTarget);
return network;
}
default:
{
throw new NotSupportedException($"Unsupported NetworkProtocolType:{protocolType}");
}
}
#else
// WebGL builds can only use this protocol.
var network = Entity.Create<WebSocketClientNetwork>(scene, false, true);
network.Initialize(networkTarget);
return network;
#endif
}
}
}
Core value of the class: seamless protocol switching and extension

- Encapsulated instantiation logic: construction details that differ per protocol (TCP binds a port, KCP initializes retransmission parameters, and so on) are hidden; the upper layer just specifies a `NetworkProtocolType` to obtain the right instance, without knowing the concrete implementation class.
- Easy protocol extension: adding a protocol only requires implementing the `ANetwork`/`AClientNetwork` abstractions and adding a corresponding `case` branch in the factory, with no changes to upper-layer business code, in line with the open/closed principle.
- Cross-environment adaptation: conditional compilation (`#if FANTASY_NET`, `#if !FANTASY_WEBGL`, etc.) adapts the factory to server/client and WebGL/non-WebGL environments, so protocol instantiation always matches the runtime environment.
Summary: reliable TCP transport and connection management under a layered architecture
The Fantasy framework's TCP stack is built on the design principle of "unified abstraction, differentiated implementation", decoupling business from protocol through an interface / abstract-base-class / concrete-implementation hierarchy. The abstraction layer (`INetworkChannel` and related types) defines uniform standards for connection interaction and message sending; the implementation layer (`TCPServerNetwork`/`TCPClientNetwork`, etc.) supplies the TCP-specific logic, relying on Pipe-based stream processing to solve sticky-packet and partial-packet problems, and on the `_sendBuffers` queue plus state flags to keep sends ordered. The connection lifecycle forms a closed create/maintain/destroy loop: the server accepts connections asynchronously and manages them globally, while the client initiates connections actively with timeout control and state callbacks. Finally, `NetworkProtocolFactory` enables seamless multi-protocol switching across environments, giving distributed systems an efficient, reliable, and extensible communication foundation, and letting developers focus on business logic.