[Linux C/C++ Development] Chapter 26: System-Level Capstone Project Theory

26.1 Distributed Systems Architecture Theory

26.1.1 Foundations of Distributed Systems

A distributed system is a collection of independent computing nodes connected over a network that cooperate toward a common goal. For C++ systems programming, a grounding in distributed systems theory is essential for building high-performance, scalable applications.

The CAP Theorem

The CAP theorem (Consistency, Availability, Partition tolerance) is the core theoretical framework for distributed systems:

Consistency: every node sees the same view of the data at the same time
Availability: the system stays responsive; every request receives a response
Partition tolerance: the system keeps operating despite network partitions

Formally:

A distributed system can provide at most two of the three CAP properties at once; that is, ¬(Consistency ∧ Availability ∧ Partition tolerance).

The BASE Model

BASE complements CAP by trading strict consistency for basic availability, soft state, and eventual consistency:

Basically Available: the system tolerates a partial loss of availability during failures
Soft state: intermediate, not-yet-synchronized states are permitted
Eventually consistent: the system guarantees the data converges to a consistent state
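
To make eventual consistency concrete, here is a toy sketch (not from the original text): a last-write-wins register, one of the simplest convergence strategies. Replicas accept writes independently and converge once they exchange states:

cpp
#include <cstdint>
#include <string>
#include <utility>

// Last-write-wins register: each replica keeps the value with the highest
// timestamp; merging any two replicas, in any order, makes them converge.
struct LwwRegister {
    std::string value;
    uint64_t timestamp{0};

    void write(std::string v, uint64_t ts) {
        if (ts > timestamp) {
            value = std::move(v);
            timestamp = ts;
        }
    }

    // Anti-entropy merge: adopt the peer's state if it is newer.
    void merge(const LwwRegister& other) {
        write(other.value, other.timestamp);
    }
};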

26.1.2 Distributed Consensus Algorithms

The Paxos Algorithm

Paxos is the classic algorithm for distributed consensus; its correctness rests on majority (quorum) voting:

cpp
#include <cstdint>
#include <mutex>
#include <optional>
#include <set>
#include <string>

// Core of a single-decree Paxos acceptor
template<typename T>
class PaxosNode {
private:
    struct Proposal {
        uint64_t proposal_number;
        T value;
        std::string proposer_id;
    };

    struct Promise {
        uint64_t proposal_number;
        std::optional<Proposal> accepted_proposal;
    };

    std::mutex mutex_;
    std::string node_id_;
    uint64_t highest_proposal_number_{0};        // guarded by mutex_
    std::optional<Proposal> accepted_proposal_;  // guarded by mutex_
    std::set<std::string> quorum_nodes_;

public:
    // Phase 1 (Prepare): promise to ignore lower-numbered proposals
    std::optional<Promise> prepare(uint64_t proposal_number) {
        std::lock_guard<std::mutex> lock(mutex_);

        if (proposal_number > highest_proposal_number_) {
            highest_proposal_number_ = proposal_number;
            return Promise{proposal_number, accepted_proposal_};
        }
        return std::nullopt;
    }

    // Phase 2 (Accept): accept unless a higher-numbered promise was made
    bool accept(uint64_t proposal_number, const T& value) {
        std::lock_guard<std::mutex> lock(mutex_);

        if (proposal_number >= highest_proposal_number_) {
            highest_proposal_number_ = proposal_number;
            accepted_proposal_ = Proposal{proposal_number, value, node_id_};
            return true;
        }
        return false;
    }

    // Majority check: strictly more than half of the quorum members
    bool has_majority(const std::set<std::string>& responses) {
        return responses.size() > quorum_nodes_.size() / 2;
    }
};
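
As a hedged sketch of the proposer side (the in-process loop stands in for RPCs, and a full proposer would adopt the highest-numbered value returned in the promises, which is omitted here):

cpp
#include <cstdint>
#include <vector>

// Drive both phases against local acceptors; a value is chosen once a
// majority accepts it.
bool propose_value(std::vector<PaxosNode<int>>& acceptors, uint64_t n, int value) {
    // Phase 1: collect promises
    size_t promises = 0;
    for (auto& node : acceptors) {
        if (node.prepare(n)) ++promises;
    }
    if (promises <= acceptors.size() / 2) return false;  // no majority

    // Phase 2: ask the acceptors to accept the value
    size_t accepts = 0;
    for (auto& node : acceptors) {
        if (node.accept(n, value)) ++accepts;
    }
    return accepts > acceptors.size() / 2;
}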

The Raft Algorithm

Raft simplifies distributed consensus by electing a single leader that orders all log entries:

cpp
#include <atomic>
#include <chrono>
#include <cstdint>
#include <random>
#include <string>
#include <vector>

// Raft node states
enum class NodeState {
    FOLLOWER,
    CANDIDATE,
    LEADER
};

// A Raft log entry
template<typename T>
struct LogEntry {
    uint64_t term;
    uint64_t index;
    T command;
    std::chrono::system_clock::time_point timestamp;
};

template<typename T>
class RaftNode {
private:
    std::atomic<NodeState> state_{NodeState::FOLLOWER};
    std::atomic<uint64_t> current_term_{0};
    std::atomic<uint64_t> voted_for_{0};
    std::string leader_id_;  // std::atomic<std::string> is ill-formed; guard externally if shared
    uint64_t node_id_{0};

    std::vector<LogEntry<T>> log_;
    std::atomic<uint64_t> commit_index_{0};
    std::atomic<uint64_t> last_applied_{0};

    // Election timeout machinery
    std::chrono::milliseconds election_timeout_{150};
    std::chrono::system_clock::time_point last_heartbeat_;

public:
    // Heartbeat check: a follower converts to candidate on timeout
    void heartbeat_check() {
        auto now = std::chrono::system_clock::now();
        auto elapsed = std::chrono::duration_cast<std::chrono::milliseconds>(
            now - last_heartbeat_);

        if (elapsed > election_timeout_ && state_ == NodeState::FOLLOWER) {
            convert_to_candidate();
        }
    }

    // Convert to candidate and start an election
    void convert_to_candidate() {
        state_ = NodeState::CANDIDATE;
        current_term_++;
        voted_for_ = node_id_;  // vote for self

        // Reset the (randomized) election timeout
        election_timeout_ = generate_random_timeout();
        last_heartbeat_ = std::chrono::system_clock::now();

        // Ask peers for votes
        request_votes();
    }

    // RequestVote RPC payloads
    struct VoteRequest {
        uint64_t term;
        uint64_t candidate_id;
        uint64_t last_log_index;
        uint64_t last_log_term;
    };

    struct VoteResponse {
        uint64_t term;
        bool vote_granted;
    };

private:
    void request_votes() {
        // Placeholder: send VoteRequest RPCs to all peers and tally responses
    }

    // Randomized timeouts (150-300 ms) break election ties
    std::chrono::milliseconds generate_random_timeout() {
        std::random_device rd;
        std::mt19937 gen(rd());
        std::uniform_int_distribution<> dis(150, 300);
        return std::chrono::milliseconds(dis(gen));
    }
};
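
A minimal driver for the timeout path, as a hedged sketch (the polling loop and 10 ms period are illustrative, not from the original):

cpp
#include <atomic>
#include <chrono>
#include <thread>

// Hypothetical timer thread: poll the election timeout every 10 ms so a
// follower that stops hearing heartbeats converts to candidate.
void run_election_timer(RaftNode<int>& node, std::atomic<bool>& running) {
    while (running) {
        node.heartbeat_check();
        std::this_thread::sleep_for(std::chrono::milliseconds(10));
    }
}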

26.1.3 Distributed Transaction Theory

Two-Phase Commit (2PC)

The two-phase commit protocol guarantees atomicity for a distributed transaction: every participant either commits or aborts. Note that 2PC blocks if the coordinator fails between the two phases, which is what motivates three-phase commit and consensus-based commit protocols:

cpp
#include <atomic>
#include <future>
#include <string>
#include <vector>

// Two-phase commit coordinator. Protocol, network_client_, log_error() and
// handle_commit_failure() stand in for a concrete RPC layer, logger, and
// recovery path.
template<typename T>
class TwoPhaseCommitCoordinator {
private:
    enum class TransactionState {
        INITIAL,
        PREPARING,
        PREPARED,
        COMMITTING,
        COMMITTED,
        ABORTING,
        ABORTED
    };

    struct Participant {
        std::string node_id;
        std::string address;
        bool prepared{false};   // written only after its future resolves
        bool committed{false};
    };

    std::atomic<TransactionState> state_{TransactionState::INITIAL};
    std::vector<Participant> participants_;
    T transaction_data_;

public:
    // Phase 1: prepare
    bool prepare_phase() {
        state_ = TransactionState::PREPARING;

        std::vector<std::future<bool>> futures;
        for (auto& participant : participants_) {
            futures.push_back(std::async(std::launch::async,
                [this, &participant]() {
                    return send_prepare_request(participant);
                }));
        }

        // Wait for every participant's vote (on early exit, the remaining
        // std::async futures block in their destructors until done)
        bool all_prepared = true;
        for (size_t i = 0; i < futures.size(); ++i) {
            if (!futures[i].get()) {
                all_prepared = false;
                break;
            }
            participants_[i].prepared = true;
        }

        if (all_prepared) {
            state_ = TransactionState::PREPARED;
            return true;
        } else {
            state_ = TransactionState::ABORTING;
            return false;
        }
    }

    // Phase 2: commit
    bool commit_phase() {
        if (state_ != TransactionState::PREPARED) {
            return false;
        }

        state_ = TransactionState::COMMITTING;

        std::vector<std::future<bool>> futures;
        for (auto& participant : participants_) {
            futures.push_back(std::async(std::launch::async,
                [this, &participant]() {
                    return send_commit_request(participant);
                }));
        }

        // Wait for every participant to commit
        bool all_committed = true;
        for (size_t i = 0; i < futures.size(); ++i) {
            if (!futures[i].get()) {
                all_committed = false;
                // A prepared participant failed to commit: needs recovery
                handle_commit_failure(participants_[i]);
            } else {
                participants_[i].committed = true;
            }
        }

        state_ = all_committed ? TransactionState::COMMITTED :
                                 TransactionState::ABORTED;
        return all_committed;
    }

private:
    bool send_prepare_request(Participant& participant) {
        // Network round-trip for the prepare request
        try {
            auto response = network_client_.send_request(
                participant.address,
                Protocol::PREPARE,
                transaction_data_);

            return response.status == Protocol::Status::PREPARED;
        } catch (const std::exception& e) {
            log_error("Prepare request failed for participant " +
                      participant.node_id + ": " + e.what());
            return false;
        }
    }

    bool send_commit_request(Participant& participant) {
        // Network round-trip for the commit request
        try {
            auto response = network_client_.send_request(
                participant.address,
                Protocol::COMMIT,
                transaction_data_);

            return response.status == Protocol::Status::COMMITTED;
        } catch (const std::exception& e) {
            log_error("Commit request failed for participant " +
                      participant.node_id + ": " + e.what());
            return false;
        }
    }
};
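
A hedged driver for one transaction (assuming the coordinator was constructed with its participant list and payload):

cpp
// Commit only if every participant voted "prepared" in phase 1; otherwise
// the transaction aborts.
template<typename T>
bool run_transaction(TwoPhaseCommitCoordinator<T>& coordinator) {
    if (!coordinator.prepare_phase()) {
        return false;  // at least one participant refused or failed
    }
    return coordinator.commit_phase();
}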

26.2 High-Performance Network Server Architecture

26.2.1 Network I/O Model Theory

The Reactor Pattern

The Reactor pattern is the core design pattern behind high-performance network servers: a single demultiplexer waits for readiness events and dispatches them to registered handlers:

cpp
#include <atomic>
#include <cerrno>
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <stdexcept>
#include <vector>
#include <sys/epoll.h>
#include <unistd.h>

// A ready event delivered by the demultiplexer
struct Event {
    int fd;
    uint32_t events;
};

// Event demultiplexer interface
class EventDemultiplexer {
public:
    virtual ~EventDemultiplexer() = default;

    virtual void register_event(int fd, uint32_t events) = 0;
    virtual void remove_event(int fd) = 0;
    virtual std::vector<Event> wait_for_events(int timeout_ms = -1) = 0;
};

// Core of the Reactor pattern
template<typename Handler>
class Reactor {
public:
    // Event categories (public so callers can name them in register_handler)
    enum class EventType : uint32_t {
        READ = EPOLLIN,
        WRITE = EPOLLOUT,
        ERROR = EPOLLERR | EPOLLHUP
    };

private:
    std::unique_ptr<EventDemultiplexer> demultiplexer_;
    std::map<int, Handler> handlers_;
    std::mutex handlers_mutex_;
    std::atomic<bool> running_{false};

public:
    explicit Reactor(std::unique_ptr<EventDemultiplexer> demux)
        : demultiplexer_(std::move(demux)) {}

    // Register an event handler
    void register_handler(int fd, Handler handler, EventType event_type) {
        std::lock_guard<std::mutex> lock(handlers_mutex_);

        handlers_[fd] = std::move(handler);
        demultiplexer_->register_event(fd, static_cast<uint32_t>(event_type));
    }

    // Remove an event handler
    void remove_handler(int fd) {
        std::lock_guard<std::mutex> lock(handlers_mutex_);

        handlers_.erase(fd);
        demultiplexer_->remove_event(fd);
    }

    void stop() { running_ = false; }

    // Event loop
    void event_loop() {
        running_ = true;

        while (running_) {
            // Block until events are ready
            auto events = demultiplexer_->wait_for_events();

            // Dispatch each event to its handler
            for (const auto& event : events) {
                dispatch_event(event);
            }
        }
    }

private:
    void dispatch_event(const Event& event) {
        // Handlers run under the lock in this sketch
        std::lock_guard<std::mutex> lock(handlers_mutex_);

        auto it = handlers_.find(event.fd);
        if (it != handlers_.end()) {
            it->second(event);
        }
    }
};

// epoll-based implementation
class EpollDemultiplexer : public EventDemultiplexer {
private:
    int epoll_fd_;
    std::vector<struct epoll_event> events_;
    static constexpr int MAX_EVENTS = 1024;

public:
    EpollDemultiplexer() : events_(MAX_EVENTS) {
        epoll_fd_ = epoll_create1(EPOLL_CLOEXEC);
        if (epoll_fd_ == -1) {
            throw std::runtime_error("Failed to create epoll fd");
        }
    }

    ~EpollDemultiplexer() override {
        if (epoll_fd_ != -1) {
            close(epoll_fd_);
        }
    }

    void register_event(int fd, uint32_t events) override {
        struct epoll_event ev{};
        ev.events = events;
        ev.data.fd = fd;

        if (epoll_ctl(epoll_fd_, EPOLL_CTL_ADD, fd, &ev) == -1) {
            if (errno == EEXIST) {
                // Already registered: switch to modification
                if (epoll_ctl(epoll_fd_, EPOLL_CTL_MOD, fd, &ev) == -1) {
                    throw std::runtime_error("Failed to modify epoll event");
                }
            } else {
                throw std::runtime_error("Failed to add epoll event");
            }
        }
    }

    void remove_event(int fd) override {
        if (epoll_ctl(epoll_fd_, EPOLL_CTL_DEL, fd, nullptr) == -1) {
            throw std::runtime_error("Failed to remove epoll event");
        }
    }

    std::vector<Event> wait_for_events(int timeout_ms = -1) override {
        int num_events = epoll_wait(epoll_fd_, events_.data(),
                                    static_cast<int>(events_.size()), timeout_ms);

        if (num_events == -1) {
            if (errno == EINTR) {
                return {};  // interrupted by a signal: report no events
            }
            throw std::runtime_error("epoll_wait failed");
        }

        std::vector<Event> result;
        result.reserve(num_events);

        for (int i = 0; i < num_events; ++i) {
            Event event;
            event.fd = events_[i].data.fd;
            event.events = events_[i].events;
            result.push_back(event);
        }

        return result;
    }
};
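
A hedged usage sketch: watch standard input for readability (a server would register a listening socket and accept connections in the handler instead):

cpp
#include <functional>
#include <iostream>

int main() {
    using Handler = std::function<void(const Event&)>;

    Reactor<Handler> reactor(std::make_unique<EpollDemultiplexer>());

    // fd 0 is stdin; the lambda runs whenever it becomes readable.
    reactor.register_handler(0, [](const Event& ev) {
        std::cout << "fd " << ev.fd << " is readable\n";
    }, Reactor<Handler>::EventType::READ);

    reactor.event_loop();  // runs until stop() is called
}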

The Proactor Pattern

The Proactor pattern achieves higher throughput through asynchronous I/O: the kernel performs the I/O itself, and the application dispatches completion events rather than readiness events:

cpp
#include <aio.h>
#include <csignal>
#include <cstddef>
#include <functional>
#include <memory>
#include <stdexcept>
#include <atomic>

// AsyncOperationProcessor, CompletionHandlerFactory and AsyncWriteOperation
// are placeholders for the completion-queue machinery this sketch assumes.

// Async operation base class
template<typename T>
class AsyncOperation {
public:
    virtual ~AsyncOperation() = default;
    virtual void execute() = 0;
    virtual void complete() = 0;

protected:
    std::function<void(const T&)> completion_handler_;
};

// Async read operation backed by POSIX AIO
template<typename T>
class AsyncReadOperation : public AsyncOperation<T> {
private:
    int fd_;
    void* buffer_;
    size_t size_;
    aiocb* aio_cb_{nullptr};  // owned until the completion callback fires
    T result_;

public:
    AsyncReadOperation(int fd, void* buffer, size_t size,
                       std::function<void(const T&)> handler)
        : fd_(fd), buffer_(buffer), size_(size) {
        this->completion_handler_ = std::move(handler);
    }

    void execute() override {
        // Submit the asynchronous read to the kernel
        auto aio_cb = std::make_unique<aiocb>();
        *aio_cb = {};
        aio_cb->aio_fildes = fd_;
        aio_cb->aio_buf = buffer_;
        aio_cb->aio_nbytes = size_;
        aio_cb->aio_sigevent.sigev_notify = SIGEV_THREAD;
        aio_cb->aio_sigevent.sigev_notify_function = &async_completion_handler;
        aio_cb->aio_sigevent.sigev_value.sival_ptr = this;

        if (aio_read(aio_cb.get()) == -1) {
            throw std::runtime_error("Failed to submit async read");
        }

        // Keep the control block alive until the completion callback runs
        aio_cb_ = aio_cb.release();
    }

    void complete() override {
        // The read finished; invoke the completion handler
        if (this->completion_handler_) {
            this->completion_handler_(result_);
        }
    }

private:
    static void async_completion_handler(sigval_t sigval) {
        auto* operation = static_cast<AsyncReadOperation*>(sigval.sival_ptr);

        // Fetch the result of the finished read from its control block
        ssize_t bytes_read = aio_return(operation->aio_cb_);
        delete operation->aio_cb_;
        operation->aio_cb_ = nullptr;

        if (bytes_read == -1) {
            operation->result_ = T{};  // signal an error state
        } else {
            operation->result_ = T{bytes_read, operation->buffer_};
        }

        // Hand the operation to the processor's completion queue
        AsyncOperationProcessor::instance().mark_completed(operation);
    }
};

// Proactor core: submits async operations and dispatches completions
template<typename T>
class Proactor {
private:
    std::unique_ptr<AsyncOperationProcessor> processor_;
    std::unique_ptr<CompletionHandlerFactory> handler_factory_;
    std::atomic<bool> running_{false};

public:
    // Asynchronous read
    void async_read(int fd, void* buffer, size_t size,
                    std::function<void(const T&)> completion_handler) {
        auto operation = std::make_unique<AsyncReadOperation<T>>(
            fd, buffer, size, std::move(completion_handler));

        processor_->submit_operation(std::move(operation));
    }

    // Asynchronous write (AsyncWriteOperation mirrors the read operation,
    // using aio_write instead of aio_read)
    void async_write(int fd, const void* data, size_t size,
                     std::function<void(const T&)> completion_handler) {
        auto operation = std::make_unique<AsyncWriteOperation<T>>(
            fd, data, size, std::move(completion_handler));

        processor_->submit_operation(std::move(operation));
    }

    // Completion dispatch loop
    void event_loop() {
        running_ = true;

        while (running_) {
            // Collect operations whose I/O has finished
            auto completed_operations = processor_->get_completed_operations();

            // Invoke their completion handlers
            for (auto& operation : completed_operations) {
                operation->complete();
            }
        }
    }
};
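
To ground the pattern, here is a hedged, self-contained POSIX AIO example independent of the classes above (link with -lrt on older glibc; the file path is illustrative). It submits one read and waits for its completion; a Proactor would instead dispatch the completion to a handler:

cpp
#include <aio.h>
#include <fcntl.h>
#include <iostream>
#include <unistd.h>

int main() {
    int fd = open("/etc/hostname", O_RDONLY);
    if (fd == -1) return 1;

    char buffer[256] = {};
    aiocb cb{};
    cb.aio_fildes = fd;
    cb.aio_buf = buffer;
    cb.aio_nbytes = sizeof(buffer) - 1;

    // Submit the read, then block until it completes.
    if (aio_read(&cb) == -1) return 1;
    const aiocb* list[] = { &cb };
    aio_suspend(list, 1, nullptr);

    ssize_t n = aio_return(&cb);
    if (n > 0) std::cout.write(buffer, n);
    close(fd);
}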

26.2.2 Thread Pool Architecture

Work-Stealing Thread Pool

Work stealing improves thread-pool efficiency: each worker owns a deque of tasks and, when idle, steals from the tail of another worker's deque:

cpp
#include <atomic>
#include <cstdlib>
#include <deque>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <optional>
#include <thread>
#include <vector>

// Work-stealing thread pool. Tasks are type-erased to std::function<void()>;
// results and exceptions travel through the std::packaged_task/std::future
// pair created in submit(), so the queues themselves stay untyped.
class WorkStealingThreadPool {
private:
    using Task = std::function<void()>;

    struct WorkQueue {
        std::deque<Task> tasks;
        std::mutex mutex;

        // Pop from the front (owning thread)
        std::optional<Task> try_pop_front() {
            std::lock_guard<std::mutex> lock(mutex);
            if (tasks.empty()) {
                return std::nullopt;
            }
            Task task = std::move(tasks.front());
            tasks.pop_front();
            return task;
        }

        // Pop from the back (thieves)
        std::optional<Task> try_pop_back() {
            std::lock_guard<std::mutex> lock(mutex);
            if (tasks.empty()) {
                return std::nullopt;
            }
            Task task = std::move(tasks.back());
            tasks.pop_back();
            return task;
        }

        // Push to the front (owning thread: LIFO keeps caches warm)
        void push_front(Task task) {
            std::lock_guard<std::mutex> lock(mutex);
            tasks.push_front(std::move(task));
        }

        // Push to the back (external submissions)
        void push_back(Task task) {
            std::lock_guard<std::mutex> lock(mutex);
            tasks.push_back(std::move(task));
        }
    };

    std::vector<std::thread> threads_;
    std::vector<std::unique_ptr<WorkQueue>> work_queues_;
    std::atomic<bool> shutdown_{false};
    inline static thread_local WorkQueue* local_work_queue_ = nullptr;
    inline static thread_local size_t thread_index_ = 0;

public:
    explicit WorkStealingThreadPool(size_t num_threads =
                                    std::thread::hardware_concurrency()) {
        work_queues_.resize(num_threads);
        for (size_t i = 0; i < num_threads; ++i) {
            work_queues_[i] = std::make_unique<WorkQueue>();
        }

        threads_.reserve(num_threads);
        for (size_t i = 0; i < num_threads; ++i) {
            threads_.emplace_back([this, i]() {
                thread_index_ = i;
                local_work_queue_ = work_queues_[i].get();
                worker_thread();
            });
        }
    }

    ~WorkStealingThreadPool() {
        shutdown_ = true;
        for (auto& thread : threads_) {
            thread.join();
        }
    }

    // Submit a task; the returned future carries the result or exception
    template<typename F, typename... Args>
    auto submit(F&& f, Args&&... args) -> std::future<decltype(f(args...))> {
        using ReturnType = decltype(f(args...));

        auto task = std::make_shared<std::packaged_task<ReturnType()>>(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...));

        std::future<ReturnType> future = task->get_future();
        Task wrapped = [task]() { (*task)(); };

        if (local_work_queue_) {
            // Called from a pool thread: push to its own queue
            local_work_queue_->push_front(std::move(wrapped));
        } else {
            // External caller: push to a random queue
            auto* random_queue = work_queues_[
                std::rand() % work_queues_.size()].get();
            random_queue->push_back(std::move(wrapped));
        }

        return future;
    }

private:
    void worker_thread() {
        while (!shutdown_) {
            // Prefer local work
            if (auto task = local_work_queue_->try_pop_front()) {
                (*task)();
                continue;
            }

            // Otherwise try to steal from another queue
            if (auto task = steal_task()) {
                (*task)();
                continue;
            }

            // Nothing to do: yield the CPU
            std::this_thread::yield();
        }
    }

    std::optional<Task> steal_task() {
        // Scan the other queues, starting just past our own
        size_t num_queues = work_queues_.size();
        size_t start_index = (thread_index_ + 1) % num_queues;

        for (size_t i = 0; i < num_queues; ++i) {
            size_t queue_index = (start_index + i) % num_queues;

            if (queue_index == thread_index_) {
                continue;  // skip the local queue
            }

            // Steal from the victim's back to reduce contention with its owner
            if (auto task = work_queues_[queue_index]->try_pop_back()) {
                return task;
            }
        }

        return std::nullopt;
    }
};
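
A hedged usage sketch (the lambdas and values are illustrative):

cpp
#include <iostream>

int main() {
    WorkStealingThreadPool pool(4);

    // Results and exceptions come back through the returned futures.
    auto f1 = pool.submit([] { return 6 * 7; });
    auto f2 = pool.submit([](int a, int b) { return a + b; }, 2, 3);

    std::cout << f1.get() << " " << f2.get() << "\n";  // prints "42 5"
}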

Adaptive Thread Pool

An adaptive thread pool resizes itself based on observed load:

cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <condition_variable>
#include <cstdint>
#include <deque>
#include <functional>
#include <future>
#include <memory>
#include <mutex>
#include <queue>
#include <thread>
#include <vector>

// Adaptive thread pool: a background thread samples latency and queue-length
// metrics, grows the pool when tasks wait too long, and retires workers when
// load drops. Results travel through the packaged_task created in submit().
class AdaptiveThreadPool {
private:
    struct Task {
        std::function<void()> func;
        std::chrono::system_clock::time_point submit_time;
    };

    std::vector<std::thread> threads_;
    std::queue<Task> task_queue_;
    std::mutex queue_mutex_;
    std::condition_variable queue_cv_;

    std::atomic<bool> shutdown_{false};
    std::atomic<size_t> threads_to_retire_{0};  // pending graceful scale-downs
    std::atomic<size_t> active_threads_{0};
    std::atomic<size_t> total_tasks_{0};
    std::atomic<size_t> completed_tasks_{0};
    std::atomic<uint64_t> total_latency_ms_{0};  // summed over completed tasks

    // Sampled performance metrics
    struct PerformanceMetrics {
        double average_task_latency{0.0};  // ms, submit-to-finish
        double throughput{0.0};            // tasks/second
        double cpu_utilization{0.0};
        size_t queue_length{0};
        size_t completed_at_sample{0};
        std::chrono::system_clock::time_point timestamp;
    };

    std::deque<PerformanceMetrics> metrics_history_;
    std::mutex metrics_mutex_;

    // Tuning parameters
    struct Config {
        size_t min_threads = 2;
        size_t max_threads = std::thread::hardware_concurrency() * 2;
        size_t target_queue_length = 10;
        double target_latency_ms = 100.0;
        double scale_up_threshold = 1.5;   // grow when latency > 1.5x target
        double scale_down_threshold = 0.5; // shrink when latency < 0.5x target
        std::chrono::seconds adjustment_interval{30};
    };

    Config config_;
    std::thread adjustment_thread_;

public:
    explicit AdaptiveThreadPool(const Config& config = Config{})
        : config_(config) {
        // Start with the minimum number of workers
        for (size_t i = 0; i < config_.min_threads; ++i) {
            add_thread();
        }

        // Start the adjustment thread
        adjustment_thread_ = std::thread([this]() { adjustment_loop(); });
    }

    ~AdaptiveThreadPool() {
        shutdown_ = true;

        // Stop the adjuster first so no new workers appear during teardown
        if (adjustment_thread_.joinable()) {
            adjustment_thread_.join();
        }

        queue_cv_.notify_all();
        for (auto& thread : threads_) {
            if (thread.joinable()) thread.join();
        }
    }

    template<typename F, typename... Args>
    auto submit(F&& f, Args&&... args) -> std::future<decltype(f(args...))> {
        using ReturnType = decltype(f(args...));

        auto task = std::make_shared<std::packaged_task<ReturnType()>>(
            std::bind(std::forward<F>(f), std::forward<Args>(args)...));

        std::future<ReturnType> future = task->get_future();

        {
            std::lock_guard<std::mutex> lock(queue_mutex_);
            task_queue_.push(Task{[task]() { (*task)(); },
                                  std::chrono::system_clock::now()});
            total_tasks_++;
        }

        queue_cv_.notify_one();
        return future;
    }

private:
    void add_thread() {
        threads_.emplace_back([this]() { worker_thread(); });
    }

    void worker_thread() {
        active_threads_++;

        while (!shutdown_) {
            std::unique_lock<std::mutex> lock(queue_mutex_);

            // Wait for work, shutdown, or a retirement request
            queue_cv_.wait(lock, [this]() {
                return !task_queue_.empty() || shutdown_ ||
                       threads_to_retire_ > 0;
            });

            if (shutdown_) break;

            // Claim a retirement slot if the pool is shrinking
            size_t retire = threads_to_retire_.load();
            if (retire > 0 &&
                threads_to_retire_.compare_exchange_strong(retire, retire - 1)) {
                break;
            }

            if (!task_queue_.empty()) {
                Task task = std::move(task_queue_.front());
                task_queue_.pop();
                lock.unlock();

                // Run the task; the packaged_task captures any exception
                task.func();

                auto latency = std::chrono::duration_cast<std::chrono::milliseconds>(
                    std::chrono::system_clock::now() - task.submit_time);
                record_completion(latency);
            }
        }

        active_threads_--;
    }

    void adjustment_loop() {
        while (!shutdown_) {
            // Sleep in short slices so shutdown stays prompt
            auto remaining = std::chrono::duration_cast<std::chrono::milliseconds>(
                config_.adjustment_interval);
            while (remaining.count() > 0 && !shutdown_) {
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
                remaining -= std::chrono::milliseconds(100);
            }

            if (shutdown_) break;
            adjust_thread_count();
        }
    }

    void adjust_thread_count() {
        auto metrics = calculate_current_metrics();

        {
            std::lock_guard<std::mutex> lock(metrics_mutex_);
            metrics_history_.push_back(metrics);
            if (metrics_history_.size() > 100) {
                metrics_history_.pop_front();  // bound the history
            }
        }

        size_t current_threads = active_threads_.load();
        size_t target_threads = current_threads;

        if (metrics.average_task_latency >
            config_.target_latency_ms * config_.scale_up_threshold) {
            // Latency too high: scale up
            target_threads = std::min(current_threads * 2, config_.max_threads);
        } else if (metrics.average_task_latency <
                       config_.target_latency_ms * config_.scale_down_threshold &&
                   metrics.queue_length < config_.target_queue_length) {
            // Latency low and queue short: scale down
            target_threads = std::max(current_threads / 2, config_.min_threads);
        }

        if (target_threads > current_threads) {
            for (size_t i = current_threads; i < target_threads; ++i) {
                add_thread();
            }
        } else if (target_threads < current_threads) {
            // Let the surplus workers exit after their current task
            threads_to_retire_ += current_threads - target_threads;
            queue_cv_.notify_all();
        }
    }

    PerformanceMetrics calculate_current_metrics() {
        PerformanceMetrics metrics;
        metrics.timestamp = std::chrono::system_clock::now();
        metrics.completed_at_sample = completed_tasks_.load();

        {
            std::lock_guard<std::mutex> lock(queue_mutex_);
            metrics.queue_length = task_queue_.size();
        }

        // Average latency over all tasks completed so far
        if (metrics.completed_at_sample > 0) {
            metrics.average_task_latency =
                static_cast<double>(total_latency_ms_.load()) /
                metrics.completed_at_sample;
        }

        // Throughput: tasks completed since the previous sample
        {
            std::lock_guard<std::mutex> lock(metrics_mutex_);
            if (!metrics_history_.empty()) {
                const auto& prev = metrics_history_.back();
                auto window = std::chrono::duration_cast<std::chrono::seconds>(
                    metrics.timestamp - prev.timestamp).count();
                if (window > 0) {
                    metrics.throughput = static_cast<double>(
                        metrics.completed_at_sample - prev.completed_at_sample) /
                        window;
                }
            }
        }

        // Rough CPU-utilization estimate
        metrics.cpu_utilization = static_cast<double>(active_threads_) /
                                  std::thread::hardware_concurrency();

        return metrics;
    }

    void record_completion(std::chrono::milliseconds latency) {
        completed_tasks_++;
        total_latency_ms_ += static_cast<uint64_t>(latency.count());
    }
};
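
A hedged usage sketch:

cpp
#include <iostream>

int main() {
    AdaptiveThreadPool pool;  // starts at min_threads and adapts from there

    auto f = pool.submit([](int n) { return n * n; }, 12);
    std::cout << f.get() << "\n";  // prints 144
}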

26.3 Memory Management Subsystem

26.3.1 Layered Memory Management Architecture

Global Memory Manager
cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cmath>
#include <cstdlib>
#include <iostream>
#include <map>
#include <memory>
#include <mutex>
#include <new>
#include <stdexcept>
#include <string>
#include <vector>

// Global memory manager. Block metadata lives outside the pooled memory,
// and free/allocated lists are kept per pool. Coalescing below assumes list
// neighbors are also address neighbors, which a production allocator would
// enforce with boundary tags; this is a teaching sketch.
class GlobalMemoryManager {
private:
    // Block descriptor
    struct MemoryBlock {
        void* address;
        size_t size;
        MemoryBlock* next;
        MemoryBlock* prev;
        std::atomic<bool> is_free{true};
        std::atomic<size_t> reference_count{0};
        std::chrono::system_clock::time_point allocation_time;
        std::string allocation_site;  // where it was allocated (for debugging)
    };

    // Pool descriptor
    struct MemoryPool {
        void* base_address;
        size_t total_size;
        size_t used_size{0};
        MemoryBlock* free_list{nullptr};
        MemoryBlock* allocated_list{nullptr};
        std::mutex pool_mutex;
        std::atomic<size_t> allocation_count{0};
        std::atomic<size_t> deallocation_count{0};
    };

    std::vector<std::unique_ptr<MemoryPool>> pools_;
    std::map<size_t, size_t> size_class_map_;  // size-class lookup (reserved for a fast path)

    // Running statistics (atomics; see StatisticsSnapshot for the copyable view)
    struct MemoryStatistics {
        std::atomic<size_t> total_allocated{0};
        std::atomic<size_t> total_deallocated{0};
        std::atomic<size_t> current_usage{0};
        std::atomic<size_t> peak_usage{0};
        std::atomic<size_t> allocation_count{0};
        std::atomic<size_t> deallocation_count{0};
        std::atomic<double> average_allocation_size{0.0};
        std::atomic<double> fragmentation_ratio{0.0};
    };

    MemoryStatistics stats_;

    // Tuning parameters
    struct Config {
        size_t pool_size = 1024 * 1024 * 64;  // 64 MB per pool
        size_t max_pools = 16;
        size_t small_object_threshold = 256;
        size_t large_object_threshold = 1024 * 1024;  // 1 MB
        bool enable_statistics = true;
        bool enable_debug_info = false;
        double fragmentation_threshold = 0.2;  // 20%
    };

    Config config_;

public:
    // Copyable snapshot of the statistics
    struct StatisticsSnapshot {
        size_t total_allocated;
        size_t total_deallocated;
        size_t current_usage;
        size_t peak_usage;
        size_t allocation_count;
        size_t deallocation_count;
        double average_allocation_size;
    };

    explicit GlobalMemoryManager(const Config& config = Config{})
        : config_(config) {
        initialize_size_classes();
        add_memory_pool();
    }

    ~GlobalMemoryManager() {
        if (config_.enable_statistics) {
            print_statistics();
        }

        // Report leaks, then release metadata and the pools themselves
        check_memory_leaks();
        for (auto& pool : pools_) {
            release_block_list(pool->free_list);
            release_block_list(pool->allocated_list);
            std::free(pool->base_address);
        }
    }

    // Allocate memory
    void* allocate(size_t size, const std::string& allocation_site = "") {
        if (size == 0) return nullptr;

        // Round up to an 8-byte boundary
        size = align_size(size);

        // Find a pool with room
        auto* pool = find_suitable_pool(size);
        if (!pool) {
            pool = add_memory_pool();
        }

        // Carve the block out of the pool
        auto* block = pool ? allocate_from_pool(pool, size, allocation_site)
                           : nullptr;
        if (!block) {
            // First choice failed: try the remaining pools
            for (auto& p : pools_) {
                if (p.get() != pool) {
                    block = allocate_from_pool(p.get(), size, allocation_site);
                    if (block) break;
                }
            }

            // Everything failed: grow by one pool if allowed
            if (!block && pools_.size() < config_.max_pools) {
                pool = add_memory_pool();
                block = allocate_from_pool(pool, size, allocation_site);
            }
        }

        if (block) {
            update_statistics_allocation(size);
            return block->address;
        }

        throw std::bad_alloc();
    }

    // Free memory
    void deallocate(void* ptr) {
        if (!ptr) return;

        // Locate the pool that owns this pointer
        for (auto& pool : pools_) {
            auto* block = find_block_in_pool(pool.get(), ptr);
            if (block) {
                size_t freed = block->size;  // coalescing may change block->size
                deallocate_from_pool(pool.get(), block);
                update_statistics_deallocation(freed);
                return;
            }
        }

        // No pool owns it: probably an invalid pointer
        throw std::invalid_argument("Invalid pointer for deallocation");
    }

    // Usage statistics
    StatisticsSnapshot get_statistics() const {
        return {stats_.total_allocated.load(), stats_.total_deallocated.load(),
                stats_.current_usage.load(), stats_.peak_usage.load(),
                stats_.allocation_count.load(), stats_.deallocation_count.load(),
                stats_.average_allocation_size.load()};
    }

    // Fragmentation ratio
    double get_fragmentation_ratio() const {
        return calculate_fragmentation();
    }

    // Compact the pools
    void defragment() {
        for (auto& pool : pools_) {
            defragment_pool(pool.get());
        }
    }

private:
    void initialize_size_classes() {
        // Build the size-class table with 25% geometric growth
        size_t size = 8;
        size_t class_index = 0;

        while (size <= config_.large_object_threshold) {
            size_class_map_[size] = class_index++;
            size = static_cast<size_t>(std::ceil(size * 1.25));
        }
    }

    size_t align_size(size_t size) {
        return (size + 7) & ~static_cast<size_t>(7);  // 8-byte alignment
    }

    static void release_block_list(MemoryBlock* block) {
        while (block) {
            MemoryBlock* next = block->next;
            delete block;
            block = next;
        }
    }

    MemoryPool* find_suitable_pool(size_t size) {
        // First pool with enough free space wins
        for (auto& pool : pools_) {
            std::lock_guard<std::mutex> lock(pool->pool_mutex);

            size_t free_space = pool->total_size - pool->used_size;
            if (free_space >= size) {
                return pool.get();
            }
        }
        return nullptr;
    }

    MemoryPool* add_memory_pool() {
        if (pools_.size() >= config_.max_pools) {
            return nullptr;
        }

        auto pool = std::make_unique<MemoryPool>();
        pool->total_size = config_.pool_size;
        pool->base_address = std::aligned_alloc(4096, config_.pool_size);

        if (!pool->base_address) {
            throw std::bad_alloc();
        }

        // The whole pool starts out as one big free block
        auto* initial_block = new MemoryBlock{
            pool->base_address,
            config_.pool_size,
            nullptr,
            nullptr,
            true,
            0,
            std::chrono::system_clock::now(),
            ""
        };

        pool->free_list = initial_block;
        pools_.push_back(std::move(pool));

        return pools_.back().get();
    }

    MemoryBlock* allocate_from_pool(MemoryPool* pool, size_t size,
                                    const std::string& allocation_site) {
        std::lock_guard<std::mutex> lock(pool->pool_mutex);

        // First-fit search of the free list
        MemoryBlock* prev = nullptr;
        MemoryBlock* current = pool->free_list;

        while (current) {
            if (current->size >= size) {
                // Found a block
                if (current->size >= size + 64) {
                    // Too big: split off the tail as a new free block
                    split_block(current, size);
                }

                // Unlink from the free list
                if (prev) {
                    prev->next = current->next;
                } else {
                    pool->free_list = current->next;
                }

                // Link into the allocated list
                current->next = pool->allocated_list;
                current->prev = nullptr;
                if (pool->allocated_list) {
                    pool->allocated_list->prev = current;
                }
                pool->allocated_list = current;

                // Record allocation details
                current->is_free = false;
                current->allocation_time = std::chrono::system_clock::now();
                current->allocation_site = allocation_site;

                pool->used_size += current->size;
                pool->allocation_count++;

                return current;
            }

            prev = current;
            current = current->next;
        }

        return nullptr;  // no fit in this pool
    }

    void split_block(MemoryBlock* block, size_t requested_size) {
        // Metadata lives outside the pool, so the remainder starts right
        // after the requested bytes
        size_t remaining_size = block->size - requested_size;

        if (remaining_size >= 64) {  // keep only remainders worth tracking
            auto* new_block = new MemoryBlock{
                static_cast<char*>(block->address) + requested_size,
                remaining_size,
                block->next,
                block,
                true,
                0,
                std::chrono::system_clock::now(),
                ""
            };

            // Shrink the original block and chain in the remainder
            block->size = requested_size;
            block->next = new_block;
        }
    }

    MemoryBlock* find_block_in_pool(MemoryPool* pool, void* ptr) {
        std::lock_guard<std::mutex> lock(pool->pool_mutex);

        MemoryBlock* current = pool->allocated_list;
        while (current) {
            if (current->address == ptr) {
                return current;
            }
            current = current->next;
        }

        return nullptr;
    }

    void deallocate_from_pool(MemoryPool* pool, MemoryBlock* block) {
        std::lock_guard<std::mutex> lock(pool->pool_mutex);

        size_t freed = block->size;

        // Unlink from the allocated list
        if (block->prev) {
            block->prev->next = block->next;
        } else {
            pool->allocated_list = block->next;
        }
        if (block->next) {
            block->next->prev = block->prev;
        }

        // Mark free
        block->is_free = true;
        block->reference_count = 0;

        // Merge with adjacent free blocks; accounting uses the pre-merge
        // size because coalescing may enlarge (or delete) this block
        coalesce_blocks(pool, block);

        pool->used_size -= freed;
        pool->deallocation_count++;
    }

    void coalesce_blocks(MemoryPool* pool, MemoryBlock* block) {
        // NOTE: this sketch treats list neighbors as address neighbors;
        // real allocators find physical neighbors via boundary tags.

        // Merge backward
        if (block->prev && block->prev->is_free) {
            auto* prev_block = block->prev;
            prev_block->size += block->size;
            prev_block->next = block->next;

            if (block->next) {
                block->next->prev = prev_block;
            }

            // Drop the absorbed block's metadata
            delete block;
            block = prev_block;
        }

        // Merge forward
        if (block->next && block->next->is_free) {
            auto* next_block = block->next;
            block->size += next_block->size;
            block->next = next_block->next;

            if (next_block->next) {
                next_block->next->prev = block;
            }

            // Drop the absorbed block's metadata
            delete next_block;
        }

        // Push the merged block onto the free list
        block->next = pool->free_list;
        block->prev = nullptr;
        if (pool->free_list) {
            pool->free_list->prev = block;
        }
        pool->free_list = block;
    }

    void update_statistics_allocation(size_t size) {
        if (!config_.enable_statistics) return;

        stats_.total_allocated += size;
        stats_.current_usage += size;
        stats_.allocation_count++;

        // Track peak usage with a CAS loop
        size_t current_usage = stats_.current_usage.load();
        size_t peak_usage = stats_.peak_usage.load();
        while (current_usage > peak_usage &&
               !stats_.peak_usage.compare_exchange_weak(peak_usage, current_usage)) {
            // spin until the peak is updated
        }

        // Running average of allocation size (approximate under concurrency)
        double current_avg = stats_.average_allocation_size.load();
        size_t alloc_count = stats_.allocation_count.load();
        double new_avg = (current_avg * (alloc_count - 1) + size) / alloc_count;
        stats_.average_allocation_size = new_avg;
    }

    void update_statistics_deallocation(size_t size) {
        if (!config_.enable_statistics) return;

        stats_.total_deallocated += size;
        stats_.current_usage -= size;
        stats_.deallocation_count++;
    }

    double calculate_fragmentation() const {
        size_t total_free = 0;
        size_t largest_free = 0;

        for (const auto& pool : pools_) {
            std::lock_guard<std::mutex> lock(pool->pool_mutex);

            MemoryBlock* current = pool->free_list;
            while (current) {
                total_free += current->size;
                largest_free = std::max(largest_free, current->size);
                current = current->next;
            }
        }

        if (total_free == 0) return 0.0;

        // fragmentation = 1 - (largest free block / total free space)
        return 1.0 - (static_cast<double>(largest_free) / total_free);
    }

    void defragment_pool(MemoryPool* pool) {
        std::lock_guard<std::mutex> lock(pool->pool_mutex);

        // Moving live data safely would require updating every outstanding
        // pointer, so only free-block coalescing is sketched here
        size_t free_block_count = 0;
        MemoryBlock* current = pool->free_list;
        while (current) {
            ++free_block_count;
            current = current->next;
        }

        if (free_block_count <= 1) return;  // nothing to merge

        coalesce_all_free_blocks(pool);
    }

    void coalesce_all_free_blocks(MemoryPool* pool) {
        // Merging every free block into one region requires relocating live
        // allocations; this is a conceptual placeholder only
        if (!pool->free_list) return;

        // Total up the free space that a full compaction could recover
        size_t total_free_size = 0;
        MemoryBlock* current = pool->free_list;
        while (current) {
            total_free_size += current->size;
            current = current->next;
        }

        // Rebuilding a single free block of total_free_size bytes would go
        // here, once relocation is handled
        (void)total_free_size;
    }

    void check_memory_leaks() const {
        if (!config_.enable_debug_info) return;

        size_t total_leaks = 0;
        size_t leaked_memory = 0;

        for (const auto& pool : pools_) {
            std::lock_guard<std::mutex> lock(pool->pool_mutex);

            // Anything still on the allocated list at teardown is a leak
            MemoryBlock* current = pool->allocated_list;
            while (current) {
                total_leaks++;
                leaked_memory += current->size;

                std::cout << "Memory leak detected: "
                          << "Address: " << current->address << ", "
                          << "Size: " << current->size << ", "
                          << "Allocation site: " << current->allocation_site
                          << std::endl;
                current = current->next;
            }
        }

        if (total_leaks > 0) {
            std::cerr << "WARNING: " << total_leaks << " memory leaks detected, "
                      << "total leaked memory: " << leaked_memory << " bytes"
                      << std::endl;
        }
    }

    void print_statistics() const {
        std::cout << "=== Memory Manager Statistics ===" << std::endl;
        std::cout << "Total allocated: " << stats_.total_allocated << " bytes" << std::endl;
        std::cout << "Total deallocated: " << stats_.total_deallocated << " bytes" << std::endl;
        std::cout << "Current usage: " << stats_.current_usage << " bytes" << std::endl;
        std::cout << "Peak usage: " << stats_.peak_usage << " bytes" << std::endl;
        std::cout << "Allocation count: " << stats_.allocation_count << std::endl;
        std::cout << "Deallocation count: " << stats_.deallocation_count << std::endl;
        std::cout << "Average allocation size: " << stats_.average_allocation_size << " bytes" << std::endl;
        std::cout << "Fragmentation ratio: " << (calculate_fragmentation() * 100) << "%" << std::endl;
        std::cout << "Number of pools: " << pools_.size() << std::endl;
        std::cout << "================================" << std::endl;
    }
};
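
A hedged usage sketch (the size and the allocation-site tag are illustrative):

cpp
#include <cstring>
#include <iostream>

int main() {
    GlobalMemoryManager manager;

    // Borrow a kilobyte, use it, and hand it back.
    void* buffer = manager.allocate(1024, "main:demo");
    std::memset(buffer, 0, 1024);
    manager.deallocate(buffer);

    std::cout << "fragmentation: "
              << manager.get_fragmentation_ratio() * 100 << "%\n";
}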

26.3.2 Object Pool Manager

Generic Object Pool
cpp
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstddef>
#include <iostream>
#include <memory>
#include <mutex>
#include <new>
#include <thread>
#include <vector>

// Generic object pool. Slots hold raw, properly aligned storage; objects
// are placement-constructed on acquire() and destroyed on release().
template<typename T>
class ObjectPool {
private:
    struct PooledObject {
        alignas(T) std::byte storage[sizeof(T)];  // must stay the first member
        std::atomic<bool> in_use{false};
        std::chrono::system_clock::time_point last_used;
        PooledObject* next{nullptr};

        T* get_object() {
            return reinterpret_cast<T*>(storage);
        }

        const T* get_object() const {
            return reinterpret_cast<const T*>(storage);
        }
    };

    std::vector<std::unique_ptr<PooledObject>> objects_;
    std::mutex growth_mutex_;  // guards objects_ during (pre)allocation
    std::atomic<PooledObject*> free_list_{nullptr};
    std::atomic<size_t> total_objects_{0};
    std::atomic<size_t> used_objects_{0};

    // Tuning parameters
    struct Config {
        size_t initial_size = 1024;
        size_t max_size = 10000;
        std::chrono::seconds object_ttl{300};  // 5-minute TTL
        bool enable_statistics = true;
        bool enable_garbage_collection = true;
        double expansion_factor = 2.0;
    };

    Config config_;

    // Running statistics
    struct Statistics {
        std::atomic<size_t> total_allocations{0};
        std::atomic<size_t> total_deallocations{0};
        std::atomic<size_t> cache_hits{0};
        std::atomic<size_t> cache_misses{0};
        std::atomic<size_t> expansions{0};
        std::atomic<double> hit_rate{0.0};
        std::atomic<double> utilization_rate{0.0};
    };

    Statistics stats_;

    // Garbage collector
    std::thread gc_thread_;
    std::atomic<bool> gc_running_{false};

public:
    // Copyable snapshot of the statistics
    struct StatisticsSnapshot {
        size_t total_allocations;
        size_t total_deallocations;
        size_t cache_hits;
        size_t cache_misses;
        size_t expansions;
        double hit_rate;
        double utilization_rate;
    };

    explicit ObjectPool(const Config& config = Config{})
        : config_(config) {
        // Pre-populate the pool
        preallocate_objects(config_.initial_size);

        if (config_.enable_garbage_collection) {
            start_garbage_collector();
        }
    }

    ~ObjectPool() {
        if (config_.enable_garbage_collection) {
            stop_garbage_collector();
        }

        if (config_.enable_statistics) {
            print_statistics();
        }
    }

    // Borrow an object
    template<typename... Args>
    T* acquire(Args&&... args) {
        // Try the free list first
        PooledObject* obj = try_get_free_object();

        if (!obj) {
            // Nothing free: try to grow the pool
            if (!expand_pool()) {
                return nullptr;  // pool exhausted
            }
            obj = try_get_free_object();
        }

        if (obj) {
            // Placement-construct the object in the slot
            T* object = new (obj->get_object()) T(std::forward<Args>(args)...);
            obj->in_use = true;
            obj->last_used = std::chrono::system_clock::now();

            used_objects_++;
            update_statistics_allocation();

            return object;
        }

        return nullptr;
    }

    // Return an object
    void release(T* object) {
        if (!object) return;

        // Recover the owning slot
        PooledObject* pooled_obj = find_pooled_object(object);

        // Destroy the object in place
        object->~T();

        // Hand the slot back to the free list
        return_to_pool(pooled_obj);

        used_objects_--;
        update_statistics_deallocation();
    }

    // Statistics snapshot
    StatisticsSnapshot get_statistics() const {
        return {stats_.total_allocations.load(), stats_.total_deallocations.load(),
                stats_.cache_hits.load(), stats_.cache_misses.load(),
                stats_.expansions.load(), stats_.hit_rate.load(),
                stats_.utilization_rate.load()};
    }

    // Fraction of slots currently in use
    double get_utilization_rate() const {
        size_t total = total_objects_.load();
        size_t used = used_objects_.load();
        return total > 0 ? static_cast<double>(used) / total : 0.0;
    }

    // Trigger a garbage-collection pass manually
    void collect_garbage() {
        perform_garbage_collection();
    }

private:
    void preallocate_objects(size_t count) {
        std::lock_guard<std::mutex> lock(growth_mutex_);
        objects_.reserve(objects_.size() + count);

        for (size_t i = 0; i < count; ++i) {
            auto obj = std::make_unique<PooledObject>();
            obj->last_used = std::chrono::system_clock::now();

            // Push onto the lock-free free list
            obj->next = free_list_.load();
            while (!free_list_.compare_exchange_weak(obj->next, obj.get())) {
                // spin until the push succeeds
            }

            objects_.push_back(std::move(obj));
        }

        total_objects_ += count;
    }

    PooledObject* try_get_free_object() {
        // Lock-free pop; ABA is ignored in this sketch (a production pool
        // would use tagged pointers or hazard pointers)
        PooledObject* obj = free_list_.load();

        while (obj) {
            PooledObject* next = obj->next;

            if (free_list_.compare_exchange_weak(obj, next)) {
                if (!obj->in_use) {
                    stats_.cache_hits++;
                    return obj;
                }
                // Inconsistent state: push it back and keep looking
                return_object_to_free_list(obj);
            }

            // CAS failed or slot unusable: re-read the head
            obj = free_list_.load();
        }

        stats_.cache_misses++;
        return nullptr;
    }

    bool expand_pool() {
        size_t current_size = total_objects_.load();
        size_t new_size = std::min(
            static_cast<size_t>(current_size * config_.expansion_factor),
            config_.max_size
        );

        if (new_size <= current_size) {
            return false;  // already at the cap
        }

        preallocate_objects(new_size - current_size);

        stats_.expansions++;
        return true;
    }

    PooledObject* find_pooled_object(T* object) {
        // storage is the first member of the (standard-layout) PooledObject,
        // so the T* handed out by acquire() also addresses its slot
        return reinterpret_cast<PooledObject*>(object);
    }

    void return_to_pool(PooledObject* obj) {
        obj->in_use = false;
        return_object_to_free_list(obj);
    }

    void return_object_to_free_list(PooledObject* obj) {
        obj->next = free_list_.load();
        while (!free_list_.compare_exchange_weak(obj->next, obj)) {
            // spin until the push succeeds
        }
    }

    void start_garbage_collector() {
        gc_running_ = true;
        gc_thread_ = std::thread([this]() {
            garbage_collection_loop();
        });
    }

    void stop_garbage_collector() {
        gc_running_ = false;
        if (gc_thread_.joinable()) {
            gc_thread_.join();
        }
    }

    void garbage_collection_loop() {
        while (gc_running_) {
            // Sleep in short slices (one pass roughly per minute) so
            // shutdown stays prompt
            for (int i = 0; i < 600 && gc_running_; ++i) {
                std::this_thread::sleep_for(std::chrono::milliseconds(100));
            }

            if (gc_running_) {
                perform_garbage_collection();
            }
        }
    }

    void perform_garbage_collection() {
        auto now = std::chrono::system_clock::now();

        // Look for slots idle past their TTL
        std::lock_guard<std::mutex> lock(growth_mutex_);
        for (auto& obj : objects_) {
            if (!obj->in_use) {
                auto elapsed = std::chrono::duration_cast<std::chrono::seconds>(
                    now - obj->last_used);

                if (elapsed > config_.object_ttl) {
                    // Slot expired: a fuller implementation could reclaim it
                }
            }
        }

        // Shrink when utilization is very low
        double utilization = get_utilization_rate();
        if (utilization < 0.1 && total_objects_ > config_.initial_size) {
            // Under 10% utilization: consider shrinking
            shrink_pool();
        }
    }

    void shrink_pool() {
        // Shrinking must not disturb slots that are in use; only the target
        // size is sketched here
        size_t target_size = std::max(
            static_cast<size_t>(total_objects_ * 0.8),
            config_.initial_size
        );
        (void)target_size;

        // Concretely this would unlink idle slots from the free list and
        // release their memory
    }

    void update_statistics_allocation() {
        if (!config_.enable_statistics) return;

        stats_.total_allocations++;
        update_hit_rate();
        update_utilization_rate();
    }

    void update_statistics_deallocation() {
        if (!config_.enable_statistics) return;

        stats_.total_deallocations++;
        update_utilization_rate();
    }

    void update_hit_rate() {
        size_t hits = stats_.cache_hits.load();
        size_t misses = stats_.cache_misses.load();
        size_t total = hits + misses;

        if (total > 0) {
            stats_.hit_rate = static_cast<double>(hits) / total;
        }
    }

    void update_utilization_rate() {
        stats_.utilization_rate = get_utilization_rate();
    }

    void print_statistics() const {
        std::cout << "=== Object Pool Statistics ===" << std::endl;
        std::cout << "Total objects: " << total_objects_ << std::endl;
        std::cout << "Used objects: " << used_objects_ << std::endl;
        std::cout << "Utilization rate: " << (get_utilization_rate() * 100) << "%" << std::endl;
        std::cout << "Total allocations: " << stats_.total_allocations << std::endl;
        std::cout << "Total deallocations: " << stats_.total_deallocations << std::endl;
        std::cout << "Cache hits: " << stats_.cache_hits << std::endl;
        std::cout << "Cache misses: " << stats_.cache_misses << std::endl;
        std::cout << "Hit rate: " << (stats_.hit_rate * 100) << "%" << std::endl;
        std::cout << "Pool expansions: " << stats_.expansions << std::endl;
        std::cout << "===============================" << std::endl;
    }
};
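
A hedged usage sketch:

cpp
#include <iostream>
#include <string>

int main() {
    ObjectPool<std::string> pool;

    // acquire() placement-constructs in a pooled slot; release() destroys
    // the object and recycles the slot.
    std::string* s = pool.acquire("hello pool");
    if (s) {
        std::cout << *s << "\n";
        pool.release(s);
    }
}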

26.4 Concurrency Processing Framework

26.4.1 An Actor-Model Implementation

A Basic Actor Framework
cpp 复制代码
// Actor模型核心实现
template<typename Message>
class ActorSystem {
public:
    using ActorId = uint64_t;
    using MessageHandler = std::function<void(const Message&)>;
    
private:
    struct ActorInfo {
        std::string name;
        std::queue<Message> mailbox;
        std::mutex mailbox_mutex;
        std::condition_variable mailbox_cv;
        MessageHandler handler;
        std::atomic<bool> active{true};
        std::thread actor_thread;
        std::chrono::system_clock::time_point creation_time;
        std::atomic<size_t> message_count{0};
    };
    
    std::unordered_map<ActorId, std::unique_ptr<ActorInfo>> actors_;
    std::atomic<ActorId> next_actor_id_{1};
    std::mutex actors_mutex_;
    
    // 系统统计
    struct SystemStatistics {
        std::atomic<size_t> total_actors{0};
        std::atomic<size_t> active_actors{0};
        std::atomic<size_t> total_messages{0};
        std::atomic<size_t> processed_messages{0};
        std::atomic<double> average_message_latency{0.0};
        std::atomic<size_t> dead_letter_count{0};
    };
    
    SystemStatistics stats_;
    
    // 死信队列
    struct DeadLetter {
        ActorId recipient;
        Message message;
        std::chrono::system_clock::time_point timestamp;
        std::string reason;
    };
    
    std::queue<DeadLetter> dead_letter_queue_;
    std::mutex dead_letter_mutex_;
    
public:
    ~ActorSystem() {
        shutdown();
    }
    
    // 创建Actor
    ActorId create_actor(const std::string& name, MessageHandler handler) {
        ActorId id = next_actor_id_++;
        
        auto actor_info = std::make_unique<ActorInfo>();
        actor_info->name = name;
        actor_info->handler = std::move(handler);
        actor_info->creation_time = std::chrono::system_clock::now();
        
        // 启动Actor线程
        actor_info->actor_thread = std::thread([this, id, actor_info_ptr = actor_info.get()]() {
            actor_loop(id, actor_info_ptr);
        });
        
        {
            std::lock_guard<std::mutex> lock(actors_mutex_);
            actors_[id] = std::move(actor_info);
        }
        
        stats_.total_actors++;
        stats_.active_actors++;
        
        return id;
    }
    
    // 发送消息
    void send_message(ActorId recipient, const Message& message) {
        stats_.total_messages++;
        
        std::shared_ptr<ActorInfo> actor_info;
        {
            std::lock_guard<std::mutex> lock(actors_mutex_);
            auto it = actors_.find(recipient);
            if (it != actors_.end() && it->second->active) {
                actor_info = std::shared_ptr<ActorInfo>(it->second.get(), 
                    [](ActorInfo*) {}); // 不拥有所有权
            }
        }
        
        if (actor_info) {
            // 投递消息到邮箱
            {
                std::lock_guard<std::mutex> lock(actor_info->mailbox_mutex);
                actor_info->mailbox.push(message);
            }
            actor_info->mailbox_cv.notify_one();
        } else {
            // Actor不存在或已停止,发送到死信队列
            handle_dead_letter(recipient, message, "Actor not found or inactive");
        }
    }
    
    // 广播消息给所有Actor
    void broadcast_message(const Message& message) {
        std::vector<ActorId> actor_ids;
        {
            std::lock_guard<std::mutex> lock(actors_mutex_);
            for (const auto& [id, actor_info] : actors_) {
                if (actor_info->active) {
                    actor_ids.push_back(id);
                }
            }
        }
        
        for (ActorId id : actor_ids) {
            send_message(id, message);
        }
    }
    
    // Stop a specific actor (must not be called from that actor's own handler)
    void stop_actor(ActorId id) {
        std::unique_ptr<ActorInfo> actor_info;
        {
            std::lock_guard<std::mutex> lock(actors_mutex_);
            auto it = actors_.find(id);
            if (it == actors_.end()) {
                return;
            }
            actor_info = std::move(it->second);
            actors_.erase(it);
            stats_.active_actors--;
        }
        
        actor_info->active = false;
        actor_info->mailbox_cv.notify_one(); // wake the loop so it can exit
        
        // Join outside actors_mutex_; the ActorInfo stays alive until the
        // thread has fully exited, avoiding a use-after-free in actor_loop
        if (actor_info->actor_thread.joinable()) {
            actor_info->actor_thread.join();
        }
    }
    
    // Shut down the whole system
    void shutdown() {
        std::vector<std::unique_ptr<ActorInfo>> actors_to_stop;
        {
            std::lock_guard<std::mutex> lock(actors_mutex_);
            for (auto& [id, actor_info] : actors_) {
                actor_info->active = false;
                actor_info->mailbox_cv.notify_one();
                actors_to_stop.push_back(std::move(actor_info));
            }
            actors_.clear();
        }
        
        // Keep each ActorInfo alive until its thread has fully exited
        for (auto& actor_info : actors_to_stop) {
            if (actor_info->actor_thread.joinable()) {
                actor_info->actor_thread.join();
            }
        }
        
        stats_.active_actors = 0;
    }
    
    // Read the system statistics (returned by reference: the struct holds
    // std::atomic members, which are not copyable)
    const SystemStatistics& get_statistics() const {
        return stats_;
    }
    
    // Snapshot the dead-letter queue
    std::vector<DeadLetter> get_dead_letters() const {
        std::lock_guard<std::mutex> lock(dead_letter_mutex_);
        std::vector<DeadLetter> result;
        std::queue<DeadLetter> temp_queue = dead_letter_queue_;
        
        while (!temp_queue.empty()) {
            result.push_back(temp_queue.front());
            temp_queue.pop();
        }
        
        return result;
    }
    
private:
    void actor_loop(ActorId id, ActorInfo* actor_info) {
        while (actor_info->active) {
            std::unique_lock<std::mutex> lock(actor_info->mailbox_mutex);
            
            // Wait for a message, shutdown, or timeout
            actor_info->mailbox_cv.wait_for(lock, std::chrono::milliseconds(100),
                [actor_info]() { return !actor_info->mailbox.empty() || !actor_info->active; });
            
            if (!actor_info->active) {
                break;
            }
            
            // Drain all pending messages
            while (!actor_info->mailbox.empty()) {
                Message message = std::move(actor_info->mailbox.front());
                actor_info->mailbox.pop();
                lock.unlock();
                
                // Measure per-message processing latency
                auto start_time = std::chrono::system_clock::now();
                
                try {
                    // Invoke the user-supplied message handler
                    actor_info->handler(message);
                    stats_.processed_messages++;
                } catch (const std::exception& e) {
                    handle_actor_exception(id, message, e.what());
                }
                
                auto end_time = std::chrono::system_clock::now();
                auto latency = std::chrono::duration_cast<std::chrono::microseconds>(
                    end_time - start_time);
                
                // Update the running average latency
                update_average_latency(latency.count());
                
                lock.lock();
            }
        }
    }
    
    void handle_dead_letter(ActorId recipient, const Message& message, 
                           const std::string& reason) {
        std::lock_guard<std::mutex> lock(dead_letter_mutex_);
        
        DeadLetter dead_letter{
            recipient,
            message,
            std::chrono::system_clock::now(),
            reason
        };
        
        dead_letter_queue_.push(std::move(dead_letter));
        stats_.dead_letter_count++;
    }
    
    void handle_actor_exception(ActorId id, const Message& message, 
                               const std::string& error) {
        // Log the actor failure (the message is kept for potential retry logic)
        std::cerr << "Actor " << id << " encountered error: " << error << std::endl;
        
        // Optionally stop the failing actor or apply another recovery policy
        // stop_actor(id);
    }
    
    void update_average_latency(int64_t latency_microseconds) {
        // Exponential moving average; the CAS loop makes the
        // read-modify-write on the atomic double race-free
        double current_avg = stats_.average_message_latency.load();
        double new_avg = current_avg * 0.9 + latency_microseconds * 0.1;
        while (!stats_.average_message_latency.compare_exchange_weak(current_avg, new_avg)) {
            new_avg = current_avg * 0.9 + latency_microseconds * 0.1;
        }
    }
};
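
The following is a minimal usage sketch for the ActorSystem above, assuming std::string messages; the actor name "logger" and the message contents are illustrative only.
cpp
// Usage sketch: one logging actor (assumes the ActorSystem definition above)
int main() {
    ActorSystem<std::string> system;
    
    // The handler runs on the actor's own thread, one message at a time
    auto logger = system.create_actor("logger", [](const std::string& msg) {
        std::cout << "[logger] " << msg << std::endl;
    });
    
    system.send_message(logger, "service started");
    system.broadcast_message("heartbeat");
    
    // Give the actor a moment to drain its mailbox before shutting down
    std::this_thread::sleep_for(std::chrono::milliseconds(50));
    system.shutdown();
    return 0;
}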

26.4.2 Concurrent Data Structures

Lock-Free Queue
cpp
// Lock-free queue (Michael-Scott algorithm)
#include <atomic>
#include <cstddef>
#include <utility>
template<typename T>
class LockFreeQueue {
private:
    struct Node {
        std::atomic<T*> data;
        std::atomic<Node*> next;
        
        Node() : data(nullptr), next(nullptr) {}
    };
    
    std::atomic<Node*> head_;
    std::atomic<Node*> tail_;
    std::atomic<size_t> size_{0};
    
public:
    LockFreeQueue() {
        Node* dummy = new Node();
        head_.store(dummy);
        tail_.store(dummy);
    }
    
    ~LockFreeQueue() {
        // Free remaining nodes and any unconsumed payloads
        while (Node* old_head = head_.load()) {
            head_.store(old_head->next.load());
            delete old_head->data.load();
            delete old_head;
        }
    }
    
    // Enqueue a value at the tail
    bool enqueue(const T& value) {
        Node* new_node = new Node();
        T* new_data = new T(value);
        new_node->data.store(new_data);
        
        Node* prev_tail = tail_.load();
        
        while (true) {
            Node* tail_next = prev_tail->next.load();
            
            if (tail_next == nullptr) {
                // Try to link the new node at the tail
                if (prev_tail->next.compare_exchange_weak(tail_next, new_node)) {
                    // Linked successfully; now try to swing the tail pointer
                    tail_.compare_exchange_weak(prev_tail, new_node);
                    size_.fetch_add(1);
                    return true;
                }
            } else {
                // Another thread appended a node; help advance the tail pointer
                tail_.compare_exchange_weak(prev_tail, tail_next);
                prev_tail = tail_.load();
            }
        }
    }
    
    // Dequeue a value from the head; returns false if the queue is empty.
    // Caution: deleting nodes here is only safe if no other thread can still
    // hold a reference; production code needs hazard pointers or epoch-based
    // reclamation to avoid use-after-free and ABA problems.
    bool dequeue(T& result) {
        while (true) {
            Node* old_head = head_.load();
            Node* old_tail = tail_.load();
            Node* head_next = old_head->next.load();
            
            if (old_head == old_tail) {
                if (head_next == nullptr) {
                    // Queue is empty
                    return false;
                }
                // Queue is in an intermediate state; help advance the tail
                tail_.compare_exchange_weak(old_tail, head_next);
            } else {
                if (head_next == nullptr) {
                    continue; // inconsistent snapshot, retry
                }
                
                T* data = head_next->data.load();
                if (data == nullptr) {
                    continue; // payload not yet visible, retry
                }
                
                // Try to swing the head pointer forward
                if (head_.compare_exchange_weak(old_head, head_next)) {
                    result = std::move(*data);
                    delete data;
                    delete old_head; // release the old dummy/head node
                    size_.fetch_sub(1);
                    return true;
                }
            }
        }
    }
    
    // Approximate size (may lag behind under concurrent updates)
    size_t size() const {
        return size_.load();
    }
    
    // Check whether the queue is empty (a snapshot, racy by nature)
    bool empty() const {
        Node* head = head_.load();
        Node* tail = tail_.load();
        Node* head_next = head->next.load();
        
        return head == tail && head_next == nullptr;
    }
};
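
A short single-producer/single-consumer sketch for the queue above; keeping to one consumer sidesteps the reclamation caveat noted in dequeue, and the element count is an arbitrary illustrative value.
cpp
// Producer/consumer sketch for LockFreeQueue (illustrative values only)
#include <iostream>
#include <thread>

int main() {
    LockFreeQueue<int> queue;
    
    std::thread producer([&queue]() {
        for (int i = 0; i < 1000; ++i) {
            queue.enqueue(i);
        }
    });
    
    std::thread consumer([&queue]() {
        int value = 0;
        int received = 0;
        while (received < 1000) {
            if (queue.dequeue(value)) {
                ++received; // spin until the producer catches up
            }
        }
        std::cout << "consumed " << received << " items" << std::endl;
    });
    
    producer.join();
    consumer.join();
    return 0;
}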
Concurrent Hash Map
cpp
// Hash map with per-segment (striped) locking
#include <atomic>
#include <cstddef>
#include <functional>
#include <memory>
#include <mutex>
#include <vector>
template<typename Key, typename Value, typename Hash = std::hash<Key>>
class ConcurrentHashMap {
private:
    static constexpr size_t DEFAULT_SEGMENT_COUNT = 16;
    static constexpr size_t DEFAULT_SEGMENT_SIZE = 16;
    
    struct Node {
        Key key;
        Value value;
        std::atomic<Node*> next;
        
        Node(const Key& k, const Value& v) : key(k), value(v), next(nullptr) {}
    };
    
    struct Segment {
        mutable std::mutex mutex;
        std::atomic<Node*> buckets[DEFAULT_SEGMENT_SIZE];
        std::atomic<size_t> size{0};
        
        Segment() {
            for (auto& bucket : buckets) {
                bucket.store(nullptr);
            }
        }
        
        ~Segment() {
            for (auto& bucket : buckets) {
                Node* current = bucket.load();
                while (current) {
                    Node* next = current->next.load();
                    delete current;
                    current = next;
                }
            }
        }
    };
    
    std::vector<std::unique_ptr<Segment>> segments_;
    Hash hasher_;
    
public:
    explicit ConcurrentHashMap(size_t segment_count = DEFAULT_SEGMENT_COUNT,
                              const Hash& hash = Hash())
        : hasher_(hash) {
        
        segments_.reserve(segment_count);
        for (size_t i = 0; i < segment_count; ++i) {
            segments_.push_back(std::make_unique<Segment>());
        }
    }
    
    // Insert a new key or update an existing one
    void insert_or_assign(const Key& key, const Value& value) {
        size_t hash = hasher_(key);
        size_t segment_index = hash % segments_.size();
        auto& segment = segments_[segment_index];
        
        std::lock_guard<std::mutex> lock(segment->mutex);
        
        // Derive the bucket from a different part of the hash; reusing
        // hash % 16 for both would collapse each segment into one bucket
        size_t bucket_index = (hash / segments_.size()) % DEFAULT_SEGMENT_SIZE;
        Node* current = segment->buckets[bucket_index].load();
        
        // Look for an existing node with this key
        while (current) {
            if (current->key == key) {
                // Key already present: update the value in place
                current->value = value;
                return;
            }
            current = current->next.load();
        }
        
        // Not found: create a new node and splice it at the head of the chain
        Node* new_node = new Node(key, value);
        new_node->next = segment->buckets[bucket_index].load();
        segment->buckets[bucket_index].store(new_node);
        segment->size.fetch_add(1);
    }
    
    // Look up a key; copies the value into result on success
    bool find(const Key& key, Value& result) const {
        size_t hash = hasher_(key);
        size_t segment_index = hash % segments_.size();
        const auto& segment = segments_[segment_index];
        
        std::lock_guard<std::mutex> lock(segment->mutex);
        
        size_t bucket_index = (hash / segments_.size()) % DEFAULT_SEGMENT_SIZE;
        Node* current = segment->buckets[bucket_index].load();
        
        while (current) {
            if (current->key == key) {
                result = current->value;
                return true;
            }
            current = current->next.load();
        }
        
        return false;
    }
    
    // Remove a key; returns true if it was present
    bool erase(const Key& key) {
        size_t hash = hasher_(key);
        size_t segment_index = hash % segments_.size();
        auto& segment = segments_[segment_index];
        
        std::lock_guard<std::mutex> lock(segment->mutex);
        
        size_t bucket_index = (hash / segments_.size()) % DEFAULT_SEGMENT_SIZE;
        Node* current = segment->buckets[bucket_index].load();
        Node* prev = nullptr;
        
        while (current) {
            if (current->key == key) {
                // Unlink the node from the chain
                if (prev) {
                    prev->next = current->next.load();
                } else {
                    segment->buckets[bucket_index].store(current->next.load());
                }
                
                delete current;
                segment->size.fetch_sub(1);
                return true;
            }
            
            prev = current;
            current = current->next.load();
        }
        
        return false;
    }
    
    // Total element count across all segments (approximate under concurrency)
    size_t size() const {
        size_t total_size = 0;
        for (const auto& segment : segments_) {
            total_size += segment->size.load();
        }
        return total_size;
    }
    
    // Remove all entries
    void clear() {
        for (auto& segment : segments_) {
            std::lock_guard<std::mutex> lock(segment->mutex);
            
            for (auto& bucket : segment->buckets) {
                Node* current = bucket.load();
                while (current) {
                    Node* next = current->next.load();
                    delete current;
                    current = next;
                }
                bucket.store(nullptr);
            }
            
            segment->size.store(0);
        }
    }
    
    // Thread-safe traversal: func runs under each segment's lock, so it must not re-enter the map
    template<typename Func>
    void for_each(Func func) const {
        for (const auto& segment : segments_) {
            std::lock_guard<std::mutex> lock(segment->mutex);
            
            for (const auto& bucket : segment->buckets) {
                Node* current = bucket.load();
                while (current) {
                    func(current->key, current->value);
                    current = current->next.load();
                }
            }
        }
    }
};
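
A brief usage sketch for the map above; the key and value types and the loop counts are illustrative assumptions.
cpp
// Usage sketch for ConcurrentHashMap (key/value types are illustrative)
#include <iostream>
#include <string>
#include <thread>

int main() {
    ConcurrentHashMap<std::string, int> counters;
    
    // Writers on different keys usually contend only on segment locks
    std::thread t1([&counters]() {
        for (int i = 0; i < 100; ++i) counters.insert_or_assign("requests", i);
    });
    std::thread t2([&counters]() {
        for (int i = 0; i < 100; ++i) counters.insert_or_assign("errors", i);
    });
    t1.join();
    t2.join();
    
    int value = 0;
    if (counters.find("requests", value)) {
        std::cout << "requests = " << value << std::endl;
    }
    
    counters.for_each([](const std::string& key, const int& v) {
        std::cout << key << " -> " << v << std::endl;
    });
    return 0;
}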

26.5 Integrated Project Architecture

26.5.1 System Architecture Design

Microservice Architecture
cpp
// Microservice registry
#include <algorithm>
#include <atomic>
#include <chrono>
#include <cstdint>
#include <map>
#include <memory>
#include <mutex>
#include <string>
#include <thread>
#include <unordered_map>
#include <vector>
template<typename ServiceInterface>
class ServiceRegistry {
private:
    struct ServiceInstance {
        std::string service_id;
        std::string host;
        uint16_t port;
        std::string version;
        std::map<std::string, std::string> metadata;
        std::chrono::system_clock::time_point registration_time;
        std::chrono::system_clock::time_point last_heartbeat;
        bool healthy{true}; // guarded by services_mutex_; a plain bool keeps the struct copyable
    };
    
    std::unordered_map<std::string, std::vector<std::unique_ptr<ServiceInstance>>> services_;
    mutable std::mutex services_mutex_; // mutable so const accessors can lock it
    
    // Health-check configuration
    struct HealthCheckConfig {
        std::chrono::seconds heartbeat_timeout{30};
        std::chrono::seconds health_check_interval{10};
        size_t max_consecutive_failures = 3;
    };
    
    HealthCheckConfig health_config_;
    std::thread health_check_thread_;
    std::atomic<bool> running_{true};
    
public:
    ServiceRegistry() {
        start_health_checker();
    }
    
    ~ServiceRegistry() {
        running_ = false;
        if (health_check_thread_.joinable()) {
            health_check_thread_.join();
        }
    }
    
    // Register a service instance and return its generated ID
    std::string register_service(const std::string& service_name,
                                const std::string& host,
                                uint16_t port,
                                const std::string& version = "1.0.0",
                                const std::map<std::string, std::string>& metadata = {}) {
        
        auto instance = std::make_unique<ServiceInstance>();
        instance->service_id = generate_service_id();
        instance->host = host;
        instance->port = port;
        instance->version = version;
        instance->metadata = metadata;
        instance->registration_time = std::chrono::system_clock::now();
        instance->last_heartbeat = instance->registration_time;
        
        std::string service_id = instance->service_id;
        
        {
            std::lock_guard<std::mutex> lock(services_mutex_);
            services_[service_name].push_back(std::move(instance));
        }
        
        return service_id;
    }
    
    // Deregister a single service instance
    bool unregister_service(const std::string& service_name, const std::string& service_id) {
        std::lock_guard<std::mutex> lock(services_mutex_);
        
        auto it = services_.find(service_name);
        if (it == services_.end()) {
            return false;
        }
        
        auto& instances = it->second;
        instances.erase(
            std::remove_if(instances.begin(), instances.end(),
                          [&service_id](const std::unique_ptr<ServiceInstance>& instance) {
                              return instance->service_id == service_id;
                          }),
            instances.end()
        );
        
        if (instances.empty()) {
            services_.erase(it);
        }
        
        return true;
    }
    
    // Record a heartbeat from a live instance
    bool send_heartbeat(const std::string& service_name, const std::string& service_id) {
        std::lock_guard<std::mutex> lock(services_mutex_);
        
        auto it = services_.find(service_name);
        if (it == services_.end()) {
            return false;
        }
        
        for (auto& instance : it->second) {
            if (instance->service_id == service_id) {
                instance->last_heartbeat = std::chrono::system_clock::now();
                instance->healthy = true;
                return true;
            }
        }
        
        return false;
    }
    
    // Discover instances of a service (optionally only healthy ones)
    std::vector<ServiceInstance> discover_service(const std::string& service_name,
                                                  bool only_healthy = true) {
        std::lock_guard<std::mutex> lock(services_mutex_);
        
        std::vector<ServiceInstance> result;
        auto it = services_.find(service_name);
        
        if (it != services_.end()) {
            for (const auto& instance : it->second) {
                if (!only_healthy || instance->healthy) {
                    result.push_back(*instance);
                }
            }
        }
        
        return result;
    }
    
    // List all registered service names
    std::vector<std::string> get_service_names() const {
        std::lock_guard<std::mutex> lock(services_mutex_);
        
        std::vector<std::string> names;
        names.reserve(services_.size());
        
        for (const auto& [name, instances] : services_) {
            names.push_back(name);
        }
        
        return names;
    }
    
private:
    std::string generate_service_id() {
        static std::atomic<uint64_t> counter{0};
        return "service-" + std::to_string(counter.fetch_add(1));
    }
    
    void start_health_checker() {
        health_check_thread_ = std::thread([this]() {
            while (running_) {
                std::this_thread::sleep_for(health_config_.health_check_interval);
                
                if (!running_) break;
                
                perform_health_check();
            }
        });
    }
    
    void perform_health_check() {
        std::lock_guard<std::mutex> lock(services_mutex_);
        auto now = std::chrono::system_clock::now();
        
        for (auto& [service_name, instances] : services_) {
            for (auto& instance : instances) {
                auto time_since_heartbeat = std::chrono::duration_cast<std::chrono::seconds>(
                    now - instance->last_heartbeat);
                
                if (time_since_heartbeat > health_config_.heartbeat_timeout) {
                    instance->healthy = false;
                }
            }
        }
    }
};
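
The sketch below walks the registry through one register/heartbeat/discover cycle; the host, port, version, and service names are placeholders, and the unused ServiceInterface parameter is instantiated with void.
cpp
// Registry lifecycle sketch (host/port/service names are placeholders)
#include <iostream>

int main() {
    ServiceRegistry<void> registry;
    
    // Register an instance of "user-service" and keep its generated ID
    std::string id = registry.register_service("user-service", "10.0.0.5", 8080,
                                               "1.2.0", {{"zone", "zone-a"}});
    
    // The instance periodically refreshes its liveness
    registry.send_heartbeat("user-service", id);
    
    // A client asks for healthy instances before issuing a request
    auto instances = registry.discover_service("user-service");
    for (const auto& instance : instances) {
        std::cout << instance.host << ":" << instance.port
                  << " (" << instance.version << ")" << std::endl;
    }
    
    registry.unregister_service("user-service", id);
    return 0;
}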

26.5.2 Performance Monitoring and Diagnostics

Performance Metrics Collector
cpp
// Performance metrics collector
#include <algorithm>
#include <chrono>
#include <cmath>
#include <cstdint>
#include <map>
#include <mutex>
#include <numeric>
#include <sstream>
#include <string>
#include <unordered_map>
#include <variant>
#include <vector>
class PerformanceCollector {
public:
    enum class MetricType {
        COUNTER,
        GAUGE,
        HISTOGRAM,
        TIMER
    };
    
    struct MetricValue {
        MetricType type;
        std::variant<int64_t, double, std::vector<double>> value;
        std::chrono::system_clock::time_point timestamp;
        std::map<std::string, std::string> tags;
        
        double get_numeric_value() const {
            // Works for counters, gauges, and scalar histogram samples
            if (std::holds_alternative<int64_t>(value)) {
                return static_cast<double>(std::get<int64_t>(value));
            }
            if (std::holds_alternative<double>(value)) {
                return std::get<double>(value);
            }
            return 0.0; // vector-valued entries carry no single numeric value
        }
    };
    
private:
    std::unordered_map<std::string, std::vector<MetricValue>> metrics_;
    mutable std::mutex metrics_mutex_; // mutable so const accessors can lock it
    
    // Aggregate statistics for a metric
    struct Statistics {
        double min;
        double max;
        double mean;
        double median;
        double percentile_95;
        double percentile_99;
        double standard_deviation;
        size_t sample_count;
    };
    
public:
    // Record a counter value
    void record_counter(const std::string& name, int64_t value,
                       const std::map<std::string, std::string>& tags = {}) {
        std::lock_guard<std::mutex> lock(metrics_mutex_);
        
        MetricValue metric{
            MetricType::COUNTER,
            value,
            std::chrono::system_clock::now(),
            tags
        };
        
        metrics_[name].push_back(std::move(metric));
    }
    
    // Record a gauge value
    void record_gauge(const std::string& name, double value,
                     const std::map<std::string, std::string>& tags = {}) {
        std::lock_guard<std::mutex> lock(metrics_mutex_);
        
        MetricValue metric{
            MetricType::GAUGE,
            value,
            std::chrono::system_clock::now(),
            tags
        };
        
        metrics_[name].push_back(std::move(metric));
    }
    
    // Record a histogram sample
    void record_histogram(const std::string& name, double value,
                         const std::map<std::string, std::string>& tags = {}) {
        std::lock_guard<std::mutex> lock(metrics_mutex_);
        
        MetricValue metric{
            MetricType::HISTOGRAM,
            value,
            std::chrono::system_clock::now(),
            tags
        };
        
        metrics_[name].push_back(std::move(metric));
    }
    
    // Record a duration (converted to microseconds and stored as a histogram)
    template<typename Duration>
    void record_timer(const std::string& name, Duration duration,
                       const std::map<std::string, std::string>& tags = {}) {
        double microseconds = std::chrono::duration_cast<std::chrono::microseconds>(duration).count();
        record_histogram(name, microseconds, tags);
    }
    
    // Compute statistics for a metric, optionally filtered by tags
    Statistics get_statistics(const std::string& name, 
                             const std::map<std::string, std::string>& tag_filter = {}) const {
        std::lock_guard<std::mutex> lock(metrics_mutex_);
        
        auto it = metrics_.find(name);
        if (it == metrics_.end()) {
            return {};
        }
        
        std::vector<double> values;
        for (const auto& metric : it->second) {
            bool match = true;
            for (const auto& [key, value] : tag_filter) {
                auto tag_it = metric.tags.find(key);
                if (tag_it == metric.tags.end() || tag_it->second != value) {
                    match = false;
                    break;
                }
            }
            
            if (match) {
                values.push_back(metric.get_numeric_value());
            }
        }
        
        return calculate_statistics(values);
    }
    
    // Export all metrics as a plain-text report
    std::string export_metrics() const {
        std::lock_guard<std::mutex> lock(metrics_mutex_);
        
        std::ostringstream oss;
        oss << "# Performance Metrics Report\n\n";
        
        for (const auto& [name, metric_values] : metrics_) {
            oss << "## " << name << "\n";
            
            if (!metric_values.empty()) {
                const auto& first_metric = metric_values.front();
                oss << "Type: " << metric_type_to_string(first_metric.type) << "\n";
                
                if (first_metric.type == MetricType::HISTOGRAM) {
                    std::vector<double> values;
                    for (const auto& metric : metric_values) {
                        values.push_back(metric.get_numeric_value());
                    }
                    
                    auto stats = calculate_statistics(values);
                    oss << "Sample Count: " << stats.sample_count << "\n";
                    oss << "Min: " << stats.min << "\n";
                    oss << "Max: " << stats.max << "\n";
                    oss << "Mean: " << stats.mean << "\n";
                    oss << "Median: " << stats.median << "\n";
                    oss << "95th Percentile: " << stats.percentile_95 << "\n";
                    oss << "99th Percentile: " << stats.percentile_99 << "\n";
                    oss << "Standard Deviation: " << stats.standard_deviation << "\n";
                } else {
                    double sum = 0.0;
                    for (const auto& metric : metric_values) {
                        sum += metric.get_numeric_value();
                    }
                    oss << "Count: " << metric_values.size() << "\n";
                    oss << "Sum: " << sum << "\n";
                    oss << "Average: " << (metric_values.empty() ? 0.0 : sum / metric_values.size()) << "\n";
                }
            }
            
            oss << "\n";
        }
        
        return oss.str();
    }
    
private:
    Statistics calculate_statistics(const std::vector<double>& values) const {
        if (values.empty()) {
            return {};
        }
        
        Statistics stats;
        stats.sample_count = values.size();
        
        // Order statistics need a sorted copy
        std::vector<double> sorted_values = values;
        std::sort(sorted_values.begin(), sorted_values.end());
        
        stats.min = sorted_values.front();
        stats.max = sorted_values.back();
        stats.median = calculate_percentile(sorted_values, 0.5);
        stats.percentile_95 = calculate_percentile(sorted_values, 0.95);
        stats.percentile_99 = calculate_percentile(sorted_values, 0.99);
        
        // Mean
        double sum = std::accumulate(values.begin(), values.end(), 0.0);
        stats.mean = sum / values.size();
        
        // Population standard deviation
        double variance = 0.0;
        for (double value : values) {
            variance += (value - stats.mean) * (value - stats.mean);
        }
        variance /= values.size();
        stats.standard_deviation = std::sqrt(variance);
        
        return stats;
    }
    
    double calculate_percentile(const std::vector<double>& sorted_values, double percentile) const {
        if (sorted_values.empty()) return 0.0;
        
        // Nearest-rank method: index into the sorted samples, no interpolation
        size_t index = static_cast<size_t>(percentile * (sorted_values.size() - 1));
        return sorted_values[index];
    }
    
    std::string metric_type_to_string(MetricType type) const {
        switch (type) {
            case MetricType::COUNTER: return "Counter";
            case MetricType::GAUGE: return "Gauge";
            case MetricType::HISTOGRAM: return "Histogram";
            case MetricType::TIMER: return "Timer";
            default: return "Unknown";
        }
    }
};
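
A usage sketch for the collector; the metric names, tags, and the simulated workload are illustrative.
cpp
// Usage sketch for PerformanceCollector (metric names are illustrative)
#include <chrono>
#include <iostream>
#include <thread>

int main() {
    PerformanceCollector collector;
    
    // Time a piece of work and record it as a histogram sample
    auto start = std::chrono::steady_clock::now();
    std::this_thread::sleep_for(std::chrono::milliseconds(5)); // simulated work
    collector.record_timer("request.latency",
                           std::chrono::steady_clock::now() - start,
                           {{"endpoint", "/api/users"}});
    
    collector.record_counter("requests.total", 1);
    collector.record_gauge("queue.depth", 42.0);
    
    std::cout << collector.export_metrics() << std::endl;
    return 0;
}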

26.6 Project Summary and Practice

26.6.1 Architecture Design Principles

1. Layered architecture
  • Presentation layer: user interfaces and API endpoints
  • Business layer: business-logic processing
  • Data layer: data storage and access
  • Infrastructure layer: shared technical components
2. Microservice design
  • Single responsibility: each service owns exactly one business capability
  • Service autonomy: services deploy and scale independently
  • Fault tolerance: service isolation and graceful degradation
  • Decentralization: avoid single points of failure
3. Performance optimization
  • Asynchronous processing: raise system throughput
  • Caching strategies: avoid repeated computation
  • Connection pooling: reuse expensive resources
  • Batching: reduce per-request network overhead

26.6.2 Best-Practice Summary

1. Memory management (a minimal object-pool sketch follows this list)
  • Use object pools to cut allocation overhead
  • Manage resource lifetimes with smart pointers
  • Use memory pools to avoid fragmentation
  • Detect and fix memory leaks regularly
2. Concurrent programming
  • Prefer lock-free data structures where they fit
  • Manage threads through a thread pool
  • Avoid deadlocks and race conditions
  • Use atomic operations for shared counters and flags
3. Network programming
  • Use an event-driven model for high concurrency
  • Reuse network connections through connection pools
  • Use asynchronous I/O to avoid blocking
  • Implement graceful error handling
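
To make the object-pool point in item 1 concrete, here is a minimal, mutex-based sketch; it illustrates the idea rather than a tuned implementation (real pools add capacity limits, alignment, and reset policies).
cpp
// Minimal object-pool sketch: a mutex-guarded free list
#include <memory>
#include <mutex>
#include <vector>

template<typename T>
class ObjectPool {
private:
    std::vector<std::unique_ptr<T>> free_list_;
    std::mutex mutex_;
    
public:
    // Acquire an object, reusing a pooled one when available
    std::unique_ptr<T> acquire() {
        std::lock_guard<std::mutex> lock(mutex_);
        if (!free_list_.empty()) {
            auto obj = std::move(free_list_.back());
            free_list_.pop_back();
            return obj;
        }
        return std::make_unique<T>(); // pool empty: allocate a fresh object
    }
    
    // Return an object to the pool instead of destroying it
    void release(std::unique_ptr<T> obj) {
        std::lock_guard<std::mutex> lock(mutex_);
        free_list_.push_back(std::move(obj));
    }
};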

26.6.3 Performance-Tuning Strategies

1. System-level tuning (see the thread-sizing sketch after this list)
  • CPU: size thread counts to the available cores
  • Memory: reduce copies and allocations
  • I/O: use zero-copy techniques where applicable
  • Network: tune TCP parameters
2. Application-level tuning
  • Algorithms: choose appropriate data structures and algorithms
  • Caching: build multi-level caching strategies
  • Database: use indexes and query optimization judiciously
  • Code: profile first, then eliminate the measured bottlenecks
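
For the CPU item above, a common starting point is to size worker threads to the hardware; the sketch below is a minimal illustration (the fallback value of 4 is an arbitrary assumption).
cpp
// Sketch: size a worker group to the available cores
#include <thread>
#include <vector>

int main() {
    // hardware_concurrency() may return 0 when the count is unknown
    unsigned int n = std::thread::hardware_concurrency();
    if (n == 0) n = 4; // assumed fallback
    
    std::vector<std::thread> workers;
    workers.reserve(n);
    for (unsigned int i = 0; i < n; ++i) {
        workers.emplace_back([] { /* worker loop would go here */ });
    }
    for (auto& t : workers) {
        t.join();
    }
    return 0;
}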

26.6.4 Monitoring and Operations

1. Monitoring metrics
  • System metrics: CPU, memory, disk, and network utilization
  • Application metrics: request volume, response time, error rate
  • Business metrics: user activity, feature usage frequency
2. Alerting (a minimal threshold-alert sketch follows this list)
  • Threshold alerts: a key metric crosses a preset limit
  • Trend alerts: a metric shows an abnormal trend
  • Anomaly alerts: detection of unusual system behavior
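
As a concrete illustration of threshold alerting, the sketch below layers a simple check on top of the PerformanceCollector from 26.5.2; the metric name and limit are placeholder values.
cpp
// Threshold-alert sketch on top of PerformanceCollector (placeholder values)
#include <iostream>
#include <string>

// Fires when a metric's 99th-percentile latency exceeds a fixed limit.
// Assumes the PerformanceCollector defined in section 26.5.2.
bool check_latency_alert(const PerformanceCollector& collector,
                         const std::string& metric,
                         double p99_limit_us) {
    auto stats = collector.get_statistics(metric);
    if (stats.sample_count == 0) {
        return false; // no samples yet, nothing to alert on
    }
    if (stats.percentile_99 > p99_limit_us) {
        std::cerr << "ALERT: " << metric << " p99 = " << stats.percentile_99
                  << " us exceeds limit " << p99_limit_us << " us" << std::endl;
        return true;
    }
    return false;
}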

26.6.5 Project Practice Recommendations

1. Development
  • Start with simple features and add complexity incrementally
  • Write unit tests to safeguard code quality
  • Run code reviews to catch latent problems
  • Manage changes with version control
2. Testing
  • Run load tests to validate performance
  • Run fault-injection tests to validate stability
  • Run security tests to uncover vulnerabilities
  • Run integration tests against realistic scenarios
3. Deployment
  • Simplify deployment with containers
  • Use blue-green deployment to minimize downtime
  • Automate the deployment pipeline
  • Establish a reliable rollback mechanism
4. Operations
  • Build a comprehensive monitoring system
  • Prepare incident-response playbooks
  • Perform regular system maintenance
  • Keep optimizing performance based on production data

26.7 References and Further Reading

26.7.1 Core References

  1. Distributed systems theory

    • Lamport, L. "Paxos Made Simple" (2001)
    • Ongaro, D. & Ousterhout, J. "In Search of an Understandable Consensus Algorithm" (2014)
    • Brewer, E. "CAP Twelve Years Later: How the 'Rules' Have Changed" (2012)
  2. Concurrent programming

    • Herlihy, M. & Shavit, N. "The Art of Multiprocessor Programming" (2008)
    • Williams, A. "C++ Concurrency in Action, 2nd Edition" (2019)
  3. Memory management

    • Wilson, P. et al. "Dynamic Storage Allocation: A Survey and Critical Review" (1995)
    • Berger, E. et al. "Hoard: A Scalable Memory Allocator for Multithreaded Applications" (2000)
    • Evans, J. "A Scalable Concurrent malloc(3) Implementation for FreeBSD" (2006)
  4. Network programming

    • Stevens, W. et al. "UNIX Network Programming" (2004)
    • Schmidt, D. et al. "Pattern-Oriented Software Architecture: Patterns for Concurrent and Networked Objects" (2000)

26.7.2 Further Reading

  1. Modern C++

    • Stroustrup, B. "A Tour of C++" (2018)
    • Vandevoorde, D., Josuttis, N. & Gregor, D. "C++ Templates: The Complete Guide, 2nd Edition" (2017)
  2. System architecture design

    • Newman, S. "Building Microservices" (2015)
    • Richards, M. "Software Architecture Patterns" (2015)
    • Kleppmann, M. "Designing Data-Intensive Applications" (2017)
  3. Performance optimization

    • Sutter, H. & Alexandrescu, A. "C++ Coding Standards" (2004)
    • Fog, A. "Optimizing Software in C++" (2020)
    • Gerber, R. et al. "The Software Optimization Cookbook" (2006)
  4. Reference open-source projects

    • Apache Thrift: cross-language service framework
    • gRPC: high-performance RPC framework
    • Boost.Asio: asynchronous I/O library
    • TBB: Intel Threading Building Blocks

26.7.3 Online Resources

  1. Technical documentation

  2. Academic papers

  3. Open-source communities

26.8 Summary

In this chapter we explored the theoretical foundations and practical methods of system-level projects, covering distributed system architecture, high-performance network servers, memory management, and concurrent processing. By pairing theory with working code, we built a complete body of knowledge that lays a solid foundation for developing large C++ system-level applications.

Key takeaways:

  1. Distributed systems theory: understand the CAP theorem, consensus algorithms, and distributed transaction processing
  2. Network programming architecture: master the Reactor/Proactor patterns, thread-pool design, and asynchronous I/O
  3. Memory management: implement efficient allocators, object pools, and reclamation mechanisms
  4. Concurrency models: apply the Actor model, lock-free data structures, and concurrent containers
  5. Monitoring and diagnostics: build a thorough performance-monitoring and fault-diagnosis system

Practical advice:

  1. Proceed incrementally: start with simple subsystems and grow toward the full system
  2. Pair theory with practice: implement the code once you understand the theory
  3. Put performance first: keep measuring and optimize where it matters
  4. Design for failure: handle abnormal conditions and build robust fault tolerance
  5. Improve continuously: refine the design based on real production behavior

This chapter completes the final piece of the advanced C++ study plan, laying the theoretical groundwork and practical skills needed to grow into a capable C++ system architect.
