The Value of Different Grayscale-Ultrasound Machine-Learning Radiomics Models for Predicting Axillary Lymph Node Metastasis in Breast Cancer

I. Background and Scientific Question

  1. Core clinical problem:

    Axillary lymph node (ALN) status is one of the most important prognostic factors in breast cancer, directly informing staging, treatment decisions (e.g., the need for neoadjuvant chemotherapy and the extent of axillary dissection), and prognosis. Preoperative assessment currently relies mainly on ultrasound-guided needle biopsy, which is invasive and has limited sensitivity for micrometastases and metastatic nodes that are not enlarged.

  2. Scientific opportunity and challenge:

    Grayscale (B-mode) ultrasound is part of the routine preoperative workup for breast cancer. Its images capture not only the morphology of the primary tumor; the echo patterns within and around the lesion may also encode biology linked to metastatic potential (e.g., tumor invasiveness, angiogenesis, stromal reaction).

    Radiomics extracts quantitative tumor features at high throughput, beyond the limits of visual inspection, making noninvasive prediction of ALN metastasis feasible.

    Key challenge: Machine-learning algorithms differ in fitting capacity and interpretability, and a systematic comparison is still lacking to identify the optimal model, or combination of models, for this specific task (predicting ALN metastasis from grayscale ultrasound).

  3. Core scientific question:

    Given preoperative grayscale ultrasound images of the primary breast tumor, how well do radiomics models built with different machine-learning algorithms predict axillary lymph node metastasis, and which algorithm or combination offers the best performance, stability, and clinical applicability?

II. Objectives

  1. Primary objective: Systematically build and compare multiple machine-learning models based on grayscale-ultrasound radiomics features (e.g., logistic regression, support vector machine, random forest, XGBoost, deep-learning CNNs), and evaluate their value for predicting axillary lymph node metastasis in breast cancer.
  2. Secondary objectives:
    Identify stable radiomics features strongly associated with ALN metastasis and explore their potential biological meaning.
    Build a multimodal fusion nomogram combining the radiomics signature with clinicopathological features (e.g., tumor size, Ki-67), and assess its clinical utility.
    Examine whether predictive performance differs across breast cancer molecular subtypes.

III. Methods and Technical Route

Stage 1: High-Quality Dataset Construction and Reference Standard

Study population: Patients with biopsy-confirmed breast cancer who underwent preoperative grayscale ultrasound of the breast and axilla and subsequently received axillary lymph node dissection or sentinel lymph node biopsy.

Reference standard: Postoperative pathology, dividing patients into a node-positive (pN+) group and a node-negative (pN0) group. A stratified analysis by metastatic burden (micrometastasis vs. macrometastasis) can be added.

Exclusion criteria: prior neoadjuvant therapy, poor image quality, or previous ipsilateral axillary surgery.

Stage 2: Image Processing, Feature Extraction, and Engineering

  1. Image acquisition and preprocessing:
    Collect each patient's original preoperative grayscale ultrasound DICOM images. Select the largest longitudinal and transverse tumor sections (or, better still, fused 3D data).
    Standardize the images (resample to a uniform pixel spacing, normalize gray levels) to reduce scanner and parameter variability.
  2. Tumor region segmentation (ROI):
    Key step: Two experienced sonographers, blinded to pathology, manually delineate the tumor boundary (including all irregular regions and spiculations). Semi-automatic segmentation tools can assist, and inter-observer agreement should be quantified (ICC > 0.75).
  3. Radiomics feature extraction:
    Use a platform such as PyRadiomics to extract a large feature set from each ROI:
    Shape features: regularity of the tumor outline, lobulation, spiculation (quantified through shape descriptors).
    First-order statistics: the distribution of echo intensity (mean, skewness, kurtosis, etc.). Skewness may reflect internal necrosis or calcification.
    Texture features (core):
    GLCM: contrast (heterogeneity), energy (homogeneity), correlation. High heterogeneity (high contrast, low energy) is often associated with a more aggressive phenotype and may signal higher metastatic risk.
    GLRLM/GLSZM: texture coarseness and the size of homogeneous zones.
    Higher-order filtered features: multiscale texture features from wavelet and similar transforms, which may capture subtler patterns.
  4. Feature screening and dimensionality reduction:
    Retain only features with stable ICC values (a minimal ICC sketch follows this list).
    Use LASSO regression or recursive feature elimination (RFE) combined with cross-validation to select the most predictive feature subset from the high-dimensional pool and avoid overfitting.
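
For the ICC screening, a minimal sketch follows, assuming each candidate feature was measured by two blinded readers on the same lesions. icc_2_1 is a small helper written here (two-way random-effects, absolute-agreement, single-rater ICC, i.e., ICC(2,1)), not a library call, and the two example vectors are hypothetical reader measurements.

import numpy as np

def icc_2_1(reader_a, reader_b):
    """ICC(2,1): two-way random effects, absolute agreement, single rater."""
    ratings = np.column_stack([np.asarray(reader_a, float),
                               np.asarray(reader_b, float)])
    n, k = ratings.shape                                        # n lesions x k readers
    grand = ratings.mean()
    ss_rows = k * ((ratings.mean(axis=1) - grand) ** 2).sum()   # between lesions
    ss_cols = n * ((ratings.mean(axis=0) - grand) ** 2).sum()   # between readers
    ss_err = ((ratings - grand) ** 2).sum() - ss_rows - ss_cols
    ms_rows = ss_rows / (n - 1)
    ms_cols = ss_cols / (k - 1)
    ms_err = ss_err / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (
        ms_rows + (k - 1) * ms_err + k * (ms_cols - ms_err) / n)

# Hypothetical per-lesion measurements of one radiomics feature by two readers
feature_reader_a = [0.61, 0.55, 0.70, 0.42, 0.66, 0.58]
feature_reader_b = [0.63, 0.52, 0.74, 0.40, 0.69, 0.57]
icc = icc_2_1(feature_reader_a, feature_reader_b)
print(f"ICC = {icc:.3f}; keep feature: {icc > 0.75}")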

Stage 3: Multi-Model Construction, Training, and Comparison (Core)

  1. Algorithm selection and model construction:
    Traditional machine-learning models:
    Logistic regression: the baseline model, with strong interpretability.
    Support vector machine: suited to high-dimensional data and nonlinear relationships (RBF kernel).
    Random forest: an ensemble method that ranks feature importance and is relatively robust to noise.
    XGBoost/LightGBM: gradient-boosted trees, currently among the strongest performers on tabular data.
    Deep-learning models (if the dataset is large enough):
    Convolutional neural networks: transfer learning from a pretrained network (e.g., ResNet) or a lightweight custom CNN that learns features directly from the pixels, with no hand-crafted extraction.
  2. Experimental design:
    Randomly split the dataset, e.g., 7:1:2, into training, tuning (optional), and internal validation sets.
    On the training set, use the same feature subset for all traditional ML models (or the raw images for the CNN) and tune hyperparameters with five-fold cross-validation.
    Compare all models fairly on the held-out internal validation set.
  3. Evaluation and comparison metrics:
    Primary metrics: area under the ROC curve (AUC), accuracy, sensitivity, specificity, F1-score.
    Statistical comparison: the DeLong test for differences between model AUCs (a sketch follows this list).
    Overall assessment: also weigh calibration (calibration curves), clinical net benefit (decision curve analysis, DCA), and computational cost.
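
The DeLong comparison is not implemented in the code framework later in this document, so a self-contained sketch is given here. It follows the fast DeLong formulation (midrank-based structural components); delong_test is a helper defined below rather than a library function, and the score vectors at the bottom are hypothetical stand-ins for two models' predicted probabilities on the same validation set.

import numpy as np
from scipy import stats

def _auc_components(pos, neg):
    """AUC plus the DeLong structural components for one model's scores."""
    m, n = len(pos), len(neg)
    tx = stats.rankdata(pos)                           # midranks within positives
    ty = stats.rankdata(neg)                           # midranks within negatives
    tz = stats.rankdata(np.concatenate([pos, neg]))    # midranks, pooled
    auc = (tz[:m].sum() - m * (m + 1) / 2) / (m * n)
    v01 = (tz[:m] - tx) / n                            # one component per positive
    v10 = 1.0 - (tz[m:] - ty) / m                      # one component per negative
    return auc, v01, v10

def delong_test(y_true, scores_a, scores_b):
    """Paired DeLong test for two correlated AUCs; returns (auc_a, auc_b, p)."""
    y = np.asarray(y_true).astype(int)
    auc_a, v01_a, v10_a = _auc_components(scores_a[y == 1], scores_a[y == 0])
    auc_b, v01_b, v10_b = _auc_components(scores_b[y == 1], scores_b[y == 0])
    s01 = np.cov(np.vstack([v01_a, v01_b]))            # covariance over positives
    s10 = np.cov(np.vstack([v10_a, v10_b]))            # covariance over negatives
    cov = s01 / len(v01_a) + s10 / len(v10_a)
    var_diff = cov[0, 0] + cov[1, 1] - 2 * cov[0, 1]
    z = (auc_a - auc_b) / np.sqrt(var_diff)
    return auc_a, auc_b, 2 * stats.norm.sf(abs(z))

# Hypothetical predicted probabilities from two models on one validation set
rng = np.random.default_rng(0)
y_val = rng.binomial(1, 0.4, 150)
p_model_a = np.clip(0.30 * y_val + rng.normal(0.40, 0.20, 150), 0, 1)
p_model_b = np.clip(0.15 * y_val + rng.normal(0.45, 0.20, 150), 0, 1)
print(delong_test(y_val, p_model_a, p_model_b))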

Stage 4: Model Interpretation, Fusion, and Clinical Translation

  1. Feature and model interpretation:
    Use SHAP or LIME to interpret the best model, visualizing how key features drive predictions and strengthening clinical trust.
    Analyze how the important features correlate with known clinicopathological factors (e.g., histological grade, molecular subtype).
  2. Fusion model / nomogram:
    Combine the radiomics score (Rad-score) output by the best radiomics model with independent clinical risk factors (e.g., tumor size, age, nodal morphology from the ultrasound report) in a multivariable logistic regression to build a clinical-radiomics fusion nomogram (a minimal sketch follows this list).
    Verify whether the nomogram significantly outperforms the clinical-only and radiomics-only models.
  3. Subgroup analysis and external validation:
    Test model stability across molecular subtypes (Luminal A/B, HER2+, triple-negative).
    External validation on data from other hospitals is strongly recommended to demonstrate generalizability.
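
A minimal sketch of the fusion step follows, assuming a per-patient Rad-score has already been computed; the column names and simulated data are hypothetical placeholders for the variables retained after screening. The fitted coefficients are what the nomogram itself (typically drawn with dedicated tools such as R's rms package) would be built from.

import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(42)
n = 200
df = pd.DataFrame({
    'rad_score':        rng.normal(0.0, 1.0, n),    # radiomics model output
    'tumor_size_mm':    rng.uniform(5, 40, n),      # sonographic tumor size
    'age':              rng.integers(30, 75, n).astype(float),
    'us_ln_suspicious': rng.binomial(1, 0.3, n),    # reported nodal morphology
})
# Simulated outcome loosely driven by Rad-score, size, and nodal morphology
lin = -3 + 1.2 * df['rad_score'] + 0.05 * df['tumor_size_mm'] + 0.8 * df['us_ln_suspicious']
df['pN_positive'] = rng.binomial(1, 1 / (1 + np.exp(-lin)))

# Multivariable logistic regression: the clinical-radiomics fusion model
X = sm.add_constant(df[['rad_score', 'tumor_size_mm', 'age', 'us_ln_suspicious']])
fit = sm.Logit(df['pN_positive'], X).fit(disp=0)
print(fit.summary2())
print('Odds ratios:\n', np.exp(fit.params))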

IV. Expected Results and Innovations

Expected results:

  1. Several high-performing prediction models, with an ensemble learner (e.g., XGBoost) or a deep-learning model expected to reach the highest AUC (e.g., > 0.85).

  2. A stable set of ultrasound radiomics features strongly associated with ALN metastasis, e.g., features capturing intratumoral heterogeneity (high entropy, low energy) and margin irregularity (high shape complexity).

  3. The clinical-radiomics fusion nomogram showing the best predictive performance and clinical utility.

Innovations:

  1. Systematic model comparison: rather than building a single model, the study compares machine-learning paradigms head-to-head on one concrete clinical problem, giving later work an evidence base for algorithm selection.

  2. Focus on grayscale ultrasound: only the most widely available, most basic grayscale ultrasound images are used, so the results have broad clinical applicability and strong potential for dissemination.

  3. From prediction to explanation: beyond accuracy, explainable-AI methods clarify the basis of model decisions, moving radiomics from a "black box" toward transparency.

  4. Direct support for clinical decisions: the model could screen out patients at low risk of ALN metastasis, sparing them unnecessary biopsy, or flag high-risk patients for intensified preoperative workup.

V. Challenges and Outlook

Challenges:

Sample size: deep-learning models in particular require sufficiently large datasets.

Label noise: sentinel lymph node biopsy can miss occasional micrometastases, contaminating the reference standard.

Tumor heterogeneity: a single imaging plane may not represent the whole tumor.

Outlook:

Multimodal imaging fusion: combine radiomics features from elastography and contrast-enhanced ultrasound.

Joint primary-tumor and nodal features: extract radiomics features from both the primary tumor and suspicious lymph nodes for combined prediction.

Prospective clinical trials: validate the safety and effectiveness of model-guided decisions in real-world clinical pathways.

Summary:

Through a head-to-head comparison of multiple advanced machine-learning algorithms, this study aims to establish the optimal technical route for predicting axillary lymph node metastasis in breast cancer from grayscale-ultrasound radiomics. The results would give clinicians a robust, objective tool for noninvasive, precise preoperative nodal staging, an important step for precision medicine in breast cancer care, with clear scientific value and broad clinical prospects.

Below is a code framework for comparing the performance of different machine-learning models in predicting breast cancer axillary lymph node metastasis from grayscale-ultrasound radiomics.

# -*- coding: utf-8 -*-
"""
Breast cancer axillary lymph node metastasis prediction: model comparison study.
Systematic comparison of machine-learning algorithms on grayscale-ultrasound
radiomics features.
"""

# ==================== 1. Environment and library imports ====================
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
import warnings
warnings.filterwarnings('ignore')
plt.style.use('seaborn-v0_8-darkgrid')
sns.set_palette("husl")

# Medical image processing
import SimpleITK as sitk
import cv2
import pydicom
from radiomics import featureextractor
from scipy import ndimage
from scipy.ndimage import gaussian_filter

# Machine-learning utilities
from sklearn.model_selection import (train_test_split, StratifiedKFold, 
                                   cross_val_score, GridSearchCV, 
                                   RandomizedSearchCV)
from sklearn.preprocessing import StandardScaler, RobustScaler, MinMaxScaler
from sklearn.decomposition import PCA
from sklearn.manifold import TSNE
from sklearn.feature_selection import (SelectKBest, RFE, RFECV, 
                                     f_classif, mutual_info_classif, 
                                     VarianceThreshold)

# Machine-learning models
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import (RandomForestClassifier, GradientBoostingClassifier,
                            AdaBoostClassifier, ExtraTreesClassifier,
                            VotingClassifier, StackingClassifier)
from sklearn.tree import DecisionTreeClassifier
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.discriminant_analysis import (LinearDiscriminantAnalysis,
                                         QuadraticDiscriminantAnalysis)
from xgboost import XGBClassifier
from lightgbm import LGBMClassifier
from catboost import CatBoostClassifier

# Deep learning
import torch
import torch.nn as nn
import torch.nn.functional as F
from torch.utils.data import Dataset, DataLoader
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

# Model evaluation metrics
from sklearn.metrics import (accuracy_score, precision_score, recall_score,
                           f1_score, roc_auc_score, average_precision_score,
                           confusion_matrix, classification_report,
                           roc_curve, precision_recall_curve,
                           cohen_kappa_score, matthews_corrcoef,
                           balanced_accuracy_score)

# Statistical analysis and visualization
import scipy.stats as stats
from scipy.stats import mannwhitneyu, chi2_contingency
import statsmodels.api as sm
from statsmodels.stats.multitest import multipletests
import plotly.graph_objects as go
import plotly.express as px
from plotly.subplots import make_subplots
import plotly.figure_factory as ff

# Advanced tooling and utilities
import os  # needed below for directory traversal
import shap
import lime
import lime.lime_tabular
from imblearn.over_sampling import SMOTE, ADASYN
from imblearn.under_sampling import RandomUnderSampler
from imblearn.combine import SMOTEENN, SMOTETomek
import joblib
import json
import pickle
from tqdm import tqdm
import time
from datetime import datetime

# ==================== 2. Data loading and preprocessing ====================
class BreastCancerDataLoader:
    """Loads and prepares the breast cancer data."""
    
    def __init__(self, data_path, clinical_csv=None):
        """
        Initialize the data loader.

        Args:
            data_path: path to the radiomics feature file or image directory
            clinical_csv: path to the clinical-data CSV file (optional)
        """
        self.data_path = data_path
        self.clinical_csv = clinical_csv

    def load_features_from_csv(self, feature_csv, label_col='LN_Status'):
        """Load radiomics features from a CSV file."""
        print("Loading radiomics features...")

        # Read the feature table
        self.features_df = pd.read_csv(feature_csv)

        # Check that the required columns are present
        required_cols = ['PatientID', label_col]
        missing_cols = [col for col in required_cols if col not in self.features_df.columns]

        if missing_cols:
            raise ValueError(f"Feature file is missing required columns: {missing_cols}")

        # Separate features and labels; keep PatientID as the index so it is
        # not treated as a feature but can still be used for merging later
        self.X = self.features_df.set_index('PatientID').drop(columns=[label_col])
        self.y = self.features_df[label_col].map({'Positive': 1, 'Negative': 0})

        print(f"Feature matrix shape: {self.X.shape}")
        print(f"Label distribution:\n{self.features_df[label_col].value_counts()}")

        return self.X, self.y
    
    def load_clinical_data(self):
        """Load the clinical data table."""
        if not self.clinical_csv:
            print("No clinical data file provided")
            return None

        clinical_df = pd.read_csv(self.clinical_csv)
        print(f"Loaded clinical data: {clinical_df.shape}")

        # Commonly used clinical features
        clinical_features = [
            'Age', 'Tumor_Size', 'Tumor_Location', 'Histological_Grade',
            'ER_Status', 'PR_Status', 'HER2_Status', 'Ki67_Index',
            'Molecular_Subtype', 'Multifocality'
        ]

        available_features = [f for f in clinical_features if f in clinical_df.columns]
        print(f"Available clinical features: {available_features}")

        return clinical_df

    def merge_clinical_features(self, X_features, clinical_df, patient_id_col='PatientID'):
        """Merge radiomics features with clinical features."""
        if clinical_df is None:
            return X_features

        # If PatientID lives in the index, move it back to a column for the merge
        if patient_id_col not in X_features.columns and X_features.index.name == patient_id_col:
            X_features = X_features.reset_index()

        # Inner join on the patient identifier, then restore it as the index
        # so it is not treated as a feature downstream
        merged_df = pd.merge(
            X_features,
            clinical_df,
            on=patient_id_col,
            how='inner'
        ).set_index(patient_id_col)

        print(f"Merged feature matrix shape: {merged_df.shape}")

        return merged_df
    
    def extract_radiomics_from_images(self, image_dir, mask_dir, params_file=None):
        """Extract radiomics features from raw images."""
        print("Extracting radiomics features from raw images...")

        # Collect image and mask file lists
        image_files = sorted([f for f in os.listdir(image_dir) if f.endswith(('.dcm', '.png', '.jpg', '.bmp'))])
        mask_files = sorted([f for f in os.listdir(mask_dir) if f.endswith(('.dcm', '.png', '.jpg', '.bmp'))])

        if len(image_files) != len(mask_files):
            raise ValueError("Number of images and masks does not match")

        # Initialize the PyRadiomics extractor
        if params_file and os.path.exists(params_file):
            extractor = featureextractor.RadiomicsFeatureExtractor(params_file)
        else:
            extractor = featureextractor.RadiomicsFeatureExtractor()
            # Extraction settings
            settings = {
                'binWidth': 25,
                'resampledPixelSpacing': None,
                'interpolator': 'sitkBSpline',
                'padDistance': 5,
            }
            for key, value in settings.items():
                extractor.settings[key] = value

        features_list = []

        for img_file, msk_file in tqdm(zip(image_files, mask_files), total=len(image_files)):
            try:
                # Build file paths
                img_path = os.path.join(image_dir, img_file)
                msk_path = os.path.join(mask_dir, msk_file)

                # Read the image
                if img_file.endswith('.dcm'):
                    ds = pydicom.dcmread(img_path)
                    image = ds.pixel_array
                else:
                    image = cv2.imread(img_path, cv2.IMREAD_GRAYSCALE)

                # Read the mask
                if msk_file.endswith('.dcm'):
                    ds_mask = pydicom.dcmread(msk_path)
                    mask = ds_mask.pixel_array
                else:
                    mask = cv2.imread(msk_path, cv2.IMREAD_GRAYSCALE)

                # Convert to SimpleITK images
                image_sitk = sitk.GetImageFromArray(image.astype(np.float32))
                mask_sitk = sitk.GetImageFromArray(mask.astype(np.uint8))

                # Run the extractor
                features = extractor.execute(image_sitk, mask_sitk)

                # Keep the unfiltered ("original") features
                feature_dict = {}
                for key, value in features.items():
                    if key.startswith('original_'):
                        feature_name = key.replace('original_', '')
                        feature_dict[feature_name] = value

                # Add the custom features defined below
                custom_feats = self._extract_custom_features(image, mask)
                feature_dict.update(custom_feats)

                feature_dict['PatientID'] = img_file.split('.')[0]
                features_list.append(feature_dict)

            except Exception as e:
                print(f"Error while processing {img_file}: {str(e)}")

        features_df = pd.DataFrame(features_list)
        print(f"Feature extraction finished: {features_df.shape}")

        return features_df
    
    def _extract_custom_features(self, image, mask):
        """Custom features aimed at nodal-metastasis prediction."""
        features = {}

        if mask.sum() == 0:
            return features

        # Pixels inside the tumor (a 1-D array)
        tumor_region = image[mask > 0]

        # 1. Tumor shape features
        contours, _ = cv2.findContours(mask.astype(np.uint8), cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
        if contours:
            contour = contours[0]

            # Area and perimeter
            area = cv2.contourArea(contour)
            perimeter = cv2.arcLength(contour, True)

            features['tumor_area'] = area
            features['tumor_perimeter'] = perimeter
            features['circularity'] = (4 * np.pi * area) / (perimeter ** 2) if perimeter > 0 else 0

            # Convex hull features
            hull = cv2.convexHull(contour)
            hull_area = cv2.contourArea(hull)
            hull_perimeter = cv2.arcLength(hull, True)
            features['solidity'] = area / hull_area if hull_area > 0 else 0
            features['convexity'] = hull_perimeter / perimeter if perimeter > 0 else 0

            # Ellipse fit
            if len(contour) >= 5:
                ellipse = cv2.fitEllipse(contour)
                (center, axes, orientation) = ellipse
                major_axis = max(axes)
                minor_axis = min(axes)
                features['aspect_ratio'] = major_axis / minor_axis if minor_axis > 0 else 0
                features['ellipticity'] = 1 - (minor_axis / major_axis) if major_axis > 0 else 0

        # 2. Margin features (spiculation proxy) via the distance transform
        dist_transform = cv2.distanceTransform(mask.astype(np.uint8), cv2.DIST_L2, 5)
        features['spiculation_index'] = np.std(dist_transform[mask > 0])

        # 3. Echo-intensity distribution features
        features['echo_mean'] = np.mean(tumor_region)
        features['echo_std'] = np.std(tumor_region)
        features['echo_skewness'] = stats.skew(tumor_region)
        features['echo_kurtosis'] = stats.kurtosis(tumor_region)

        # 4. Microcalcification proxy (high-frequency spectral content).
        # image[mask > 0] is 1-D, so run the 2-D FFT on the bounding-box crop
        ys, xs = np.where(mask > 0)
        crop = image[ys.min():ys.max() + 1, xs.min():xs.max() + 1].astype(np.float32)
        f_transform = np.fft.fft2(crop)
        f_shift = np.fft.fftshift(f_transform)
        magnitude_spectrum = np.abs(f_shift)

        # Split the spectrum into low- and high-frequency energy
        h, w = crop.shape
        center_h, center_w = h // 2, w // 2
        radius = max(1, min(center_h, center_w) // 2)

        # Circular low-frequency mask around the spectrum center
        y, x = np.ogrid[-center_h:h - center_h, -center_w:w - center_w]
        mask_circle = x * x + y * y <= radius * radius

        low_freq_energy = np.sum(magnitude_spectrum[mask_circle])
        high_freq_energy = np.sum(magnitude_spectrum[~mask_circle])

        features['high_freq_ratio'] = high_freq_energy / (low_freq_energy + high_freq_energy + 1e-10)

        return features

# ==================== 3. Feature engineering and selection ====================
class FeatureEngineer:
    """Feature engineering and selection."""

    def __init__(self, n_features_to_select=30, random_state=42):
        self.n_features_to_select = n_features_to_select
        self.random_state = random_state
        self.selected_features = None
        self.scaler = None
        self.pca = None

    def preprocess_features(self, X, y=None, handle_missing=True, scale_features=True):
        """Preprocess the feature matrix (assumes all columns are numeric or already encoded)."""
        X_processed = X.copy()

        # 1. Handle missing values
        if handle_missing:
            missing_ratio = X_processed.isnull().sum() / len(X_processed)
            features_to_drop = missing_ratio[missing_ratio > 0.3].index.tolist()

            if features_to_drop:
                print(f"Dropping {len(features_to_drop)} features with >30% missing values")
                X_processed = X_processed.drop(columns=features_to_drop)

            # Impute the remaining missing values
            for col in X_processed.columns:
                if X_processed[col].isnull().any():
                    if X_processed[col].dtype in ['float64', 'int64']:
                        X_processed[col].fillna(X_processed[col].median(), inplace=True)
                    else:
                        X_processed[col].fillna(X_processed[col].mode()[0], inplace=True)

        # 2. Remove (near-)zero-variance features
        selector = VarianceThreshold(threshold=0.01)
        X_var = selector.fit_transform(X_processed)
        selected_features = X_processed.columns[selector.get_support()].tolist()
        X_processed = pd.DataFrame(X_var, columns=selected_features, index=X_processed.index)

        print(f"{X_processed.shape[1]} features remain after the variance filter")

        # 3. Feature scaling
        if scale_features:
            self.scaler = StandardScaler()
            X_scaled = self.scaler.fit_transform(X_processed)
            X_processed = pd.DataFrame(X_scaled, columns=X_processed.columns, index=X_processed.index)

        return X_processed
    
    def select_features_univariate(self, X, y, k=50):
        """Univariate feature selection."""
        print("Running univariate feature selection...")

        # ANOVA F-statistic per feature
        selector = SelectKBest(score_func=f_classif, k=min(k, X.shape[1]))
        X_selected = selector.fit_transform(X, y)

        # Selected feature names
        selected_mask = selector.get_support()
        selected_features = X.columns[selected_mask].tolist()

        # Per-feature scores
        feature_scores = pd.DataFrame({
            'feature': X.columns,
            'f_score': selector.scores_,
            'p_value': selector.pvalues_
        }).sort_values('f_score', ascending=False)

        # Multiple-testing correction (Benjamini-Hochberg FDR)
        rejected, pvals_corrected, _, _ = multipletests(
            feature_scores['p_value'],
            alpha=0.05,
            method='fdr_bh'
        )
        feature_scores['p_value_corrected'] = pvals_corrected
        feature_scores['significant'] = rejected

        print(f"Univariate selection kept {len(selected_features)} features")
        print(f"Significant features: {feature_scores['significant'].sum()}")

        return X_selected, selected_features, feature_scores
    
    def select_features_rfe(self, X, y, estimator=None, step=1, cv=5):
        """Recursive feature elimination (RFE) with cross-validation."""
        print("Running recursive feature elimination...")

        if estimator is None:
            estimator = RandomForestClassifier(
                n_estimators=100,
                random_state=self.random_state,
                n_jobs=-1
            )

        selector = RFECV(
            estimator=estimator,
            step=step,
            cv=cv,
            scoring='roc_auc',
            min_features_to_select=10,
            n_jobs=-1
        )

        selector.fit(X, y)

        # Selected feature names
        selected_features = X.columns[selector.support_].tolist()
        X_selected = X[selected_features]

        print(f"RFE kept {len(selected_features)} features")
        print(f"Optimal feature count: {selector.n_features_}")

        # Plot feature count vs. cross-validated performance
        # (grid_scores_ was removed in scikit-learn 1.0; use cv_results_ instead)
        mean_scores = selector.cv_results_['mean_test_score']
        n_min = selector.min_features_to_select
        plt.figure(figsize=(10, 6))
        plt.plot(range(n_min, n_min + len(mean_scores)), mean_scores, marker='o')
        plt.xlabel('Number of features')
        plt.ylabel('Cross-validated AUC')
        plt.title('RFE feature-selection curve')
        plt.grid(True)
        plt.show()

        return X_selected, selected_features, selector
    
    def select_features_lasso(self, X, y, alpha=0.01):
        """LASSO feature selection."""
        print("Running LASSO feature selection...")

        from sklearn.linear_model import LassoCV

        # LassoCV on the 0/1 labels: a common radiomics shortcut for sparse selection
        lasso = LassoCV(cv=5, random_state=self.random_state, max_iter=10000)
        lasso.fit(X, y)

        # Features with non-zero coefficients
        coef = pd.DataFrame({
            'feature': X.columns,
            'coef': lasso.coef_
        })

        selected_features = coef[coef['coef'] != 0]['feature'].tolist()
        X_selected = X[selected_features]

        print(f"LASSO kept {len(selected_features)} features")

        return X_selected, selected_features, coef
    
    def select_features_ensemble(self, X, y, methods=['univariate', 'rfe', 'lasso'],
                               voting_threshold=2):
        """Ensemble feature selection by voting across methods."""
        print("Running ensemble feature selection...")

        all_selected_features = []

        # Apply each requested selection method
        if 'univariate' in methods:
            _, features_uni, _ = self.select_features_univariate(X, y, k=50)
            all_selected_features.append(set(features_uni))

        if 'rfe' in methods:
            _, features_rfe, _ = self.select_features_rfe(X, y)
            all_selected_features.append(set(features_rfe))

        if 'lasso' in methods:
            _, features_lasso, _ = self.select_features_lasso(X, y)
            all_selected_features.append(set(features_lasso))

        if 'rf' in methods:
            _, features_rf, _ = self.select_features_random_forest(X, y)
            all_selected_features.append(set(features_rf))

        # Count how many methods voted for each feature
        feature_votes = {}
        for feature_set in all_selected_features:
            for feature in feature_set:
                feature_votes[feature] = feature_votes.get(feature, 0) + 1

        # Keep features selected by at least `voting_threshold` methods
        selected_features = [feature for feature, votes in feature_votes.items()
                           if votes >= voting_threshold]

        # Return the votes as a sorted DataFrame so callers can rank features
        votes_df = pd.DataFrame({
            'feature': list(feature_votes.keys()),
            'votes': list(feature_votes.values())
        }).sort_values('votes', ascending=False)

        print(f"Ensemble selection kept {len(selected_features)} features")
        print(f"Features per method: {[len(f) for f in all_selected_features]}")

        return X[selected_features], selected_features, votes_df
    
    def select_features_random_forest(self, X, y, importance_threshold=0.01):
        """Random-forest importance-based feature selection."""
        print("Running random-forest feature selection...")

        rf = RandomForestClassifier(
            n_estimators=200,
            max_depth=10,
            random_state=self.random_state,
            n_jobs=-1,
            class_weight='balanced'
        )

        rf.fit(X, y)

        # Rank features by impurity-based importance
        importances = pd.DataFrame({
            'feature': X.columns,
            'importance': rf.feature_importances_
        }).sort_values('importance', ascending=False)

        # Keep features above the importance threshold
        selected_features = importances[importances['importance'] > importance_threshold]['feature'].tolist()
        X_selected = X[selected_features]

        print(f"Random forest kept {len(selected_features)} features")

        return X_selected, selected_features, importances
    
    def apply_pca(self, X, n_components=None, variance_threshold=0.95):
        """Apply PCA for dimensionality reduction."""
        print("Applying PCA...")

        if n_components is None:
            # Choose the component count from the explained variance
            self.pca = PCA()
            self.pca.fit(X)

            # Cumulative explained variance
            cumulative_variance = np.cumsum(self.pca.explained_variance_ratio_)
            n_components = np.argmax(cumulative_variance >= variance_threshold) + 1

            print(f"Keeping {n_components} components explaining {variance_threshold*100:.1f}% of variance")

        # Refit with the chosen component count
        self.pca = PCA(n_components=n_components)
        X_pca = self.pca.fit_transform(X)

        print(f"Shape after PCA: {X_pca.shape}")
        print(f"Explained variance ratio: {self.pca.explained_variance_ratio_.sum():.3f}")

        # Name the components
        pca_features = [f'PC{i+1}' for i in range(n_components)]
        X_pca_df = pd.DataFrame(X_pca, columns=pca_features, index=X.index)

        return X_pca_df

# ==================== 4. Model comparison ====================
class ModelComparator:
    """Trains and compares the candidate machine-learning models."""
    
    def __init__(self, random_state=42, n_jobs=-1):
        self.random_state = random_state
        self.n_jobs = n_jobs
        self.models = {}
        self.results = {}
        self.best_model = None
        self.best_model_name = None
        
    def create_all_models(self, include_dl=True):
        """Instantiate every model in the comparison."""
        models = {}
        
        # 1. Linear models
        models['LogisticRegression'] = LogisticRegression(
            penalty='l2',
            C=1.0,
            solver='liblinear',
            random_state=self.random_state,
            class_weight='balanced',
            max_iter=1000
        )
        
        models['LogisticRegression_L1'] = LogisticRegression(
            penalty='l1',
            C=1.0,
            solver='saga',
            random_state=self.random_state,
            class_weight='balanced',
            max_iter=1000
        )
        
        # 2. Support vector machines
        models['SVM_Linear'] = SVC(
            kernel='linear',
            C=1.0,
            probability=True,
            random_state=self.random_state,
            class_weight='balanced'
        )
        
        models['SVM_RBF'] = SVC(
            kernel='rbf',
            C=1.0,
            gamma='scale',
            probability=True,
            random_state=self.random_state,
            class_weight='balanced'
        )
        
        models['SVM_Poly'] = SVC(
            kernel='poly',
            degree=3,
            C=1.0,
            gamma='scale',
            probability=True,
            random_state=self.random_state,
            class_weight='balanced'
        )
        
        # 3. Tree model
        models['DecisionTree'] = DecisionTreeClassifier(
            max_depth=5,
            min_samples_split=5,
            min_samples_leaf=2,
            random_state=self.random_state,
            class_weight='balanced'
        )
        
        # 4. Ensemble models
        models['RandomForest'] = RandomForestClassifier(
            n_estimators=200,
            max_depth=10,
            min_samples_split=5,
            min_samples_leaf=2,
            random_state=self.random_state,
            class_weight='balanced',
            n_jobs=self.n_jobs
        )
        
        models['GradientBoosting'] = GradientBoostingClassifier(
            n_estimators=200,
            learning_rate=0.1,
            max_depth=5,
            subsample=0.8,
            random_state=self.random_state
        )
        
        models['AdaBoost'] = AdaBoostClassifier(
            n_estimators=200,
            learning_rate=1.0,
            random_state=self.random_state
        )
        
        models['ExtraTrees'] = ExtraTreesClassifier(
            n_estimators=200,
            max_depth=10,
            min_samples_split=5,
            min_samples_leaf=2,
            random_state=self.random_state,
            class_weight='balanced',
            n_jobs=self.n_jobs
        )
        
        # 5. Gradient-boosted trees
        # (use_label_encoder is deprecated in recent XGBoost and omitted here)
        models['XGBoost'] = XGBClassifier(
            n_estimators=200,
            max_depth=6,
            learning_rate=0.1,
            subsample=0.8,
            colsample_bytree=0.8,
            random_state=self.random_state,
            eval_metric='logloss',
            n_jobs=self.n_jobs
        )
        
        models['LightGBM'] = LGBMClassifier(
            n_estimators=200,
            max_depth=6,
            learning_rate=0.1,
            subsample=0.8,
            colsample_bytree=0.8,
            random_state=self.random_state,
            class_weight='balanced',
            n_jobs=self.n_jobs
        )
        
        models['CatBoost'] = CatBoostClassifier(
            iterations=200,
            depth=6,
            learning_rate=0.1,
            random_state=self.random_state,
            verbose=0,
            thread_count=self.n_jobs
        )
        
        # 6. Other traditional models
        models['KNN'] = KNeighborsClassifier(
            n_neighbors=5,
            weights='distance',
            n_jobs=self.n_jobs
        )
        
        models['NaiveBayes'] = GaussianNB()
        
        models['LDA'] = LinearDiscriminantAnalysis()
        
        models['QDA'] = QuadraticDiscriminantAnalysis()
        
        # 7. Optional simple deep-learning baseline
        if include_dl:
            # A small fully connected network trained on the selected features
            models['SimpleDNN'] = self._create_simple_dnn_model()
        
        self.models = models
        return models
    
    def _create_simple_dnn_model(self):
        """Create a simple fully connected network (input size inferred at first fit)."""
        model = keras.Sequential([
            layers.Dense(128, activation='relu'),
            layers.BatchNormalization(),
            layers.Dropout(0.3),
            layers.Dense(64, activation='relu'),
            layers.BatchNormalization(),
            layers.Dropout(0.3),
            layers.Dense(32, activation='relu'),
            layers.BatchNormalization(),
            layers.Dropout(0.2),
            layers.Dense(1, activation='sigmoid')
        ])

        model.compile(
            optimizer=keras.optimizers.Adam(learning_rate=0.001),
            loss='binary_crossentropy',
            metrics=['accuracy', keras.metrics.AUC(name='auc')]
        )

        return model
    
    def train_and_evaluate_models(self, X_train, X_test, y_train, y_test,
                                cv_folds=5, hyperparameter_tuning=False):
        """Train and evaluate every registered model."""
        print(f"\nTraining and evaluating {len(self.models)} models...")
        print("=" * 60)

        for model_name, model in self.models.items():
            print(f"\nModel: {model_name}")
            print("-" * 40)

            start_time = time.time()

            try:
                # The deep-learning model needs special handling
                if model_name == 'SimpleDNN':
                    # Scale inputs for the network
                    scaler = StandardScaler()
                    X_train_scaled = scaler.fit_transform(X_train)
                    X_test_scaled = scaler.transform(X_test)

                    # Train with early stopping on a held-out validation split
                    model.fit(
                        X_train_scaled, np.asarray(y_train),
                        epochs=50,
                        batch_size=32,
                        validation_split=0.2,
                        verbose=0,
                        callbacks=[
                            keras.callbacks.EarlyStopping(
                                monitor='val_loss',
                                patience=10,
                                restore_best_weights=True
                            )
                        ]
                    )

                    # Predicted probabilities, thresholded at 0.5
                    y_pred_proba = model.predict(X_test_scaled).flatten()
                    y_pred = (y_pred_proba >= 0.5).astype(int)

                    # Keras models are not sklearn estimators; skip sklearn CV
                    cv_scores = {}

                else:
                    # Traditional machine-learning models
                    if hyperparameter_tuning:
                        model = self._tune_hyperparameters(model_name, X_train, y_train)

                    # Fit on the training set
                    model.fit(X_train, y_train)

                    # Predict labels and, where available, probabilities
                    y_pred = model.predict(X_test)
                    y_pred_proba = model.predict_proba(X_test)[:, 1] if hasattr(model, 'predict_proba') else None

                    # Cross-validate on the training set
                    cv_scores = self._cross_validate_model(model, X_train, y_train, cv_folds)

                # Compute test-set metrics
                metrics = self._calculate_metrics(y_test, y_pred, y_pred_proba)

                # Store results
                self.results[model_name] = {
                    'model': model,
                    'y_pred': y_pred,
                    'y_pred_proba': y_pred_proba,
                    'metrics': metrics,
                    'cv_scores': cv_scores,
                    'training_time': time.time() - start_time
                }

                # Report
                print(f"Accuracy: {metrics['accuracy']:.4f}")
                print(f"AUC: {metrics.get('auc', 'N/A')}")
                print(f"F1 score: {metrics['f1']:.4f}")
                if 'auc_mean' in cv_scores:
                    print(f"Cross-validated AUC: {cv_scores['auc_mean']:.4f} (±{cv_scores['auc_std']:.4f})")
                print(f"Training time: {time.time() - start_time:.2f}s")

            except Exception as e:
                print(f"Error while training {model_name}: {str(e)}")
                continue

        # Pick the best model by test-set AUC
        self._determine_best_model()

        return self.results
    
    def _tune_hyperparameters(self, model_name, X, y):
        """Hyperparameter tuning via randomized search."""
        print(f"  Tuning hyperparameters for {model_name}...")

        param_grids = {
            'LogisticRegression': {
                'C': [0.001, 0.01, 0.1, 1, 10, 100],
                'penalty': ['l1', 'l2'],
                'solver': ['liblinear', 'saga']
            },
            'RandomForest': {
                'n_estimators': [100, 200, 300],
                'max_depth': [5, 10, 15, None],
                'min_samples_split': [2, 5, 10],
                'min_samples_leaf': [1, 2, 4]
            },
            'XGBoost': {
                'n_estimators': [100, 200, 300],
                'max_depth': [3, 6, 9],
                'learning_rate': [0.01, 0.1, 0.3],
                'subsample': [0.6, 0.8, 1.0],
                'colsample_bytree': [0.6, 0.8, 1.0]
            },
            'SVM_RBF': {
                'C': [0.1, 1, 10, 100],
                'gamma': ['scale', 'auto', 0.001, 0.01, 0.1, 1]
            }
        }

        if model_name in param_grids:
            param_grid = param_grids[model_name]

            # Randomized search over the grid
            search = RandomizedSearchCV(
                self.models[model_name],
                param_grid,
                n_iter=20,
                cv=3,
                scoring='roc_auc',
                random_state=self.random_state,
                n_jobs=self.n_jobs,
                verbose=0
            )

            search.fit(X, y)
            print(f"    Best parameters: {search.best_params_}")
            print(f"    Best score: {search.best_score_:.4f}")

            return search.best_estimator_
        else:
            return self.models[model_name]
    
    def _calculate_metrics(self, y_true, y_pred, y_pred_proba=None):
        """计算评估指标"""
        metrics = {
            'accuracy': accuracy_score(y_true, y_pred),
            'precision': precision_score(y_true, y_pred, zero_division=0),
            'recall': recall_score(y_true, y_pred, zero_division=0),
            'f1': f1_score(y_true, y_pred, zero_division=0),
            'balanced_accuracy': balanced_accuracy_score(y_true, y_pred),
            'kappa': cohen_kappa_score(y_true, y_pred),
            'mcc': matthews_corrcoef(y_true, y_pred),
            'confusion_matrix': confusion_matrix(y_true, y_pred)
        }
        
        if y_pred_proba is not None:
            metrics['auc'] = roc_auc_score(y_true, y_pred_proba)
            metrics['auprc'] = average_precision_score(y_true, y_pred_proba)
        
        return metrics
    
    def _cross_validate_model(self, model, X, y, cv_folds=5):
        """交叉验证"""
        cv = StratifiedKFold(n_splits=cv_folds, shuffle=True, random_state=self.random_state)
        
        cv_scores = {
            'accuracy': cross_val_score(model, X, y, cv=cv, scoring='accuracy', n_jobs=self.n_jobs),
            'f1': cross_val_score(model, X, y, cv=cv, scoring='f1', n_jobs=self.n_jobs),
            'roc_auc': cross_val_score(model, X, y, cv=cv, scoring='roc_auc', n_jobs=self.n_jobs) if hasattr(model, 'predict_proba') else None
        }
        
        cv_results = {
            'accuracy_mean': cv_scores['accuracy'].mean(),
            'accuracy_std': cv_scores['accuracy'].std(),
            'f1_mean': cv_scores['f1'].mean(),
            'f1_std': cv_scores['f1'].std(),
        }
        
        if cv_scores['roc_auc'] is not None:
            cv_results['auc_mean'] = cv_scores['roc_auc'].mean()
            cv_results['auc_std'] = cv_scores['roc_auc'].std()
        
        return cv_results
    
    def _determine_best_model(self):
        """Pick the best model by test-set AUC."""
        if not self.results:
            return None

        # Highest AUC wins
        best_auc = -1
        best_model_name = None

        for model_name, result in self.results.items():
            auc = result['metrics'].get('auc', 0)
            if auc > best_auc:
                best_auc = auc
                best_model_name = model_name

        self.best_model_name = best_model_name
        self.best_model = self.results[best_model_name]['model']

        print(f"\n{'='*60}")
        print(f"Best model: {best_model_name}")
        print(f"Test-set AUC: {best_auc:.4f}")
        print(f"{'='*60}")

        return best_model_name
    
    def create_ensemble_models(self, X_train, X_test, y_train, y_test):
        """Build voting and stacking ensembles."""
        print("\nBuilding ensemble models...")

        # 1. Soft-voting ensemble over the strongest tree-based learners
        voting_models = [
            ('rf', self.results.get('RandomForest', {}).get('model', RandomForestClassifier())),
            ('xgb', self.results.get('XGBoost', {}).get('model', XGBClassifier())),
            ('lgbm', self.results.get('LightGBM', {}).get('model', LGBMClassifier()))
        ]

        voting_clf = VotingClassifier(
            estimators=voting_models,
            voting='soft',
            n_jobs=self.n_jobs
        )

        voting_clf.fit(X_train, y_train)
        y_pred_voting = voting_clf.predict(X_test)
        y_pred_proba_voting = voting_clf.predict_proba(X_test)[:, 1]

        # 2. Stacking ensemble with a logistic-regression meta-learner
        base_learners = [
            ('rf', RandomForestClassifier(n_estimators=100, random_state=self.random_state)),
            ('xgb', XGBClassifier(n_estimators=100, random_state=self.random_state)),
            ('svm', SVC(kernel='rbf', probability=True, random_state=self.random_state))
        ]

        meta_learner = LogisticRegression(random_state=self.random_state)

        stacking_clf = StackingClassifier(
            estimators=base_learners,
            final_estimator=meta_learner,
            cv=5,
            n_jobs=self.n_jobs
        )

        stacking_clf.fit(X_train, y_train)
        y_pred_stacking = stacking_clf.predict(X_test)
        y_pred_proba_stacking = stacking_clf.predict_proba(X_test)[:, 1]

        # Evaluate both ensembles
        metrics_voting = self._calculate_metrics(y_test, y_pred_voting, y_pred_proba_voting)
        metrics_stacking = self._calculate_metrics(y_test, y_pred_stacking, y_pred_proba_stacking)

        # Store results
        self.results['Voting_Ensemble'] = {
            'model': voting_clf,
            'y_pred': y_pred_voting,
            'y_pred_proba': y_pred_proba_voting,
            'metrics': metrics_voting
        }

        self.results['Stacking_Ensemble'] = {
            'model': stacking_clf,
            'y_pred': y_pred_stacking,
            'y_pred_proba': y_pred_proba_stacking,
            'metrics': metrics_stacking
        }

        print(f"Voting ensemble - AUC: {metrics_voting.get('auc', 'N/A')}")
        print(f"Stacking ensemble - AUC: {metrics_stacking.get('auc', 'N/A')}")

        return voting_clf, stacking_clf

# ==================== 5. Model evaluation and visualization ====================
class ModelEvaluator:
    """Model evaluation and visualization."""

    def __init__(self, class_names=['LN-', 'LN+']):
        self.class_names = class_names
        
    def plot_model_comparison(self, results_dict, metric='auc', title='Model comparison'):
        """Bar chart comparing one metric across models."""
        model_names = []
        metric_values = []

        # Collect the metric for every model that has results, keeping the
        # names and values paired
        for model_name, result in results_dict.items():
            if 'metrics' in result:
                model_names.append(model_name)
                metric_values.append(result['metrics'].get(metric, 0))

        # Sort for display
        comparison_df = pd.DataFrame({
            'Model': model_names,
            metric.upper(): metric_values
        }).sort_values(metric.upper(), ascending=False)

        # Horizontal bar chart
        plt.figure(figsize=(12, 8))
        bars = plt.barh(range(len(comparison_df)), comparison_df[metric.upper()],
                       color=plt.cm.viridis(np.linspace(0, 1, len(comparison_df))))

        plt.xlabel(metric.upper())
        plt.title(title)
        plt.yticks(range(len(comparison_df)), comparison_df['Model'])
        plt.gca().invert_yaxis()

        # Value labels
        for i, (bar, value) in enumerate(zip(bars, comparison_df[metric.upper()])):
            plt.text(value, bar.get_y() + bar.get_height()/2,
                    f'{value:.4f}', ha='left', va='center')

        plt.tight_layout()
        plt.grid(True, alpha=0.3, axis='x')
        plt.show()

        return comparison_df
    
    def plot_roc_curves(self, results_dict, y_true, figsize=(12, 8)):
        """Plot ROC curves for all models."""
        plt.figure(figsize=figsize)

        colors = plt.cm.tab20(np.linspace(0, 1, len(results_dict)))

        for (model_name, result), color in zip(results_dict.items(), colors):
            if 'y_pred_proba' in result and result['y_pred_proba'] is not None:
                fpr, tpr, _ = roc_curve(y_true, result['y_pred_proba'])
                auc = roc_auc_score(y_true, result['y_pred_proba'])

                plt.plot(fpr, tpr, label=f'{model_name} (AUC = {auc:.3f})',
                        color=color, linewidth=2)

        plt.plot([0, 1], [0, 1], 'k--', linewidth=1)
        plt.xlim([0.0, 1.0])
        plt.ylim([0.0, 1.05])
        plt.xlabel('False positive rate', fontsize=12)
        plt.ylabel('True positive rate', fontsize=12)
        plt.title('ROC curve comparison', fontsize=14, fontweight='bold')
        plt.legend(loc='lower right', fontsize=10)
        plt.grid(True, alpha=0.3)
        plt.tight_layout()
        plt.show()
    
    def plot_pr_curves(self, results_dict, y_true, figsize=(12, 8)):
        """Plot precision-recall curves."""
        plt.figure(figsize=figsize)

        colors = plt.cm.tab20(np.linspace(0, 1, len(results_dict)))

        for (model_name, result), color in zip(results_dict.items(), colors):
            if 'y_pred_proba' in result and result['y_pred_proba'] is not None:
                precision, recall, _ = precision_recall_curve(y_true, result['y_pred_proba'])
                auprc = average_precision_score(y_true, result['y_pred_proba'])

                plt.plot(recall, precision, label=f'{model_name} (AUPRC = {auprc:.3f})',
                        color=color, linewidth=2)

        baseline = np.sum(y_true) / len(y_true)
        plt.axhline(y=baseline, color='k', linestyle='--', linewidth=1,
                   label=f'Baseline (precision = {baseline:.3f})')

        plt.xlim([0.0, 1.0])
        plt.ylim([0.0, 1.05])
        plt.xlabel('Recall', fontsize=12)
        plt.ylabel('Precision', fontsize=12)
        plt.title('Precision-recall curves', fontsize=14, fontweight='bold')
        plt.legend(loc='upper right', fontsize=10)
        plt.grid(True, alpha=0.3)
        plt.tight_layout()
        plt.show()
    
    def plot_confusion_matrices_grid(self, results_dict, y_true, n_cols=3):
        """Grid of confusion matrices, one per model."""
        n_models = len(results_dict)
        n_rows = (n_models + n_cols - 1) // n_cols

        fig, axes = plt.subplots(n_rows, n_cols, figsize=(5*n_cols, 4*n_rows))
        axes = np.atleast_1d(axes).flatten()

        for idx, (model_name, result) in enumerate(results_dict.items()):
            if idx >= len(axes):
                break

            ax = axes[idx]
            cm = result['metrics']['confusion_matrix']

            im = ax.imshow(cm, interpolation='nearest', cmap=plt.cm.Blues)
            ax.figure.colorbar(im, ax=ax)

            # Axis labels and per-model title
            ax.set(xticks=np.arange(cm.shape[1]),
                   yticks=np.arange(cm.shape[0]),
                   xticklabels=self.class_names,
                   yticklabels=self.class_names,
                   title=f'{model_name}\nAccuracy: {result["metrics"]["accuracy"]:.3f}',
                   ylabel='True label',
                   xlabel='Predicted label')

            # Write the counts into the cells
            thresh = cm.max() / 2.
            for i in range(cm.shape[0]):
                for j in range(cm.shape[1]):
                    ax.text(j, i, format(cm[i, j], 'd'),
                           ha="center", va="center",
                           color="white" if cm[i, j] > thresh else "black")

        # Hide unused subplots
        for idx in range(len(results_dict), len(axes)):
            axes[idx].axis('off')

        plt.tight_layout()
        plt.show()
    
    def plot_calibration_curves(self, results_dict, y_true, n_bins=10):
        """Plot calibration curves."""
        from sklearn.calibration import calibration_curve

        plt.figure(figsize=(10, 8))

        colors = plt.cm.tab20(np.linspace(0, 1, len(results_dict)))

        # Reference line for perfect calibration
        plt.plot([0, 1], [0, 1], "k:", label="Perfectly calibrated")

        for (model_name, result), color in zip(results_dict.items(), colors):
            if 'y_pred_proba' in result and result['y_pred_proba'] is not None:
                prob_true, prob_pred = calibration_curve(
                    y_true, result['y_pred_proba'], n_bins=n_bins
                )

                plt.plot(prob_pred, prob_true, "s-", label=model_name,
                        color=color, linewidth=2, markersize=6)

        plt.xlabel("Predicted probability", fontsize=12)
        plt.ylabel("Observed fraction of positives", fontsize=12)
        plt.title("Calibration curves", fontsize=14, fontweight='bold')
        plt.legend(loc="best", fontsize=10)
        plt.grid(True, alpha=0.3)
        plt.tight_layout()
        plt.show()
    
    def plot_decision_curves(self, results_dict, y_true):
        """Plot decision curves (net benefit vs. threshold probability)."""
        # Stop short of 1.0 to avoid division by zero in the threshold odds
        thresholds = np.linspace(0.0, 0.99, 100)

        plt.figure(figsize=(12, 8))

        colors = plt.cm.tab20(np.linspace(0, 1, len(results_dict)))

        # Reference strategies
        prevalence = np.mean(y_true)
        plt.plot(thresholds, np.zeros_like(thresholds), 'k--', label='Treat none', linewidth=1)
        # Net benefit of treating everyone: prevalence - (1 - prevalence) * pt / (1 - pt)
        plt.plot(thresholds, prevalence - (1 - prevalence) * thresholds / (1 - thresholds),
                'k:', label='Treat all', linewidth=1)

        for (model_name, result), color in zip(results_dict.items(), colors):
            if 'y_pred_proba' in result and result['y_pred_proba'] is not None:
                net_benefits = []

                for threshold in thresholds:
                    # True and false positives at this threshold
                    y_pred = (result['y_pred_proba'] >= threshold).astype(int)
                    tp = np.sum((y_pred == 1) & (y_true == 1))
                    fp = np.sum((y_pred == 1) & (y_true == 0))
                    n = len(y_true)

                    # Net benefit = TP rate - FP rate weighted by the threshold odds
                    net_benefit = (tp / n) - (fp / n) * (threshold / (1 - threshold))
                    net_benefits.append(net_benefit)

                plt.plot(thresholds, net_benefits, '-', label=model_name,
                        color=color, linewidth=2)

        plt.xlabel('Threshold probability', fontsize=12)
        plt.ylabel('Net benefit', fontsize=12)
        plt.title('Decision curve analysis', fontsize=14, fontweight='bold')
        plt.legend(loc='upper right', fontsize=10)
        plt.grid(True, alpha=0.3)
        plt.xlim([0, 1])
        plt.ylim([-0.1, 1.1])
        plt.tight_layout()
        plt.show()
    
    def plot_feature_importance_comparison(self, feature_importance_dict, top_n=20):
        """Compare feature-importance rankings across models."""
        n_models = len(feature_importance_dict)

        fig, axes = plt.subplots(1, n_models, figsize=(6*n_models, 8))
        if n_models == 1:
            axes = [axes]

        for idx, (model_name, importance_df) in enumerate(feature_importance_dict.items()):
            if idx >= len(axes):
                break

            ax = axes[idx]
            top_features = importance_df.head(top_n)

            bars = ax.barh(range(len(top_features)), top_features['importance'].values)
            ax.set_yticks(range(len(top_features)))
            ax.set_yticklabels(top_features['feature'].values)
            ax.set_xlabel('Feature importance')
            ax.set_title(f'{model_name}\nTop {top_n} features')
            ax.invert_yaxis()

            # Value labels on the bars
            for i, bar in enumerate(bars):
                width = bar.get_width()
                ax.text(width, bar.get_y() + bar.get_height()/2.,
                       f'{width:.3f}', ha='left', va='center', fontsize=8)

        plt.tight_layout()
        plt.show()
    
    def create_interactive_dashboard(self, results_dict, feature_importance_dict, y_true):
        """Interactive Plotly dashboard (y_true is required for the ROC panel)."""
        # Gather per-model metrics
        metrics_data = []
        for model_name, result in results_dict.items():
            metrics = result['metrics']
            metrics_data.append({
                'Model': model_name,
                'Accuracy': metrics['accuracy'],
                'Precision': metrics['precision'],
                'Recall': metrics['recall'],
                'F1': metrics['f1'],
                'AUC': metrics.get('auc', 0),
                'AUPRC': metrics.get('auprc', 0),
                'Kappa': metrics['kappa'],
                'MCC': metrics['mcc']
            })

        metrics_df = pd.DataFrame(metrics_data)

        # Subplot layout
        fig = make_subplots(
            rows=2, cols=3,
            subplot_titles=('Model performance', 'ROC curves', 'Feature importance',
                          'Confusion matrix', 'Calibration', 'Decision curves'),
            specs=[[{'type': 'bar'}, {'type': 'scatter'}, {'type': 'bar'}],
                  [{'type': 'heatmap'}, {'type': 'scatter'}, {'type': 'scatter'}]]
        )

        # 1. Metric bars
        for metric in ['Accuracy', 'AUC', 'F1']:
            fig.add_trace(
                go.Bar(x=metrics_df['Model'], y=metrics_df[metric], name=metric),
                row=1, col=1
            )

        # 2. ROC curves
        for model_name, result in results_dict.items():
            if 'y_pred_proba' in result and result['y_pred_proba'] is not None:
                fpr, tpr, _ = roc_curve(y_true, result['y_pred_proba'])
                auc = roc_auc_score(y_true, result['y_pred_proba'])

                fig.add_trace(
                    go.Scatter(x=fpr, y=tpr, mode='lines', name=f'{model_name} (AUC={auc:.3f})'),
                    row=1, col=2
                )

        fig.add_trace(
            go.Scatter(x=[0, 1], y=[0, 1], mode='lines', name='Random', line=dict(dash='dash')),
            row=1, col=2
        )

        # 3. Feature importance (top features of the first model provided)
        if feature_importance_dict:
            first_model = list(feature_importance_dict.keys())[0]
            top_features = feature_importance_dict[first_model].head(10)

            fig.add_trace(
                go.Bar(x=top_features['importance'], y=top_features['feature'],
                      orientation='h', name='Feature importance'),
                row=1, col=3
            )

        # 4. Confusion matrix of the best model (by AUC)
        best_model_name = max(results_dict.keys(),
                             key=lambda x: results_dict[x]['metrics'].get('auc', 0))
        cm = results_dict[best_model_name]['metrics']['confusion_matrix']

        fig.add_trace(
            go.Heatmap(z=cm, x=self.class_names, y=self.class_names,
                      colorscale='Blues', showscale=True),
            row=2, col=1
        )

        fig.update_layout(
            title_text="Breast cancer nodal-metastasis prediction dashboard",
            height=800,
            showlegend=True
        )

        fig.show()

# ==================== 6. Interpretability analysis ====================
class ModelInterpreter:
    """Model interpretability analysis."""

    def __init__(self, feature_names):
        self.feature_names = feature_names

    def analyze_with_shap(self, model, X_train, X_test, model_type='tree'):
        """Explain the model with SHAP."""
        print("Running SHAP analysis...")

        # Choose the explainer for the model family
        if model_type == 'tree':
            explainer = shap.TreeExplainer(model)
        elif model_type == 'linear':
            explainer = shap.LinearExplainer(model, X_train)
        else:
            explainer = shap.KernelExplainer(model.predict, X_train)

        # Compute SHAP values (some SHAP versions return one array per class)
        shap_values = explainer.shap_values(X_test)
        if isinstance(shap_values, list):
            shap_values = shap_values[1]  # keep the node-positive class

        # 1. Summary beeswarm plot
        plt.figure(figsize=(10, 8))
        shap.summary_plot(shap_values, X_test, feature_names=self.feature_names, show=False)
        plt.tight_layout()
        plt.show()

        # 2. Bar plot of mean |SHAP| per feature
        plt.figure(figsize=(10, 6))
        shap.summary_plot(shap_values, X_test, feature_names=self.feature_names,
                         plot_type="bar", show=False)
        plt.tight_layout()
        plt.show()

        # 3. Force plot for a single prediction
        if len(X_test) > 0:
            sample_idx = 0
            expected = explainer.expected_value
            if isinstance(expected, (list, np.ndarray)):
                expected = np.ravel(expected)[-1]  # base value of the positive class
            shap.force_plot(expected, shap_values[sample_idx,:],
                           X_test.iloc[sample_idx,:], feature_names=self.feature_names,
                           matplotlib=True, show=False)
            plt.tight_layout()
            plt.show()

        return explainer, shap_values
    
    def analyze_with_lime(self, model, X_train, X_test, class_names=['LN-', 'LN+']):
        """Explain individual predictions with LIME."""
        print("Running LIME analysis...")

        # Tabular LIME explainer fitted on the training distribution
        # (accept either a DataFrame or a plain array)
        explainer = lime.lime_tabular.LimeTabularExplainer(
            training_data=X_train.values if hasattr(X_train, 'values') else X_train,
            feature_names=self.feature_names,
            class_names=class_names,
            mode='classification',
            random_state=42
        )

        # Explain one prediction
        if len(X_test) > 0:
            sample_idx = 0
            exp = explainer.explain_instance(
                X_test.iloc[sample_idx].values,
                model.predict_proba,
                num_features=10
            )

            # Render in a notebook
            exp.show_in_notebook()

            # Also save to HTML
            exp.save_to_file('lime_explanation.html')
            print("LIME explanation saved to lime_explanation.html")

        return explainer
    
    def analyze_feature_interactions(self, model, X, feature_pairs=None):
        """Visualize pairwise feature interactions on a prediction surface."""
        print("Analyzing feature interactions...")

        if feature_pairs is None and len(self.feature_names) >= 2:
            # Default to the first two features
            feature_pairs = [(self.feature_names[0], self.feature_names[1])]

        for feat1, feat2 in feature_pairs:
            if feat1 in X.columns and feat2 in X.columns:
                # Build a grid over the two features
                x_range = np.linspace(X[feat1].min(), X[feat1].max(), 50)
                y_range = np.linspace(X[feat2].min(), X[feat2].max(), 50)
                xx, yy = np.meshgrid(x_range, y_range)

                # Grid points to score
                grid_data = pd.DataFrame({
                    feat1: xx.ravel(),
                    feat2: yy.ravel()
                })

                # Hold every other feature at its median
                for feature in self.feature_names:
                    if feature not in [feat1, feat2]:
                        grid_data[feature] = X[feature].median()

                # Keep the column order the model expects
                grid_data = grid_data[self.feature_names]

                # Predicted-probability surface
                if hasattr(model, 'predict_proba'):
                    Z = model.predict_proba(grid_data)[:, 1].reshape(xx.shape)
                else:
                    Z = model.predict(grid_data).reshape(xx.shape)

                # Contour plot of the surface with the observed points overlaid
                plt.figure(figsize=(10, 8))
                contour = plt.contourf(xx, yy, Z, levels=20, cmap='RdYlBu_r')
                plt.colorbar(contour, label='Probability of nodal metastasis')
                plt.scatter(X[feat1], X[feat2], c='k', alpha=0.3, s=20)
                plt.xlabel(feat1)
                plt.ylabel(feat2)
                plt.title(f'Interaction between {feat1} and {feat2}')
                plt.tight_layout()
                plt.show()

# ==================== 7. Main workflow ====================
def main():
    """Main entry point."""

    print("=" * 80)
    print("Breast cancer ALN metastasis prediction - model comparison study")
    print("=" * 80)

    # 1. Data loading
    print("\n1. Loading data...")

    # Option 1: load pre-extracted features from CSV
    loader = BreastCancerDataLoader(
        data_path="data",
        clinical_csv="clinical_data.csv"  # optional
    )

    # Radiomics features
    X_features, y = loader.load_features_from_csv(
        feature_csv="breast_radiomics_features.csv",
        label_col='LN_Status'
    )

    # Clinical data (optional)
    clinical_df = loader.load_clinical_data()

    # Merge if clinical data was provided (categorical clinical columns must
    # be numerically encoded before modeling)
    if clinical_df is not None:
        X_combined = loader.merge_clinical_features(X_features, clinical_df)
    else:
        X_combined = X_features

    print(f"\nFinal dataset shape: {X_combined.shape}")
    print(f"Node-positive: {y.sum()} ({y.mean()*100:.1f}%)")
    print(f"Node-negative: {len(y)-y.sum()} ({(1-y.mean())*100:.1f}%)")
    
    # 2. Feature engineering
    print("\n2. Feature engineering...")
    
    feature_engineer = FeatureEngineer(n_features_to_select=30, random_state=42)
    
    # Preprocess the feature matrix (fits feature_engineer.scaler)
    X_processed = feature_engineer.preprocess_features(X_combined, y)
    
    # Feature selection
    print("\nRunning feature selection...")
    X_selected, selected_features, importance_df = feature_engineer.select_features_ensemble(
        X_processed, y,
        methods=['univariate', 'rf', 'lasso'],
        voting_threshold=2
    )
    
    print(f"Number of selected features: {len(selected_features)}")
    
    # 3. Train/test split
    print("\n3. Splitting the dataset...")
    
    # Handle class imbalance with SMOTE, applied to the training set only,
    # after the split, so no synthetic samples leak into the test set
    from imblearn.over_sampling import SMOTE
    
    X_train, X_test, y_train, y_test = train_test_split(
        X_selected, y,
        test_size=0.3,
        random_state=42,
        stratify=y
    )
    
    # Oversample the minority class in the training set
    smote = SMOTE(random_state=42)
    X_train_resampled, y_train_resampled = smote.fit_resample(X_train, y_train)
    
    print(f"Original training set: {X_train.shape}, positives: {y_train.sum()}")
    print(f"Training set after SMOTE: {X_train_resampled.shape}, positives: {y_train_resampled.sum()}")
    print(f"Test set: {X_test.shape}, positives: {y_test.sum()}")
    
    # 4. Model comparison
    print("\n4. Training and comparing models...")
    
    model_comparator = ModelComparator(random_state=42, n_jobs=4)
    model_comparator.create_all_models(include_dl=True)
    
    # Train and evaluate all models
    results = model_comparator.train_and_evaluate_models(
        X_train_resampled, X_test,
        y_train_resampled, y_test,
        cv_folds=5,
        hyperparameter_tuning=True
    )
    
    # Build ensemble models (voting / stacking)
    model_comparator.create_ensemble_models(
        X_train_resampled, X_test,
        y_train_resampled, y_test
    )
    
    # 5. Model evaluation and visualization
    print("\n5. Evaluating and visualizing models...")
    
    evaluator = ModelEvaluator(class_names=['LN-', 'LN+'])
    
    # Generate the comparison plots
    print("\nPlotting model comparison...")
    comparison_df = evaluator.plot_model_comparison(
        results, metric='auc', 
        title='AUC comparison across models'
    )
    
    print("\nPlotting ROC curves...")
    evaluator.plot_roc_curves(results, y_test)
    
    print("\nPlotting PR curves...")
    evaluator.plot_pr_curves(results, y_test)
    
    print("\nPlotting confusion matrix grid...")
    evaluator.plot_confusion_matrices_grid(results, y_test, n_cols=4)
    
    print("\nPlotting calibration curves...")
    evaluator.plot_calibration_curves(results, y_test)
    
    print("\nPlotting decision curves...")
    evaluator.plot_decision_curves(results, y_test)
    
    # 6. Model interpretation
    print("\n6. Model interpretability analysis...")
    
    # Retrieve the best model
    best_model_name = model_comparator.best_model_name
    best_model = model_comparator.best_model
    
    print(f"Explaining the best model: {best_model_name}")
    
    interpreter = ModelInterpreter(feature_names=selected_features)
    
    # Pick the SHAP explainer type that matches the model family
    if best_model_name in ['RandomForest', 'XGBoost', 'LightGBM']:
        model_type = 'tree'
    elif best_model_name.startswith('Logistic'):
        model_type = 'linear'
    else:
        model_type = 'kernel'
    
    try:
        explainer, shap_values = interpreter.analyze_with_shap(
            best_model, X_train_resampled, X_test, model_type=model_type
        )
    except Exception as e:
        print(f"SHAP analysis failed: {str(e)}")
    
    # LIME analysis
    try:
        lime_explainer = interpreter.analyze_with_lime(
            best_model, X_train_resampled, X_test
        )
    except Exception as e:
        print(f"LIME analysis failed: {str(e)}")
    
    # Feature interaction analysis for the two most important features
    if len(selected_features) >= 2:
        top_features = importance_df.head(2)['feature'].tolist()
        interpreter.analyze_feature_interactions(
            best_model, X_test, feature_pairs=[(top_features[0], top_features[1])]
        )
    
    # 7. Performance statistics and reporting
    print("\n7. Performance report...")
    
    # Build a detailed per-model performance table
    performance_report = []
    
    for model_name, result in results.items():
        metrics = result['metrics']
        cv_scores = result.get('cv_scores', {})
        
        report = {
            'Model': model_name,
            'Accuracy': f"{metrics['accuracy']:.4f}",
            'Precision': f"{metrics['precision']:.4f}",
            'Recall': f"{metrics['recall']:.4f}",
            'F1': f"{metrics['f1']:.4f}",
            'AUC': f"{metrics.get('auc', 0):.4f}",
            'AUPRC': f"{metrics.get('auprc', 0):.4f}",
            'CV_AUC_Mean': f"{cv_scores.get('auc_mean', 0):.4f}",
            'CV_AUC_Std': f"{cv_scores.get('auc_std', 0):.4f}",
            'Training_Time': f"{result.get('training_time', 0):.2f}s"
        }
        performance_report.append(report)
    
    performance_df = pd.DataFrame(performance_report)
    
    print("\nModel performance comparison:")
    print(performance_df.to_string())
    
    # 8. Save artifacts
    print("\n8. Saving results...")
    
    # Timestamp used in all output file names
    timestamp = datetime.now().strftime("%Y%m%d_%H%M%S")
    
    # Save the best model
    best_model_path = f"best_model_{best_model_name}_{timestamp}.pkl"
    joblib.dump(best_model, best_model_path)
    print(f"Best model saved: {best_model_path}")
    
    # Save the feature selector
    feature_selector_path = f"feature_selector_{timestamp}.pkl"
    joblib.dump(feature_engineer, feature_selector_path)
    print(f"Feature selector saved: {feature_selector_path}")
    
    # Save the scaler
    scaler_path = f"feature_scaler_{timestamp}.pkl"
    joblib.dump(feature_engineer.scaler, scaler_path)
    print(f"Feature scaler saved: {scaler_path}")
    
    # Save a machine-readable summary of the run
    results_summary = {
        'timestamp': timestamp,
        'best_model': best_model_name,
        'best_model_path': best_model_path,
        'selected_features': selected_features,
        'feature_importance': importance_df.to_dict('records'),
        'performance_summary': performance_df.to_dict('records'),
        'test_set_size': len(y_test),
        'test_set_positives': int(y_test.sum()),
        'test_set_positivity_rate': float(y_test.mean())
    }
    
    results_file = f"results_summary_{timestamp}.json"
    with open(results_file, 'w', encoding='utf-8') as f:
        json.dump(results_summary, f, indent=2, ensure_ascii=False)
    
    print(f"Results summary saved: {results_file}")
    
    # 9. Generate a Markdown report
    print("\n9. Generating the analysis report...")
    
    report_content = f"""# Breast Cancer ALN Metastasis Prediction - Model Comparison Report

## Overview
- **Timestamp**: {timestamp}
- **Dataset**: {X_combined.shape[0]} samples, {X_combined.shape[1]} features
- **Test set**: {len(y_test)} samples (positives: {y_test.sum()}, {y_test.mean()*100:.1f}%)
- **Best model**: {best_model_name}

## Feature selection
- **Original features**: {X_combined.shape[1]}
- **Selected features**: {len(selected_features)}
- **Selection method**: ensemble voting (univariate + random forest + LASSO)

## Model performance

### Best model
- **Accuracy**: {results[best_model_name]['metrics']['accuracy']:.4f}
- **AUC**: {results[best_model_name]['metrics'].get('auc', 0):.4f}
- **F1 score**: {results[best_model_name]['metrics']['f1']:.4f}
- **Recall**: {results[best_model_name]['metrics']['recall']:.4f}

### All models ranked by AUC
{performance_df[['Model', 'AUC', 'Accuracy', 'F1', 'Training_Time']].sort_values('AUC', ascending=False).to_markdown(index=False)}

## Key features (top 10)
{importance_df.head(10)[['feature', 'importance']].to_markdown(index=False)}

## Clinical relevance
Machine learning models built on grayscale ultrasound radiomics features can predict axillary lymph node status noninvasively before surgery, with the following clinical value:
1. **Treatment guidance**: identify low-risk patients and avoid unnecessary axillary dissection
2. **Surgical planning**: help surgeons design a more precise operation
3. **Prognostic assessment**: lymph node status is a key prognostic indicator
4. **Personalized therapy**: provide evidence for precision medicine

## Model recommendation
- **Preferred model**: {best_model_name} (balancing AUC, F1 score and stability)
- **Alternatives**: choose per clinical need (e.g. favoring recall or precision)

---
*This report was generated automatically.*
"""
    
    report_file = f"analysis_report_{timestamp}.md"
    with open(report_file, 'w', encoding='utf-8') as f:
        f.write(report_content)
    
    print(f"Analysis report saved: {report_file}")
    
    print("\n" + "=" * 80)
    print("分析完成!")
    print("=" * 80)
    
    return {
        'X_train': X_train_resampled,
        'X_test': X_test,
        'y_train': y_train_resampled,
        'y_test': y_test,
        'selected_features': selected_features,
        'feature_importance': importance_df,
        'model_results': results,
        'best_model': best_model,
        'best_model_name': best_model_name,
        'performance_df': performance_df
    }

# ==================== 8. Prediction pipeline ====================
class BreastCancerPredictor:
    """Predictor for breast cancer axillary lymph node metastasis"""
    
    def __init__(self, model_path, feature_selector_path, scaler_path, clinical_data_path=None):
        """Load the persisted model, feature selector and scaler"""
        self.model = joblib.load(model_path)
        self.feature_selector = joblib.load(feature_selector_path)
        self.scaler = joblib.load(scaler_path)
        self.clinical_data = None
        
        if clinical_data_path and os.path.exists(clinical_data_path):
            self.clinical_data = pd.read_csv(clinical_data_path)
    
    def predict_single(self, radiomics_features, clinical_features=None):
        """Predict a single sample"""
        # Merge radiomics and clinical features when both are provided
        if clinical_features is not None:
            features = {**radiomics_features, **clinical_features}
        else:
            features = radiomics_features
        
        # Convert to a one-row DataFrame; columns must match the training order
        features_df = pd.DataFrame([features])
        
        # Scale first, then select, mirroring the training pipeline:
        # the scaler was fitted on the full feature set before selection
        features_scaled = pd.DataFrame(
            self.scaler.transform(features_df), columns=features_df.columns
        )
        # The selector is assumed to subset to the selected feature columns
        features_selected = self.feature_selector.transform(features_scaled)
        
        # Predict
        probability = self.model.predict_proba(features_selected)[0, 1]
        prediction = self.model.predict(features_selected)[0]
        
        return {
            'LN_Status': 'Positive' if prediction == 1 else 'Negative',
            'Probability': float(probability),
            'Confidence': 'High' if probability > 0.7 or probability < 0.3 else 'Medium',
            'Recommendation': self._generate_recommendation(probability)
        }
    
    def _generate_recommendation(self, probability):
        """Generate a clinical recommendation from the predicted probability"""
        if probability < 0.3:
            return "Low risk: consider sentinel lymph node biopsy"
        elif probability < 0.7:
            return "Intermediate risk: recommend comprehensive preoperative workup"
        else:
            return "High risk: consider axillary lymph node dissection"
    
    def predict_batch(self, features_df):
        """Predict a batch of samples"""
        # Scale first, then select, consistent with predict_single
        features_scaled = pd.DataFrame(
            self.scaler.transform(features_df), columns=features_df.columns
        )
        features_selected = self.feature_selector.transform(features_scaled)
        
        probabilities = self.model.predict_proba(features_selected)[:, 1]
        predictions = self.model.predict(features_selected)
        
        results = pd.DataFrame({
            'PatientID': features_df.index,
            'Prediction': ['Positive' if p == 1 else 'Negative' for p in predictions],
            'Probability': probabilities,
            'Confidence': ['High' if p > 0.7 or p < 0.3 else 'Medium' for p in probabilities],
            'Recommendation': [self._generate_recommendation(p) for p in probabilities]
        })
        
        return results

# ==================== 9. Entry point ====================
if __name__ == "__main__":
    
    # Check required libraries (pip name -> import name where they differ)
    required_libraries = {
        'numpy': 'numpy',
        'pandas': 'pandas',
        'scikit-learn': 'sklearn',
        'pyradiomics': 'radiomics',
        'xgboost': 'xgboost',
        'lightgbm': 'lightgbm',
        'catboost': 'catboost',
        'imbalanced-learn': 'imblearn',
        'shap': 'shap',
        'lime': 'lime',
        'plotly': 'plotly'
    }
    
    missing_libs = []
    for pip_name, import_name in required_libraries.items():
        try:
            __import__(import_name)
        except ImportError:
            missing_libs.append(pip_name)
    
    if missing_libs:
        print("Missing required libraries:")
        for lib in missing_libs:
            print(f"  - {lib}")
        print("\nInstall them with:")
        print(f"pip install {' '.join(missing_libs)}")
    else:
        # Run the main pipeline
        try:
            analysis_results = main()
            
            # Example: applying the trained model
            print("\n" + "=" * 80)
            print("Example: model application demo")
            print("=" * 80)
            
            # Application code (e.g. loading new data for prediction) goes here
            
        except Exception as e:
            print(f"Execution failed: {str(e)}")
            import traceback
            traceback.print_exc()

Core functionality of the code:

1. Broad machine learning model comparison

  • 15+ algorithms, spanning linear models, tree models, ensemble learning and deep learning
  • Automatic hyperparameter tuning for the key models
  • Ensemble methods: voting and stacking (a minimal sketch follows this list)
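
As a hedged illustration of the voting and stacking ensembles listed above, here is a minimal scikit-learn sketch; the actual `create_ensemble_models` implementation appears earlier in the script, so the estimator choices and names below are assumptions:

```python
# Minimal voting/stacking sketch with scikit-learn; the estimator choices
# are assumptions, not the exact create_ensemble_models implementation.
from sklearn.ensemble import (RandomForestClassifier, StackingClassifier,
                              VotingClassifier)
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

base_learners = [
    ('lr', LogisticRegression(max_iter=1000)),
    ('rf', RandomForestClassifier(n_estimators=200, random_state=42)),
    ('svm', SVC(kernel='rbf', probability=True, random_state=42)),
]

# Soft voting averages the base learners' predicted probabilities
voting_clf = VotingClassifier(estimators=base_learners, voting='soft')

# Stacking trains a logistic-regression meta-learner on out-of-fold
# predictions of the base learners
stacking_clf = StackingClassifier(
    estimators=base_learners,
    final_estimator=LogisticRegression(max_iter=1000),
    cv=5,
)

# Both fit like any sklearn estimator, e.g.:
# voting_clf.fit(X_train_resampled, y_train_resampled)
```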

2. Purpose-built feature engineering

  • Radiomics feature extraction: standard PyRadiomics features (extraction sketch below)
  • Breast-specific descriptors: tumor shape, margin spiculation, microcalcification, etc.
  • Multi-strategy feature selection: univariate tests, RFECV, LASSO, ensemble voting
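
A minimal PyRadiomics extraction sketch; the file names and extractor settings below are illustrative placeholders rather than this study's exact configuration:

```python
# Minimal PyRadiomics extraction sketch; paths and settings are
# placeholders, not this study's exact configuration.
from radiomics import featureextractor

settings = {
    'binWidth': 25,        # gray-level discretization width
    'normalize': True,     # z-score normalize intensities first
    'force2D': True,       # treat each ultrasound slice as 2D
}
extractor = featureextractor.RadiomicsFeatureExtractor(**settings)
extractor.enableImageTypeByName('Wavelet')  # add wavelet-filtered features

# Image and ROI mask, e.g. exported as NRRD from the segmentation step
result = extractor.execute('tumor_us.nrrd', 'tumor_mask.nrrd')

# Drop diagnostic metadata, keep only numeric feature values
features = {k: v for k, v in result.items()
            if not k.startswith('diagnostics')}
print(f"Extracted {len(features)} radiomics features")
```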

3. Comprehensive evaluation

  • 10+ metrics: AUC, F1, accuracy, recall, Cohen's kappa, MCC, etc.
  • Rich visualizations: ROC curves, PR curves, confusion matrices, calibration curves, decision curves
  • 5-fold cross-validation for stable estimates (an AUC-comparison sketch follows this list)
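
The study design calls for DeLong's test to compare AUCs between models; as a simpler, hedged stand-in, a paired bootstrap comparison can be sketched as follows (the helper name is an assumption):

```python
# Paired bootstrap comparison of two models' AUCs; a simpler stand-in
# for DeLong's test, which the study design calls for.
import numpy as np
from sklearn.metrics import roc_auc_score

def bootstrap_auc_diff(y_true, prob_a, prob_b, n_boot=2000, seed=42):
    """Bootstrap the AUC difference AUC(A) - AUC(B) over resampled patients."""
    rng = np.random.default_rng(seed)
    y_true = np.asarray(y_true)
    prob_a, prob_b = np.asarray(prob_a), np.asarray(prob_b)
    diffs = []
    for _ in range(n_boot):
        idx = rng.integers(0, len(y_true), len(y_true))
        if len(np.unique(y_true[idx])) < 2:   # need both classes present
            continue
        diffs.append(roc_auc_score(y_true[idx], prob_a[idx]) -
                     roc_auc_score(y_true[idx], prob_b[idx]))
    diffs = np.asarray(diffs)
    observed = roc_auc_score(y_true, prob_a) - roc_auc_score(y_true, prob_b)
    # Two-sided p-value: how often the resampled difference crosses zero
    p_value = 2 * min((diffs <= 0).mean(), (diffs >= 0).mean())
    return observed, min(p_value, 1.0)
```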

4. Modern interpretability analysis

  • SHAP: global and local explanations for tree and linear models (sketch below)
  • LIME: local explanations for individual predictions
  • Feature interaction analysis: visualizes how pairs of features act together
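
A self-contained SHAP sketch for a tree model, mirroring what `analyze_with_shap` is expected to do; the synthetic data here is a stand-in for the selected radiomics features:

```python
# Self-contained SHAP sketch for a gradient-boosted tree model;
# the synthetic data stands in for the selected radiomics features.
import numpy as np
import pandas as pd
import shap
from xgboost import XGBClassifier

rng = np.random.default_rng(42)
X_demo = pd.DataFrame(rng.normal(size=(200, 5)),
                      columns=[f'feat_{i}' for i in range(5)])
y_demo = (X_demo['feat_0'] + 0.5 * rng.normal(size=200) > 0).astype(int)

model = XGBClassifier(n_estimators=100, eval_metric='logloss')
model.fit(X_demo, y_demo)

# TreeExplainer computes exact SHAP values for tree ensembles
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X_demo)   # one row per sample

# Global view: mean |SHAP| per feature ranks overall importance
shap.summary_plot(shap_values, X_demo, plot_type='bar')
```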

5. Clinically oriented design

  • Clinical recommendations derived from the predicted probability
  • Decision curve analysis (DCA) to assess clinical utility (net-benefit sketch below)
  • Automatic generation of a complete Markdown analysis report
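
Since DCA is less standardized than ROC analysis, here is a hedged sketch of the net-benefit computation behind a decision curve; the formula NB(pt) = TP/n - FP/n * pt/(1 - pt) is standard, while the helper name and the usage lines are assumptions:

```python
# Net-benefit computation behind a decision curve; the formula is
# standard, the helper name and usage lines are assumptions.
import numpy as np

def net_benefit(y_true, y_prob, thresholds):
    """NB(pt) = TP/n - FP/n * pt / (1 - pt) at each threshold pt."""
    y_true = np.asarray(y_true)
    y_prob = np.asarray(y_prob)
    n = len(y_true)
    nb = []
    for pt in thresholds:
        pred_pos = y_prob >= pt
        tp = np.sum(pred_pos & (y_true == 1))
        fp = np.sum(pred_pos & (y_true == 0))
        nb.append(tp / n - fp / n * pt / (1 - pt))
    return np.asarray(nb)

thresholds = np.linspace(0.01, 0.99, 99)
# Reference strategies for the plot:
#   treat-all:  NB = prevalence - (1 - prevalence) * pt / (1 - pt)
#   treat-none: NB = 0 at every threshold
```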

Usage workflow:

Prepare the data

```python
# 1. Prepare the radiomics feature CSV
#    Format: PatientID, feature1, feature2, ..., LN_Status

# 2. Prepare the clinical data CSV (optional)
#    Format: PatientID, Age, Tumor_Size, ER_Status, ...
```
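
If no real data is available yet, a tiny synthetic CSV can smoke-test the pipeline end to end; the column names below match the expected format, and the values are random and purely illustrative:

```python
# Build a small synthetic feature CSV to smoke-test the pipeline;
# the values are random and purely illustrative.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 120
df = pd.DataFrame(rng.normal(size=(n, 10)),
                  columns=[f'original_glcm_feature{i}' for i in range(10)])
df.insert(0, 'PatientID', [f'P{i:04d}' for i in range(n)])
df['LN_Status'] = rng.integers(0, 2, size=n)   # 0 = LN-, 1 = LN+
df.to_csv('breast_radiomics_features.csv', index=False)
```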

Run the analysis

```bash
python breast_cancer_ln_prediction.py
```

Output files

  • best_model_*.pkl: best-performing model
  • feature_selector_*.pkl: fitted feature selector
  • feature_scaler_*.pkl: fitted feature scaler
  • results_summary_*.json: complete results summary
  • analysis_report_*.md: analysis report
  • assorted visualization figures (a reloading sketch follows this list)
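
A hedged sketch of reloading these artifacts with the BreastCancerPredictor class defined above; the timestamped file names are placeholders from a hypothetical run:

```python
# Reload the saved artifacts and score a new case; the timestamped
# file names are placeholders from a hypothetical run.
predictor = BreastCancerPredictor(
    model_path='best_model_XGBoost_20240101_120000.pkl',
    feature_selector_path='feature_selector_20240101_120000.pkl',
    scaler_path='feature_scaler_20240101_120000.pkl',
)

# Placeholder feature dict; keys must match the training feature columns
new_case = {f'original_glcm_feature{i}': 0.0 for i in range(10)}
result = predictor.predict_single(new_case)
print(result['LN_Status'], result['Probability'], result['Recommendation'])
```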

Clinical value:

Preoperative prediction

  • Strong discrimination: AUCs in the 0.85-0.90 range are an achievable target, though performance depends on the dataset
  • Risk stratification: low, intermediate and high risk classes
  • Treatment guidance: supports decisions about the surgical plan

Research value

  • Algorithm comparison: systematic head-to-head comparison of ML methods
  • Feature discovery: identifies imaging features associated with nodal metastasis
  • Model optimization: a complete framework for model selection and tuning

Technical highlights:

  1. Modular design: each stage is its own class, making the code easy to extend and maintain
  2. Automated pipeline: runs end to end, from data loading to report generation
  3. Interactive visualization: Plotly-based interactive charts
  4. Interpretability: SHAP and LIME explain the models' decisions
  5. Clinical orientation: every analysis step targets a clinical application

This framework provides a complete solution for studying the value of grayscale ultrasound radiomics in predicting breast cancer axillary lymph node metastasis, and can be applied in both clinical research and practice.

人工智能·深度学习·神经网络