Kaggle Intermediate ML Part Four: Cross-Validation

What is it?

Cross-validation is a technique used to evaluate the generalizability of a machine learning model. In simpler terms, it helps you understand how well your model will perform on unseen data, which is crucial for real-world applications.

Here's how it works (a short code sketch follows the steps):

  1. Split the data: Your original dataset is divided into folds (usually equally sized).
  2. Hold out one fold: In each round, one fold is set aside for testing (the hold-out set), while the remaining folds are used to train the model.
  3. Evaluate and Repeat: The model is trained on the training folds and evaluated on the hold-out fold. This process is repeated once per fold, so every data point is used for testing exactly once and for training in all the other rounds.
  4. Combine and Analyze: The performance metrics (e.g., accuracy, precision, recall) from each fold are combined to get an overall estimate of the model's performance on unseen data.
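
To make these steps concrete, here is a minimal sketch using scikit-learn's cross_val_score, which bundles all four steps into a single call (the Iris dataset and logistic regression are placeholders; any estimator works):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)

# Steps 1-3: split into 5 folds, train on 4, evaluate on the held-out fold
scores = cross_val_score(LogisticRegression(max_iter=1000), X, y, cv=5)

# Step 4: combine the per-fold scores into one overall estimate
print(scores)  # one accuracy score per fold
print(f"Mean accuracy: {scores.mean():.2f}")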

Common Cross-Validation Techniques:

  • K-Fold Cross-validation: The data is split into k folds, and the training-testing process is repeated k times.
  • Stratified K-Fold: Similar to k-fold, but ensures each fold preserves the distribution of class labels (important for imbalanced datasets; see the sketch after this list).
  • Leave-One-Out Cross-validation (LOOCV): Each data point is used as the testing set once, while all other points are used for training. This is computationally expensive for large datasets.
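
To see the difference between plain and stratified splitting, here's a small sketch on a made-up imbalanced label vector (the data is invented purely for illustration):

import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

# 8 negatives and 2 positives: a toy imbalanced dataset
y = np.array([0, 0, 0, 0, 0, 0, 0, 0, 1, 1])
X = np.arange(len(y)).reshape(-1, 1)  # dummy feature column

# Plain K-Fold splits by position only; per-fold class balance is not guaranteed
for _, test_idx in KFold(n_splits=2, shuffle=True, random_state=0).split(X):
    print("KFold test labels:          ", y[test_idx])

# StratifiedKFold preserves the 8:2 ratio, so each test fold gets one positive
for _, test_idx in StratifiedKFold(n_splits=2, shuffle=True, random_state=0).split(X, y):
    print("StratifiedKFold test labels:", y[test_idx])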

Production Use and Examples:

  • Model Selection: Compare different models and choose the one with the best cross-validation performance.
  • Hyperparameter Tuning: Optimize hyperparameters (model settings) by evaluating their impact on cross-validation performance (sketched in the example after this list).
  • Feature Selection: Identify and remove irrelevant or redundant features that may lead to overfitting.
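
As a sketch of the hyperparameter-tuning case, scikit-learn's GridSearchCV scores every candidate setting by cross-validation (the grid of C values below is arbitrary, chosen only for illustration):

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import GridSearchCV

X, y = load_iris(return_X_y=True)

# Each candidate value of the regularization strength C is scored by 5-fold CV;
# the best-scoring setting is then refit on the full dataset
search = GridSearchCV(
    LogisticRegression(max_iter=1000),
    param_grid={"C": [0.01, 0.1, 1, 10]},
    cv=5,
)
search.fit(X, y)
print(search.best_params_, f"CV accuracy: {search.best_score_:.2f}")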

Here's a complete worked example with an explicit K-Fold loop. One caveat: Iris is a three-class dataset, so the AUC is computed one-vs-rest rather than against a single positive class:
from sklearn.model_selection import KFold
from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

# Load the Iris dataset
iris = load_iris()
X = iris.data
y = iris.target

# Define the model
model = LogisticRegression(max_iter=1000)  # default max_iter can trigger convergence warnings on Iris

# Define the K-Fold cross-validation strategy
# (StratifiedKFold would additionally keep the class ratio constant across folds)
kfold = KFold(n_splits=5, shuffle=True, random_state=42)

# Track performance metrics
auc_scores = []

# Iterate through each fold
for train_index, test_index in kfold.split(X):
    # Split data into training and testing sets
    X_train, X_test = X[train_index], X[test_index]
    y_train, y_test = y[train_index], y[test_index]

    # Train the model on the training data
    model.fit(X_train, y_train)

    # Predict class probabilities on the testing data
    y_proba = model.predict_proba(X_test)  # one column per class

    # Iris has three classes, so compute a one-vs-rest AUC
    auc = roc_auc_score(y_test, y_proba, multi_class="ovr")
    auc_scores.append(auc)

# Print the average AUC across all folds
print(f"Average AUC: {sum(auc_scores) / len(auc_scores):.2f}")