【Python】Section 4: Bias, Variance, and Hyperparameters (from HarvardX)

1. Bias and Variance

1.1 Test Error and Generalization

We evaluate models on both training and test data because a model can do well on training data but poorly on new data. When a model does well on new (test) data, we call this generalization. Models that "generalize well" perform well on data they have never seen.

  • High Noise Level: If the data is very noisy, we will not be able to generalize well, and the noise will give us a high test error.
  • Underfitting: The model is not complex enough to capture the patterns in the data.
  • Overfitting: The model focuses too much on the training data and does not generalize to the test data.

Let's focus in on noise.

1.2 Irreducible and Reducible Error

There are two ways that "noise" can contribute to the generalization error:

  • Irreducible error: This is due to noise in the data itself. We can't do anything to decrease this kind of error. It is also known as aleatoric error.
  • Reducible error: This is due to the model. We can decrease the error caused by overfitting and underfitting by improving the model. This kind of error is also known as epistemic error.

1.3 Model Complexity and Reducible Error

Reducible error comes from either underfitting or overfitting. Simple, less complex models are more likely to underfit. As the complexity of our models increases, however, we become more likely to overfit.

On the left we see a simple linear model, a low complexity model that is underfit to the data. While the data points roughly form a curved parabola, the simple linear model predicts a straight line through the center of the plot. On the right we see a high degree polynomial model, a very complex model that is overfit to the data. The data points form the same U-shaped curve as before. This time, the high degree polynomial model does predict along the center of the curve, but the line is very jagged, shooting up and down at unexpected points.
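To make this concrete, here is a minimal sketch (my own illustration, not the course's code) that fits a degree-1 and a degree-15 polynomial to the same noisy parabola and compares their training errors:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_squared_error
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1, 1, 30)).reshape(-1, 1)
y = x.ravel() ** 2 + rng.normal(0, 0.1, 30)  # noisy parabola

for degree in (1, 15):
    model = make_pipeline(PolynomialFeatures(degree), LinearRegression())
    model.fit(x, y)
    print(f"degree {degree:2d}: train MSE = {mean_squared_error(y, model.predict(x)):.4f}")

# The degree-1 model underfits (its training error stays high); the
# degree-15 model drives training error near zero but wiggles wildly
# between the observed points, which is the signature of overfitting.
```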

1.4 Bias and Variance

As the model complexity increases, the error on our training set will decrease; this error is tied to the bias. Said another way, the bias decreases as the model complexity increases. A low-bias model will have its predictions centered around the true values on the training set.

Model variance, on the other hand, is the variability among multiple fits of the same model on different training sets. You can think of variance as indicating how sensitive a model is to changes in its training data. More complex models are more sensitive, so variance increases as the model complexity increases.

1.5 Bias vs Variance: Variance of a Simple model

To visualize variance, let's look at a simple model. Each orange line is the same simple linear model, but trained on a different split of the training data. Note that there is not much variation between the prediction lines: even when we fit on 500 different samples, all prediction lines stay close to each other.

1.6 Bias vs Variance: Variance of a Complex model

Now let's do the same thing but using a more complex degree-8 polynomial model. Observe how the predictions now vary wildly between samples used to fit the model. When we plot the predictions of 500 different sample fits, the predictions may generally center around the line of the true function but the variation between predictions creates a mess of lines far above and below the true function line.
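A rough sketch of this experiment (illustrative data and variable names, not the course's exact setup): fit the same degree-8 polynomial on many random 20-point samples and measure how much the predictions spread:

```python
import numpy as np
from sklearn.linear_model import LinearRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import PolynomialFeatures

rng = np.random.default_rng(1)
x_all = rng.uniform(-1, 1, 500)
y_all = x_all ** 2 + rng.normal(0, 0.1, 500)
x_grid = np.linspace(-1, 1, 50).reshape(-1, 1)

preds = []
for _ in range(500):
    idx = rng.choice(500, size=20, replace=False)  # a fresh 20-point sample
    model = make_pipeline(PolynomialFeatures(8), LinearRegression())
    model.fit(x_all[idx].reshape(-1, 1), y_all[idx])
    preds.append(model.predict(x_grid))

# The standard deviation of the predictions across fits is a direct
# measure of model variance; try PolynomialFeatures(1) to see it shrink.
print("mean prediction std across the grid:", np.array(preds).std(axis=0).mean())
```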

You may also notice that you can almost pick out the data points selected for the training sample in the first three plots, because the prediction line passes through those points exactly. This is an indication of the model's low bias.

1.7 Bias vs Variance: the Trade-off

Now we see that the bias decreases as our model complexity increases, but the variance increases. This is the trade-off: we cannot lower one without increasing the other. We want to find the balance between the two that gives the lowest combined error possible.

Based on the image below, the point where the lines intersect would be a good balance between bias and variance.

You can roughly think of bias as how accurate a model is and variance as how precise it is.

Consider the diagram below:

In the case of high bias and low variance, the model makes very similar predictions regardless of which training data is used to fit it. However, these predictions are systematically off: they are wrong, but all wrong in roughly the same way. The model is precise but not accurate.

If we have low bias and high variance, then the model is very complex and so very sensitive to changes in the training data. The predictions across the different fits are correct on average, but there is large variability, or "spread", in the predictions. The model is accurate but not precise.

We ideally want low bias and low variance. This would be a model whose predictions, when fit on different training sets, are both accurate (centered on the true value) and precise (low spread).

If we have high bias and high variance, then the model is neither accurate nor precise: predictions are systematically off, and there is also a lot of spread. If this is the case we have a very poor model indeed, and we would do well to reassess the steps and assumptions of our modeling approach.

1.8 Bias vs Variance

Consider this plot of 2,000 best-fit simple linear regression models, each fitted on a different 20-point training set. Note that there is not much variation among the different fits' predictions.

Now consider this plot of 2,000 best-fit degree-20 polynomial models, each fitted on a different 20-point training set. Note the wild variation among the predictions of the different model fits.

1.9 Bias vs Variance (coefficients)

Let's look at the range of the coefficient values for these different models. For the 2,000 different simple linear regression models, we see that there is some variability, but very little when compared to the polynomial fits.

These are the first 10 coefficient values for the 2,000 degree-20 polynomial fits. Be sure to notice the change in scale on the vertical axis between this plot and the last! The coefficients visualized here vary much more between fits than those of the simple model did, and some of the coefficient values become quite extreme. This means that a small variation in the training data can make a huge change in the resulting model's coefficients, and thus in its predictions.

1.10 Model Selection

Model selection is the application of a principled method to determine the complexity of the model, e.g., choosing a subset of predictors, choosing the degree of the polynomial model, etc.

A strong motivation for performing model selection is to avoid overfitting, which we saw can happen when:

  1. there are too many predictors, because:
    • the feature space has high dimensionality
    • the polynomial degree is too high
    • too many interaction terms are considered
  2. the coefficients' values are too extreme

We've already seen ways to address the problem of choosing predictors and polynomial degree using greedy algorithms and cross-validation. But what about the second potential source of overfitting? How do we discourage extreme coefficient values in the model parameters?

1.11 Regularization

What we want is low model error. We've been using the mean squared error (MSE) as our model's loss function:

$$L_{MSE} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2$$

We also want to discourage extreme parameter values. To do this we can create a second loss that is a function of the magnitudes of the model's parameters; we'll call it $L_{reg}$. We could define it in several ways, for example by summing the squares of the parameters or their absolute values:

$$L_{reg} = \sum_{j=1}^{p}\beta_j^2 \qquad \text{or} \qquad L_{reg} = \sum_{j=1}^{p}\left|\beta_j\right|$$

Note that the summation index starts at 1: the model is not penalized for $\beta_0$, which can be interpreted as the intercept.

Now we can combine these two loss functions into a single loss function for our model using regularization:

$$L = L_{MSE} + \lambda L_{reg}$$

$\lambda$ is the regularization parameter. It controls the relative importance between the model error and the regularization term.

  • $\lambda = 0$: equivalent to the regression model with no regularization.
  • $\lambda \to \infty$: yields a model where all the $\beta_j$s are 0.

But how do we determine which value of $\lambda$ to use? The answer is: with cross-validation! We will try many different values of $\lambda$ and pick the one that gives us the best cross-validation loss score.
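In scikit-learn this search takes only a few lines. A minimal sketch, with synthetic data standing in for your own (note that scikit-learn calls the regularization parameter `alpha` rather than $\lambda$):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import RidgeCV

X, y = make_regression(n_samples=200, n_features=10, noise=10.0, random_state=0)

# Try a logarithmic grid of lambda values and let 5-fold
# cross-validation pick the one with the best average score.
lambdas = np.logspace(-4, 4, 50)
model = RidgeCV(alphas=lambdas, cv=5).fit(X, y)
print("lambda chosen by cross-validation:", model.alpha_)
```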

1.12 Regularization: LASSO Regression

LASSO regression: minimize

$$L_{LASSO} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 + \lambda \sum_{j=1}^{p}\left|\beta_j\right|$$

with respect to the $\beta_j$s.

Note that $\sum_{j=1}^{p}\left|\beta_j\right| = \left\lVert \beta \right\rVert_1$ is the $\ell_1$ norm of the coefficient vector.

There's no need to regularize the bias, $\beta_0$, since it is not connected to the predictors.
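A minimal sketch of LASSO in scikit-learn (synthetic data for illustration; scikit-learn's `Lasso` leaves the intercept unpenalized by default, matching the note above):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=100, n_features=20, n_informative=5,
                       noise=5.0, random_state=0)

lasso = Lasso(alpha=1.0)  # alpha plays the role of lambda
lasso.fit(X, y)

# The L1 penalty drives many coefficients exactly to zero.
print("nonzero coefficients:", (lasso.coef_ != 0).sum(), "of", len(lasso.coef_))
print("intercept (unpenalized):", lasso.intercept_)
```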

1.13 Regularization: Ridge Regression

Ridge regression: minimize

$$L_{Ridge} = \frac{1}{n}\sum_{i=1}^{n}\left(y_i - \hat{y}_i\right)^2 + \lambda \sum_{j=1}^{p}\beta_j^2$$

with respect to the $\beta_j$s.

Note that $\sum_{j=1}^{p}\beta_j^2 = \left\lVert \beta \right\rVert_2^2$ is the squared $\ell_2$ norm of the coefficient vector.

Again, we do not regularize the bias, $\beta_0$.
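The ridge counterpart in scikit-learn (again a sketch on synthetic data):

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge

X, y = make_regression(n_samples=100, n_features=20, noise=5.0, random_state=0)

ridge = Ridge(alpha=10.0)  # alpha plays the role of lambda
ridge.fit(X, y)

# The L2 penalty shrinks coefficients toward zero but, unlike LASSO,
# it almost never makes them exactly zero.
print("max |coefficient|:", abs(ridge.coef_).max())
print("coefficients exactly zero:", (ridge.coef_ == 0).sum())
```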

1.14 Ridge regularization with single validation set vs with cross-validation

To emphasize the usefulness of cross-validation, compare these two plots demonstrating ridge regularization using a single validation set versus using cross-validation. Note how, by averaging over the 5 folds, we get more reliable results than by relying on a single validation split.

2. Ridge and LASSO

2.1 Ridge and LASSO - Computational complexity

Solution to ridge regression:

$$\hat{\beta} = \left(X^T X + \lambda I\right)^{-1} X^T y$$

LASSO, on the other hand, has no conventional analytical solution, as the L1 norm has no derivative at zero. We can, however, use the concept of the subdifferential, or subgradient, to find a manageable expression.
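The ridge closed form is easy to verify numerically. A sketch (fitting no intercept and penalizing all columns so the algebra matches exactly; scikit-learn's `Ridge` minimizes the unnormalized residual sum of squares, so its `alpha` corresponds to the $\lambda$ in this formula):

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
y = X @ np.array([1.0, -2.0, 0.5]) + rng.normal(0, 0.1, 50)
lam = 1.0

# Closed form: beta_hat = (X^T X + lambda * I)^{-1} X^T y
beta = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ y)

# Matches scikit-learn's Ridge when no intercept is fit.
sk = Ridge(alpha=lam, fit_intercept=False).fit(X, y)
print(np.allclose(beta, sk.coef_))  # expect: True
```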

2.2 Ridge Visualized

The ridge estimator is where the constraint and the loss intersect.

The values of the coefficients decrease as lambda increases, but they are not nullified.

2.3 LASSO visualized

The LASSO estimator tends to zero out parameters, as the OLS loss can easily intersect with the constraint on one of the axes.

The values of the coefficients decrease as lambda increases and are nullified fast.
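This zeroing-out behavior is easy to demonstrate. A sketch on synthetic data where only 3 of 20 features actually matter, sweeping lambda upward:

```python
from sklearn.datasets import make_regression
from sklearn.linear_model import Lasso

X, y = make_regression(n_samples=100, n_features=20, n_informative=3,
                       noise=5.0, random_state=0)

for lam in (0.1, 1.0, 10.0, 100.0):
    coef = Lasso(alpha=lam, max_iter=10_000).fit(X, y).coef_
    print(f"lambda = {lam:>6}: {(coef == 0).sum()} of 20 coefficients zeroed")
# Larger lambda -> sparser model: LASSO is doing variable selection.
```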

Variable Selection as Regularization

What are the pros and cons of the two approaches?

Since LASSO regression tends to produce zero estimates for a number of model parameters - we say that LASSO solutions are sparse - we consider LASSO to be a variable selection method.

Ridge is faster to compute, but many people prefer LASSO for variable selection, as well as for suppressing extreme parameter values; the resulting sparse models are easier to interpret.

2.4 Ridge regularization with validation only: step by step

Here we will go through ridge regularization using a single validation set, with MSE as our loss; a code sketch follows the numbered steps below.

For ridge regression there exists an analytical solution for the coefficients:

$$\hat{\beta} = \left(X^T X + \lambda I\right)^{-1} X^T y$$

  1. Split the data into train, validation, and test sets.
  2. Iterate over a range of values for $\lambda$ in $[\lambda_{min}, \lambda_{max}]$:
    • Determine the $\hat{\beta}$ that minimizes $L_{Ridge}$ using the train data,
    • Record the loss for this $\lambda$ on the validation data, $MSE_{val}(\lambda)$.
  3. Select the $\lambda^*$ that minimizes the loss on the validation data.
  4. Refit the model using both train and validation data combined with the selected $\lambda^*$, resulting in $\hat{\beta}^*$.
  5. Report the $MSE$ on the test set, given $\hat{\beta}^*$.
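A sketch of the full procedure (synthetic data and illustrative names; scikit-learn's `Ridge` performs step 2's analytical solve internally):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import train_test_split

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)

# 1. Split into train, validation, and test sets.
X_tmp, X_test, y_tmp, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
X_tr, X_val, y_tr, y_val = train_test_split(X_tmp, y_tmp, test_size=0.25, random_state=0)

# 2. For each lambda, fit on train and record the validation MSE.
lambdas = np.logspace(-4, 4, 30)
val_mse = [mean_squared_error(y_val, Ridge(alpha=l).fit(X_tr, y_tr).predict(X_val))
           for l in lambdas]

# 3. Select the lambda with the lowest validation loss.
best_lam = lambdas[int(np.argmin(val_mse))]

# 4. Refit on train + validation combined with the selected lambda.
final = Ridge(alpha=best_lam).fit(np.vstack([X_tr, X_val]), np.hstack([y_tr, y_val]))

# 5. Report the MSE on the held-out test set.
print("best lambda:", best_lam)
print("test MSE:", mean_squared_error(y_test, final.predict(X_test)))
```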

2.5 LASSO regularization with validation only: step by step

Here we will go through LASSO regularization using a single validation set, with MSE as our loss.

The steps are largely the same as with ridge regression, except that there is no analytical solution for the coefficients in LASSO regression, so we use a numerical solver.

  1. Split the data into train, validation, and test sets.
  2. Iterate over a range of values for $\lambda$ in $[\lambda_{min}, \lambda_{max}]$:
    • Determine the $\hat{\beta}$ that minimizes $L_{LASSO}$ using the train data. This is done using a solver.
    • Record the loss for this $\lambda$ on the validation data, $MSE_{val}(\lambda)$.
  3. Select the $\lambda^*$ that minimizes the loss on the validation data.
  4. Refit the model using both train and validation data combined with the selected $\lambda^*$, resulting in $\hat{\beta}^*$.
  5. Report the $MSE$ on the test set, given $\hat{\beta}^*$.
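In code, the single-validation sketch above carries over almost verbatim: swap `Ridge(alpha=l)` for `Lasso(alpha=l)`, and scikit-learn runs its coordinate-descent solver under the hood instead of the closed-form solve.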

2.6 Ridge regularization with CV: step by step

Lastly, let us go through ridge regularization using cross-validation, with MSE as our loss; a code sketch follows the steps below.

  1. Split the data into train and test sets.
  2. Split the train data into K folds.
  3. Iterate over these K folds, for k in $\{1, \dots, K\}$:
  4. Within each fold, iterate over a range of values for $\lambda$ in $[\lambda_{min}, \lambda_{max}]$:
    • Determine the $\hat{\beta}$ that minimizes $L_{Ridge}$ using the train data of the fold,
    • Record $MSE_{val}(k, \lambda)$ using the validation data of the fold.

At this point we have a 2-D matrix of validation losses: the rows are for the different folds k, and the columns are for the different $\lambda$ values.

  5. Average the $MSE_{val}(k, \lambda)$ over the folds for each $\lambda$.
  6. Find the $\lambda^*$ that minimizes this average cross-validation loss.
  7. Refit the model using the full training data and $\lambda^*$, resulting in $\hat{\beta}^*$.
  8. Report the $MSE$ on the test set, given $\hat{\beta}^*$.
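A sketch of the cross-validated version (synthetic data; `cross_val_score` handles the K-fold loop and the per-fold refits for us):

```python
import numpy as np
from sklearn.datasets import make_regression
from sklearn.linear_model import Ridge
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score, train_test_split

X, y = make_regression(n_samples=300, n_features=10, noise=10.0, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2,
                                                    random_state=0)

# For each lambda, average the validation MSE over K = 5 folds.
lambdas = np.logspace(-4, 4, 30)
cv_mse = [-cross_val_score(Ridge(alpha=l), X_train, y_train, cv=5,
                           scoring="neg_mean_squared_error").mean()
          for l in lambdas]

# Pick the lambda with the best average CV loss, refit on all training data.
best_lam = lambdas[int(np.argmin(cv_mse))]
final = Ridge(alpha=best_lam).fit(X_train, y_train)
print("best lambda:", best_lam)
print("test MSE:", mean_squared_error(y_test, final.predict(X_test)))
```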