I. Parameter Selection
In logistic regression modeling, overfitting is a pitfall you cannot avoid: when a model performs perfectly on the training data but falls apart on new data, the model's complexity has most likely exceeded what the data can support. The penalty factor (also called the regularization parameter) is our core tool for curbing overfitting and balancing goodness of fit against generalization.
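As a quick illustration of what the penalty does: in scikit-learn the penalty factor is exposed as the inverse regularization strength C, and under an L2 penalty the learned coefficient norm shrinks as C decreases. A minimal sketch on synthetic data (the dataset and C values here are arbitrary, for demonstration only):

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Toy data just to show the effect of C on model complexity.
X, y = make_classification(n_samples=500, n_features=20, random_state=0)
for C in [0.01, 1, 100]:
    lr = LogisticRegression(C=C, penalty='l2', solver='lbfgs', max_iter=1000).fit(X, y)
    # Smaller C -> stronger L2 penalty -> smaller coefficient norm, i.e. a simpler model.
    print(C, np.linalg.norm(lr.coef_))
```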
1. Select the optimal penalty factor through the C parameter of LogisticRegression(C=i, penalty='l2', solver='lbfgs', max_iter=1000), scoring each candidate with K-fold cross-validation via cross_val_score(lr, x_train_w, y_train_w, cv=8, scoring='recall').
```python
# Assumes the training/test splits x_train_w, y_train_w, x_test_w, y_test_w
# are already prepared, and that cm_plot is a custom confusion-matrix
# plotting helper defined elsewhere in the project.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn import metrics

scores = []                        # validation score for each candidate C
c_range = [0.01, 0.1, 1, 10, 100]
for i in c_range:
    lr = LogisticRegression(C=i, penalty='l2', solver='lbfgs', max_iter=1000)
    score = cross_val_score(lr, x_train_w, y_train_w, cv=8, scoring='recall')
    score_m = sum(score) / len(score)   # mean recall over the 8 folds
    scores.append(score_m)
    print(score_m)
best_c = c_range[np.argmax(scores)]
print("best penalty factor:", best_c)

# Refit with the best C, then report precision/recall on train and test sets.
lr = LogisticRegression(C=best_c, penalty='l2', solver='lbfgs', max_iter=1000)
lr.fit(x_train_w, y_train_w)
train_predict = lr.predict(x_train_w)
print(metrics.classification_report(y_train_w, train_predict))  # per-class precision/recall
cm_plot(y_train_w, train_predict).show()
test_predict = lr.predict(x_test_w)
print(metrics.classification_report(y_test_w, test_predict, digits=6))
cm_plot(y_test_w, test_predict).show()
```
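To see how recall varies with the penalty factor, it can help to plot the cross-validation scores collected above. A minimal sketch, assuming matplotlib is available and reusing scores and c_range from the loop:

```python
import matplotlib.pyplot as plt

# Mean cross-validated recall for each candidate C; a log x-axis suits
# candidates spanning several orders of magnitude.
plt.plot(c_range, scores, marker='o')
plt.xscale('log')
plt.xlabel('C (inverse regularization strength)')
plt.ylabel('mean CV recall')
plt.show()
```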
II. Undersampling
The core of undersampling is trimming the majority class: the class counts are balanced by reducing the number of samples in the larger class.
The case uses this code:

```python
x_train_w = train_data[train_data['Class'] == 1]   # minority class (fraud) rows
y_train_w = train_data[train_data['Class'] == 0]   # majority class (normal) rows
y_train_w = y_train_w.sample(len(x_train_w))       # downsample the majority class
```

Here sample draws len(x_train_w) rows at random from the majority-class frame y_train_w, so both classes end up with the same number of samples. (Despite the x/y names, each frame holds full rows of one class, not features and labels.)
```python
# Full pipeline: scale Amount, drop Time, split, undersample, then tune C.
import numpy as np
import pandas as pd
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split, cross_val_score
from sklearn.linear_model import LogisticRegression

data = pd.read_csv("creditcard.csv")
scaler = StandardScaler()
data['Amount'] = scaler.fit_transform(data[['Amount']])  # standardize the Amount column
data = data.drop(['Time'], axis=1)                       # axis=1 means drop a column
x = data.drop('Class', axis=1)
y = data.Class
x_train, x_test, y_train, y_test = train_test_split(x, y, test_size=0.2, random_state=0)

# Rebuild a labeled training frame (copy() keeps x_train itself unchanged),
# then undersample the majority class.
train_data = x_train.copy()
train_data['Class'] = y_train
x_train_w = train_data[train_data['Class'] == 1]   # minority class (fraud)
y_train_w = train_data[train_data['Class'] == 0]   # majority class (normal)
y_train_w = y_train_w.sample(len(x_train_w))       # downsample to the minority count
data_c = pd.concat([x_train_w, y_train_w])         # balanced training frame
x_train_w_1 = data_c.drop('Class', axis=1)
y_train_w_1 = data_c.Class

# Tune C on the balanced set with 10-fold cross-validated recall.
scores = []
c_range = [0.01, 0.1, 1, 10, 100]
for i in c_range:
    lr = LogisticRegression(C=i, penalty='l2', solver='lbfgs', max_iter=1000)
    score = cross_val_score(lr, x_train_w_1, y_train_w_1, cv=10, scoring='recall')
    score_m = sum(score) / len(score)
    scores.append(score_m)
    print(score_m)
best_c = c_range[np.argmax(scores)]
print("best penalty factor:", best_c)
lr = LogisticRegression(C=best_c, penalty='l2', solver='lbfgs', max_iter=1000)
lr.fit(x_train_w_1, y_train_w_1)
```
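The snippet above stops at fitting. In practice you would then evaluate on the untouched, still-imbalanced test split, mirroring Section I; a minimal sketch reusing x_test and y_test from the split above:

```python
from sklearn import metrics

# Evaluate on the original (imbalanced) test set: the undersampled training
# frame no longer reflects the real class distribution, but the test set does.
test_predict = lr.predict(x_test)
print(metrics.classification_report(y_test, test_predict, digits=6))
```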
III. Oversampling
The core of oversampling is expanding the minority class. We can use SMOTE (Synthetic Minority Over-sampling Technique): in the feature space of the minority class, it finds each sample's k nearest neighbors and generates new minority samples by interpolation (e.g., for a sample A and a neighbor B, new sample = A + rand(0,1) * (B - A)).
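To make the interpolation concrete, here is a toy NumPy sketch of that arithmetic (not the library's implementation; A and B are made-up points standing in for a minority sample and one of its neighbors):

```python
import numpy as np

rng = np.random.default_rng(0)
A = np.array([1.0, 2.0])        # a minority-class sample
B = np.array([3.0, 6.0])        # one of A's k nearest minority neighbors
new_sample = A + rng.random() * (B - A)  # a point on the segment between A and B
print(new_sample)
```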
```python
from imblearn.over_sampling import SMOTE

oversampler = SMOTE(random_state=100)  # fixed seed so the synthetic samples are reproducible
# Synthesize new minority samples until both classes are the same size.
os_x_train, os_y_train = oversampler.fit_resample(x_train, y_train)
```
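After resampling, it is worth confirming that the classes are balanced, then fitting on the enlarged training set; a minimal sketch reusing best_c, x_test, and y_test from the earlier blocks:

```python
import pandas as pd
from sklearn import metrics
from sklearn.linear_model import LogisticRegression

# Both classes should now match the original majority-class count.
print(pd.Series(os_y_train).value_counts())

# Fit on the oversampled training set and score on the untouched test set.
lr = LogisticRegression(C=best_c, penalty='l2', solver='lbfgs', max_iter=1000)
lr.fit(os_x_train, os_y_train)
print(metrics.classification_report(y_test, lr.predict(x_test), digits=6))
```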