Table of Contents
[Feature Selection](#feature-selection)
[Filter Methods](#filter-methods)
[Wrapper Methods](#wrapper-methods)
[Embedded Methods](#embedded-methods)
[Dimensionality Reduction](#dimensionality-reduction)
[Automatic Feature Generation](#automatic-feature-generation)
[Feature Interaction](#feature-interaction)
[Feature Encoding](#feature-encoding)
[One-Hot Encoding](#one-hot-encoding)
[Label Encoding](#label-encoding)
[Frequency/Target Encoding](#frequencytarget-encoding)
[Frequency Encoding](#frequency-encoding)
[Target Encoding](#target-encoding)
[Time Feature Extraction](#time-feature-extraction)
[Text Feature Extraction](#text-feature-extraction)
[Bag of Words](#bag-of-words)
[TF-IDF](#tf-idf)
Feature Selection
Feature selection is a key step for reducing the number of features and improving model performance. Common approaches include filter, wrapper, and embedded methods.
Filter Methods
Filter methods screen features using statistical metrics, independently of any model.
Variance Threshold
Keep the features whose variance exceeds a given threshold.
python
## Feature selection
# Filter methods
# 1. Variance threshold
import numpy as np
from sklearn.feature_selection import VarianceThreshold
data = np.array([[0, 2, 0, 3], [0, 1, 4, 3], [0, 1, 1, 3]])
selector = VarianceThreshold(threshold=0.5)  # drop features with variance <= 0.5
selected_data = selector.fit_transform(data)
print(selected_data)  # only the third column has variance above the threshold
# [[0]
#  [4]
#  [1]]
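To see which columns were kept, the fitted selector exposes the per-feature variances and a boolean support mask; a minimal sketch continuing from the snippet above:
python
# Continuing from the example above: inspect what the selector learned
print(selector.variances_)     # per-feature variances, ~[0. 0.222 2.889 0.]
print(selector.get_support())  # mask of kept features: [False False  True False]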
Correlation Coefficient Method
Keep the features most strongly correlated with the target variable.
python
# 2. Correlation coefficient method
import numpy as np
from sklearn.feature_selection import SelectKBest
from scipy.stats import pearsonr
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([1, 0, 1])
# Define a score_func that computes the Pearson correlation between each feature and y
def pearsonr_score(X, y):
    scores = []
    for feature in X.T:  # iterate over the features (columns)
        corr, _ = pearsonr(feature, y)  # Pearson correlation coefficient
        scores.append(abs(corr))  # use the absolute value as the score
    return np.array(scores)
# Use SelectKBest to keep the 2 features with the highest scores
selector = SelectKBest(score_func=pearsonr_score, k=2)
selected_data = selector.fit_transform(X, y)
print(selected_data)
# [[2 3]
#  [5 6]
#  [8 9]]
# (In this toy data every feature happens to have zero correlation with y, so the
#  scores tie and the last two columns are kept.)
Chi-Square Test
Keep the features with the strongest chi-square dependence on the target (the feature values must be non-negative).
python
# Chi-square test
import numpy as np
from sklearn.feature_selection import SelectKBest, chi2
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([1, 0, 1])
selector = SelectKBest(chi2, k=2)
selected_data = selector.fit_transform(X, y)
print(selected_data)
# [[2 3]
# [5 6]
# [8 9]]
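After fitting, SelectKBest stores the raw chi-square statistics and the p-values, which is useful when deciding how large k should be; a short sketch continuing from the example above:
python
# Continuing from the chi-square example: inspect the statistics
print(selector.scores_)   # chi-square statistic per feature
print(selector.pvalues_)  # corresponding p-values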
Wrapper Methods
Wrapper methods evaluate candidate feature subsets by training a model on them.
Recursive Feature Elimination (RFE)
Recursively remove the least important features.
python
# Wrapper methods
# Recursive Feature Elimination (RFE)
import numpy as np
from sklearn.feature_selection import RFE
from sklearn.linear_model import LogisticRegression
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([1, 0, 1])
model = LogisticRegression()
selector = RFE(model, n_features_to_select=2)
selected_data = selector.fit_transform(X, y)
print(selected_data)
# [[1 3]
# [4 6]
# [7 9]]
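The fitted RFE object records which features survived and the order in which the rest were eliminated; a minimal sketch continuing from the example above:
python
# Continuing from the RFE example above
print(selector.support_)  # boolean mask of the selected features
print(selector.ranking_)  # rank 1 = selected; larger ranks were eliminated earlier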
Embedded Methods
Embedded methods perform feature selection automatically as part of model training.
L1 Regularization (Lasso Regression)
L1 regularization shrinks the weights of unimportant features to exactly 0.
python
# Embedded methods
# 1. L1 regularization (Lasso regression)
import numpy as np
from sklearn.linear_model import Lasso
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([1, 0, 1])
lasso = Lasso(alpha=0.1)
lasso.fit(X, y)
print(lasso.coef_)  # on this tiny dataset every coefficient is shrunk to 0
# [0. 0. 0.]
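In practice the non-zero coefficients mark the selected features, and scikit-learn's SelectFromModel wraps this pattern; a minimal sketch (the threshold here is illustrative):
python
# Keep only the features whose Lasso coefficient exceeds the threshold
import numpy as np
from sklearn.feature_selection import SelectFromModel
from sklearn.linear_model import Lasso
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([1, 0, 1])
selector = SelectFromModel(Lasso(alpha=0.1), threshold=1e-5)
selector.fit(X, y)
print(selector.get_support())  # all False here, since every coefficient is 0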
Tree-Based Feature Importance
Use a tree model (such as a random forest) to score feature importance.
python
# 2. Tree-based feature importance
import numpy as np
from sklearn.ensemble import RandomForestClassifier
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
y = np.array([1, 0, 1])
model = RandomForestClassifier()
model.fit(X, y)
print(model.feature_importances_)  # values vary between runs unless random_state is fixed
# [0.37857143 0.30357143 0.31785714]
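Ranking is easier when the scores are paired with column names; a small sketch with hypothetical feature names, continuing from the fitted model above:
python
# Rank features by importance (the feature names here are hypothetical)
feature_names = ['f1', 'f2', 'f3']
ranked = sorted(zip(feature_names, model.feature_importances_),
                key=lambda pair: pair[1], reverse=True)
print(ranked)  # e.g. [('f1', 0.379), ('f3', 0.318), ('f2', 0.304)]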
Dimensionality Reduction
Dimensionality reduction projects high-dimensional features into a lower-dimensional space. Common methods include PCA and LDA.
Principal Component Analysis (PCA)
Linearly project the data onto the directions of maximum variance.
python
## Dimensionality reduction
# 1. Principal Component Analysis (PCA)
import numpy as np
from sklearn.decomposition import PCA
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9]])
pca = PCA(n_components=2)
reduced_data = pca.fit_transform(X)
print(reduced_data)
# [[-5.19615242e+00 3.62353582e-16]
# [ 0.00000000e+00 0.00000000e+00]
# [ 5.19615242e+00 3.62353582e-16]]
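The fitted PCA reports how much of the variance each component captures, which is the usual way to choose n_components; continuing from the example above:
python
# Continuing from the PCA example: variance captured per component
print(pca.explained_variance_ratio_)
# first value is ~1.0: the three toy points lie on a line, so one component suffices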
Linear Discriminant Analysis (LDA)
Linearly project the data onto the directions that best separate the classes.
python
# 2. Linear Discriminant Analysis (LDA)
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
X = np.array([[1, 2, 3], [4, 5, 6], [7, 8, 9], [10, 11, 12]])
y = np.array([1, 0, 1, 2])
lda = LinearDiscriminantAnalysis(n_components=2)  # note: n_components cannot exceed n_classes - 1
reduced_data = lda.fit_transform(X, y)
print(reduced_data)
# [[ 1.06066017]
#  [ 0.35355339]
#  [-0.35355339]
#  [-1.06066017]]
# (Only one column comes back even though n_components=2: these collinear toy
#  samples span a single direction, so only one discriminant axis exists.)
Automatic Feature Generation
Automatic feature generation uses tools or algorithms to create new features.
Generating Features with Featuretools
Automatically build aggregation and transformation features from relational data.
python
## Automatic feature generation
# 1. Generate features with Featuretools
import featuretools as ft
import pandas as pd
# Create the data
data = pd.DataFrame({
    'transaction_id': range(1, 6),  # add a transaction_id column
    'customer_id': [1, 2, 1, 2, 3],
    'amount': [100, 150, 200, 300, 500],
    'timestamp': pd.date_range('2025-01-01', periods=5)
})
# Create an EntitySet
es = ft.EntitySet(id='transactions')
# Add the transactions dataframe
es.add_dataframe(
    dataframe=data,
    dataframe_name='transactions',
    index='transaction_id',  # index column
    time_index='timestamp'   # time index column
)
# Define the customers entity
customers = pd.DataFrame({'customer_id': [1, 2, 3]})
# Add the customers dataframe
es.add_dataframe(
    dataframe=customers,
    dataframe_name='customers',
    index='customer_id'  # index column
)
# Establish the relationship
es.add_relationship(
    parent_dataframe_name='customers',
    parent_column_name='customer_id',
    child_dataframe_name='transactions',
    child_column_name='customer_id'
)
# Run automatic feature generation (deep feature synthesis)
feature_matrix, feature_defs = ft.dfs(
    entityset=es,
    target_dataframe_name='customers',
    agg_primitives=['mean', 'max', 'std'],
    trans_primitives=['day', 'month']
)
print(feature_matrix)
print(feature_matrix)
# MAX(transactions.amount) ... STD(transactions.amount)
# customer_id ...
# 1 200.0 ... 70.710678
# 2 300.0 ... 106.066017
# 3 500.0 ... NaN
#
# [3 rows x 3 columns]
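ft.dfs also returns the feature definitions, which is handy for seeing exactly what was generated; a short sketch continuing from the example above:
python
# Continuing from the example: list the generated feature definitions
for f in feature_defs:
    print(f.get_name())  # e.g. MEAN(transactions.amount), MAX(transactions.amount), ...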
Feature Interaction
Feature interaction combines existing features to surface new information. Common approaches include polynomial features and manually combined features.
Polynomial Features
Generate polynomial combinations of the features.
python
## Feature interaction
# 1. Polynomial features
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
X = np.array([[1, 2], [3, 4], [5, 6]])
poly = PolynomialFeatures(degree=2, include_bias=False)
poly_features = poly.fit_transform(X)
print(poly_features)
# [[ 1. 2. 1. 2. 4.]
# [ 3. 4. 9. 12. 16.]
# [ 5. 6. 25. 30. 36.]]
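The transformer can report which combination each output column represents; continuing from the example above (the input feature names are illustrative):
python
# Continuing from the example: name each generated column
print(poly.get_feature_names_out(['x1', 'x2']))
# ['x1' 'x2' 'x1^2' 'x1 x2' 'x2^2']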
Manually Combined Features
Combine features by hand based on domain knowledge.
python
# 2. Manually combined features
import pandas as pd
data = pd.DataFrame({'A': [1, 2, 3], 'B': [4, 5, 6]})
data['A_plus_B'] = data['A'] + data['B']
data['A_times_B'] = data['A'] * data['B']
print(data)
# A B A_plus_B A_times_B
# 0 1 4 5 4
# 1 2 5 7 10
# 2 3 6 9 18
Feature Encoding
Feature encoding converts non-numeric features into numeric ones. Common methods include one-hot encoding, label encoding, and frequency/target encoding.
One-Hot Encoding
Convert a categorical feature into one-hot vectors.
python
## Feature encoding
# 1. One-Hot Encoding
import numpy as np
from sklearn.preprocessing import OneHotEncoder
data = np.array([['cat'], ['dog'], ['bird']])
encoder = OneHotEncoder(sparse_output=False)  # 'sparse' was renamed to 'sparse_output' in scikit-learn 1.2
encoded_data = encoder.fit_transform(data)
print(encoded_data)
# [[0. 1. 0.]
#  [0. 0. 1.]
#  [1. 0. 0.]]
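To recover which category each column stands for, the fitted encoder can name its outputs; continuing from the example above (the input feature name is illustrative):
python
# Continuing from the example: map one-hot columns back to categories
print(encoder.get_feature_names_out(['animal']))
# ['animal_bird' 'animal_cat' 'animal_dog']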
Label Encoding
Convert a categorical feature into integer labels.
python
# 2. Label Encoding
import numpy as np
from sklearn.preprocessing import LabelEncoder
data = np.array(['cat', 'dog', 'bird'])
encoder = LabelEncoder()
encoded_data = encoder.fit_transform(data)
print(encoded_data)  # classes are sorted alphabetically: bird=0, cat=1, dog=2
# [1 2 0]
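LabelEncoder is invertible, which matters when decoding model predictions back to categories; continuing from the example above:
python
# Continuing from the example: decode integer labels back to categories
print(encoder.inverse_transform([0, 1, 2]))  # ['bird' 'cat' 'dog']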
Frequency/Target Encoding
For categorical features with many distinct values, frequency- or target-based encodings are often a better fit.
Frequency Encoding
Replace each category with the frequency at which it appears in the dataset.
python
# 3. Frequency/target encoding
# 3.1 Frequency encoding
import pandas as pd
# Example data
data = pd.DataFrame({'category': ['A', 'B', 'A', 'C', 'B', 'A', 'C', 'C', 'B']})
# Compute the frequency of each category
frequency_map = data['category'].value_counts(normalize=True).to_dict()
# Replace each category with its frequency
data['frequency_encoded'] = data['category'].map(frequency_map)
print(data)  # A, B and C each appear 3 times out of 9, so every frequency is 1/3
# category frequency_encoded
# 0 A 0.333333
# 1 B 0.333333
# 2 A 0.333333
# 3 C 0.333333
# 4 B 0.333333
# 5 A 0.333333
# 6 C 0.333333
# 7 C 0.333333
# 8 B 0.333333
Target Encoding
Replace each category with a statistic of the target variable (such as its mean or median).
python
# 3.2 Target encoding
import pandas as pd
# Example data
data = pd.DataFrame({
    'category': ['A', 'B', 'A', 'C', 'B', 'A', 'C', 'C', 'B'],
    'target': [1, 0, 1, 1, 0, 1, 0, 1, 0]
})
# Compute the mean of the target for each category
target_mean = data.groupby('category')['target'].mean().to_dict()
# Replace each category with its target mean
data['target_encoded'] = data['category'].map(target_mean)
print(data)
# category target target_encoded
# 0 A 1 1.000000
# 1 B 0 0.000000
# 2 A 1 1.000000
# 3 C 1 0.666667
# 4 B 0 0.000000
# 5 A 1 1.000000
# 6 C 0 0.666667
# 7 C 1 0.666667
# 8 B 0 0.000000
Target Encoding with Smoothing
To reduce overfitting, blend each category's target mean with the global target mean, using a smoothing parameter to control the weights.
python
# Target encoding with smoothing
import pandas as pd
# Example data
data = pd.DataFrame({
    'category': ['A', 'B', 'A', 'C', 'B', 'A', 'C', 'C', 'B'],
    'target': [1, 0, 1, 1, 0, 1, 0, 1, 0]
})
# Global target mean
global_mean = data['target'].mean()
# Target mean per category
target_mean = data.groupby('category')['target'].mean()
# Number of occurrences per category
category_count = data['category'].value_counts()
# Smoothing parameter
alpha = 10
# Smoothed target mean: a count-weighted blend of the category mean and the global mean
smoothed_target_mean = (target_mean * category_count + global_mean * alpha) / (category_count + alpha)
# Replace each category with its smoothed target mean
data['smoothed_target_encoded'] = data['category'].map(smoothed_target_mean)
print(data)
# category target smoothed_target_encoded
# 0 A 1 0.658120
# 1 B 0 0.427350
# 2 A 1 0.658120
# 3 C 1 0.581197
# 4 B 0 0.427350
# 5 A 1 0.658120
# 6 C 0 0.581197
# 7 C 1 0.581197
# 8 B 0 0.427350
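Even with smoothing, fitting the encoding on the same rows it then transforms leaks the target into the feature; a common remedy is to compute the encoding out-of-fold, so each row is encoded using only statistics from the other folds. A minimal sketch using KFold (the fold count and column name are illustrative):
python
# Out-of-fold target encoding: encode each row using only the other folds
import pandas as pd
from sklearn.model_selection import KFold
data = pd.DataFrame({
    'category': ['A', 'B', 'A', 'C', 'B', 'A', 'C', 'C', 'B'],
    'target': [1, 0, 1, 1, 0, 1, 0, 1, 0]
})
global_mean = data['target'].mean()
data['oof_target_encoded'] = global_mean  # fallback for categories unseen in a fold
kf = KFold(n_splits=3, shuffle=True, random_state=0)
for train_idx, val_idx in kf.split(data):
    # Category means computed on the training folds only
    fold_means = data.iloc[train_idx].groupby('category')['target'].mean()
    data.loc[data.index[val_idx], 'oof_target_encoded'] = (
        data.iloc[val_idx]['category'].map(fold_means).fillna(global_mean).values
    )
print(data)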
Time Feature Extraction
Time feature extraction pulls useful fields out of timestamps, such as the year, month, day, and hour.
python
## Time feature extraction
import pandas as pd
data = pd.DataFrame({'timestamp': pd.date_range('2025-01-01', periods=5)})
data['year'] = data['timestamp'].dt.year
data['month'] = data['timestamp'].dt.month
data['day'] = data['timestamp'].dt.day
data['hour'] = data['timestamp'].dt.hour
print(data)
# timestamp year month day hour
# 0 2025-01-01 2025 1 1 0
# 1 2025-01-02 2025 1 2 0
# 2 2025-01-03 2025 1 3 0
# 3 2025-01-04 2025 1 4 0
# 4 2025-01-05 2025 1 5 0
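Beyond plain calendar fields, pandas also exposes weekday information, which is often more predictive than the raw date; a short sketch continuing from the example above:
python
# Continuing from the example: weekday-based features
data['dayofweek'] = data['timestamp'].dt.dayofweek  # Monday=0 ... Sunday=6
data['is_weekend'] = (data['dayofweek'] >= 5).astype(int)
print(data[['timestamp', 'dayofweek', 'is_weekend']])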
Text Feature Extraction
Text feature extraction derives useful information from text data. Common methods include the bag-of-words model and TF-IDF.
Bag of Words
Convert text into word-count vectors.
python
## Text feature extraction
# 1. Bag of Words
from sklearn.feature_extraction.text import CountVectorizer
corpus = ['This is the first document.', 'This document is the second document.', 'And this is the third one.', 'Is this the first document?']
vectorizer = CountVectorizer()
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names_out()) # ['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']
print(X.toarray())
# [[0 1 1 1 0 0 1 0 1]
# [0 2 0 1 0 1 1 0 1]
# [1 0 0 1 1 0 1 1 1]
# [0 1 1 1 0 0 1 0 1]]
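Bag-of-words discards word order; counting n-grams recovers some local context. A small sketch using CountVectorizer's ngram_range parameter:
python
# Count unigrams and bigrams instead of single words only
from sklearn.feature_extraction.text import CountVectorizer
corpus = ['This is the first document.']
vectorizer = CountVectorizer(ngram_range=(1, 2))  # unigrams + bigrams
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names_out())
# ['document' 'first' 'first document' 'is' 'is the' 'the' 'the first' 'this' 'this is']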
TF-IDF
Convert text into TF-IDF weight vectors.
python
# 2. TF-IDF
from sklearn.feature_extraction.text import TfidfVectorizer
corpus = ['This is the first document.', 'This document is the second document.', 'And this is the third one.', 'Is this the first document?']
vectorizer = TfidfVectorizer()
X = vectorizer.fit_transform(corpus)
print(vectorizer.get_feature_names_out()) # ['and' 'document' 'first' 'is' 'one' 'second' 'the' 'third' 'this']
print(X.toarray())
# [[0. 0.46979139 0.58028582 0.38408524 0. 0.
# 0.38408524 0. 0.38408524]
# [0. 0.6876236 0. 0.28108867 0. 0.53864762
# 0.28108867 0. 0.28108867]
# [0.51184851 0. 0. 0.26710379 0.51184851 0.
# 0.26710379 0.51184851 0.26710379]
# [0. 0.46979139 0.58028582 0.38408524 0. 0.
# 0.38408524 0. 0.38408524]]
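With scikit-learn's defaults (smooth_idf=True followed by L2 normalization), the inverse document frequency of a term t across n documents is idf(t) = ln((1 + n) / (1 + df(t))) + 1, and each row vector is then scaled to unit length. A short check that reproduces the first row above by hand:
python
# Reproduce the TF-IDF weights of "This is the first document." by hand
import numpy as np
n = 4  # number of documents in the corpus
idf = lambda df: np.log((1 + n) / (1 + df)) + 1  # smooth_idf=True formula
raw = np.array([1 * idf(3),   # 'document' occurs in 3 documents
                1 * idf(2),   # 'first' occurs in 2 documents
                1 * idf(4),   # 'is' occurs in all 4
                1 * idf(4),   # 'the' occurs in all 4
                1 * idf(4)])  # 'this' occurs in all 4
print(raw / np.linalg.norm(raw))  # ~[0.4698 0.5803 0.3841 0.3841 0.3841]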