MinMaxScaler formula
X_scaled = (X - X_min) / (X_max - X_min)
Take a single column with three values (1, 2, 3) as an example:
X_min = min(1, 2, 3) = 1
X_max = max(1, 2, 3) = 3
range = X_max - X_min = 3 - 1 = 2
The three values after normalization:
X_scaled_1 = (1 - 1) / (3 - 1) = 0 / 2 = 0.0
X_scaled_2 = (2 - 1) / (3 - 1) = 1 / 2 = 0.5
X_scaled_3 = (3 - 1) / (3 - 1) = 2 / 2 = 1.0
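As a quick sanity check, the same arithmetic can be reproduced in a few lines of plain Python (a minimal sketch of the hand computation above):
python
# Min-max scale the example column (1, 2, 3) by hand
x = [1, 2, 3]
x_min, x_max = min(x), max(x)
print([(v - x_min) / (x_max - x_min) for v in x])  # [0.0, 0.5, 1.0]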
Sparse vectors in PySpark
Sparse vector notation: (size, indices, values)
This format consists of three parts:
- size: the total length of the vector
- indices: an array of the indices of the non-zero elements
- values: an array of the actual values at those indices
For example, the sparse vector (3,[0,2],[1.0,3.0])
represents:
- a vector of length 3
- non-zero values at indices 0 and 2
- those non-zero values are 1.0 and 3.0
- the full dense representation is [1.0, 0.0, 3.0]
Note: there are many ways to represent sparse vectors; this is Spark's own format. Other frameworks may use different representations.
Empty sparse vector (3,[],[])
(3,[],[]) represents an all-zero sparse vector:
- length 3
- no non-zero elements (indices is empty)
- no values (values is empty)
- the full dense representation is [0.0, 0.0, 0.0]
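Both representations can be verified interactively; a minimal sketch (assuming a local PySpark installation):
python
from pyspark.ml.linalg import Vectors

# Length-3 sparse vector with non-zeros at indices 0 and 2
sv = Vectors.sparse(3, [0, 2], [1.0, 3.0])
print(sv)            # (3,[0,2],[1.0,3.0])
print(sv.toArray())  # [1. 0. 3.]

# All-zero sparse vector: empty indices and values arrays
print(Vectors.sparse(3, [], []).toArray())  # [0. 0. 0.]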
Advantages of sparse vectors
In large-scale machine learning, the sparse vector format has significant advantages:
- Storage efficiency: only non-zero elements are stored, saving substantial memory (see the sketch after this list)
- Computational efficiency: computations touch only the non-zero elements, speeding up processing
- Distributed-environment friendly: compact to transfer and process in distributed systems
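As a rough illustration of the storage point (a sketch, not a benchmark), compare how much each representation actually stores for a mostly-zero vector:
python
from pyspark.ml.linalg import Vectors

n = 10000
# Dense: materializes all 10,000 doubles even though only one is non-zero
dense = Vectors.dense([0.0] * (n - 1) + [1.0])
# Sparse: stores just one index and one value, plus the declared size
sparse = Vectors.sparse(n, [n - 1], [1.0])
print(len(dense.toArray()))  # 10000 stored values
print(len(sparse.values))    # 1 stored value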
Converting a sparse vector to a dense vector
python
from pyspark.ml.functions import vector_to_array, array_to_vector
from pyspark.ml.linalg import Vectors, SparseVector
from pyspark.sql import SparkSession
# Create the SparkSession
spark = SparkSession.builder \
    .appName("SparseVectorExample") \
    .getOrCreate()
# Create a DataFrame containing a sparse vector
# Note: this must be a real SparseVector object, not a plain Python list
df = spark.createDataFrame([
    (SparseVector(3, [], []),)  # an all-zero sparse vector
], ["features"])  # the column is named "features"
# Convert the sparse vector to a dense array, then back to a (dense) vector
dense_df = (
    df.withColumn("dense_features_array", vector_to_array("features"))
      .withColumn("dense_features_vector", array_to_vector("dense_features_array"))
)
# Show the result
dense_df.show(truncate=False)
print(dense_df.dtypes)
Output:
(features column: the original sparse representation; the dense_* columns: dense representations)
+---------+--------------------+---------------------+
|features |dense_features_array|dense_features_vector|
+---------+--------------------+---------------------+
|(3,[],[])|[0.0, 0.0, 0.0] |[0.0,0.0,0.0] |
+---------+--------------------+---------------------+
Column types:
[('features', 'vector'), ('dense_features_array', 'array<double>'), ('dense_features_vector', 'vector')]
Pipeline: sparse vector (features) -> array (dense_features_array) -> dense vector (dense_features_vector)
scikit-learn normalization + manual normalization implementation
python
from sklearn.preprocessing import MinMaxScaler
import numpy as np
data = np.array([
    [1, 0.1, -1],
    [2, 1.1, 1],
    [3, 10.1, 3]
])
# Manual normalization
feature_range = [0, 1]  # target interval to map onto
print('Column minima', data.min(axis=0))
print('Column maxima', data.max(axis=0))
x_std = (data - data.min(axis=0)) / (data.max(axis=0) - data.min(axis=0))
x_scaled = x_std * (feature_range[1] - feature_range[0]) + feature_range[0]
print('Manual normalization result:\n{}'.format(x_scaled))
# Automatic normalization with scikit-learn
scaler = MinMaxScaler()
print('Automatic normalization result:\n{}'.format(scaler.fit_transform(data)))
"""
自动归一化结果:
[[0. 0. 0. ]
[0.5 0.1 0.5]
[1. 1. 1. ]]
"""
Output:
Column minima [ 1. 0.1 -1. ]
Column maxima [ 3. 10.1 3. ]
Manual normalization result:
[[0. 0. 0. ]
[0.5 0.1 0.5]
[1. 1. 1. ]]
Automatic normalization result:
[[0. 0. 0. ]
[0.5 0.1 0.5]
[1. 1. 1. ]]
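The manual step x_std * (feature_range[1] - feature_range[0]) + feature_range[0] is exactly what scikit-learn's feature_range parameter generalizes. A minimal sketch mapping the same data onto [0, 5] instead of the default [0, 1]:
python
from sklearn.preprocessing import MinMaxScaler
import numpy as np

data = np.array([[1, 0.1, -1], [2, 1.1, 1], [3, 10.1, 3]])
scaler = MinMaxScaler(feature_range=(0, 5))  # target interval [0, 5]
print(scaler.fit_transform(data))
# [[0.  0.  0. ]
#  [2.5 0.5 2.5]
#  [5.  5.  5. ]]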
Spark ML normalization
python
from pyspark.ml.feature import MinMaxScaler
from pyspark.ml.functions import vector_to_array, array_to_vector  # used again below
from pyspark.ml.linalg import Vectors
dataFrame = spark.createDataFrame([
(0, Vectors.dense([1.0, 0.1, -1.0]),),
(1, Vectors.dense([2.0, 1.1, 1.0]),),
(2, Vectors.dense([3.0, 10.1, 3.0]),)
], ["id", "features"])
scaler = MinMaxScaler(inputCol="features", outputCol="scaledFeatures")
# Compute summary statistics and generate MinMaxScalerModel
scalerModel = scaler.fit(dataFrame)
# rescale each feature to range [min, max].
scaledData = scalerModel.transform(dataFrame)
print("Features scaled to range: [%f, %f]" % (scaler.getMin(), scaler.getMax()))
scaledData.select("features", "scaledFeatures").show()
print(scaledData.dtypes)
print("scaledFeatures 中稀疏向量(第一行)转稠密向量:")
df_res = scaledData \
.withColumn("scaledFeatures_array", vector_to_array("scaledFeatures")) \
.withColumn("scaledFeatures_vector", array_to_vector("scaledFeatures_array")) \
# .drop('scaledFeatures_array')
df_res.show()
print(df_res.dtypes)
Output:
Features scaled to range: [0.000000, 1.000000]
+--------------+--------------+
| features|scaledFeatures|
+--------------+--------------+
|[1.0,0.1,-1.0]| (3,[],[])|
| [2.0,1.1,1.0]| [0.5,0.1,0.5]|
|[3.0,10.1,3.0]| [1.0,1.0,1.0]|
+--------------+--------------+
[('id', 'bigint'), ('features', 'vector'), ('scaledFeatures', 'vector')]
Converting the sparse vector in scaledFeatures (row 1) to a dense vector:
+---+--------------+--------------+--------------------+---------------------+
| id| features|scaledFeatures|scaledFeatures_array|scaledFeatures_vector|
+---+--------------+--------------+--------------------+---------------------+
| 0|[1.0,0.1,-1.0]| (3,[],[])| [0.0, 0.0, 0.0]| [0.0,0.0,0.0]|
| 1| [2.0,1.1,1.0]| [0.5,0.1,0.5]| [0.5, 0.1, 0.5]| [0.5,0.1,0.5]|
| 2|[3.0,10.1,3.0]| [1.0,1.0,1.0]| [1.0, 1.0, 1.0]| [1.0,1.0,1.0]|
+---+--------------+--------------+--------------------+---------------------+
[('id', 'bigint'), ('features', 'vector'), ('scaledFeatures', 'vector'), ('scaledFeatures_array', 'array<double>'), ('scaledFeatures_vector', 'vector')]
Comparison of normalization tools
scikit-learn normalizes each column of the input matrix independently.
Spark MLlib's MinMaxScaler takes a single vector column (multiple feature columns assembled into one vector), but internally it still normalizes each dimension independently; the columns to transform are simply fed in together, as the sketch below shows.
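In practice the input vector column is usually built from separate numeric columns with VectorAssembler before scaling. A minimal sketch (the column names a, b, c are illustrative; reuses the SparkSession from above):
python
from pyspark.ml.feature import VectorAssembler, MinMaxScaler

raw = spark.createDataFrame(
    [(1.0, 0.1, -1.0), (2.0, 1.1, 1.0), (3.0, 10.1, 3.0)],
    ["a", "b", "c"])
# Combine the three numeric columns into one vector column...
assembler = VectorAssembler(inputCols=["a", "b", "c"], outputCol="features")
assembled = assembler.transform(raw)
# ...then scale; each dimension of the vector is still normalized independently
scaler = MinMaxScaler(inputCol="features", outputCol="scaled")
scaler.fit(assembled).transform(assembled).select("scaled").show()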