Outline
- PostgreSQL Data Analysis in Practice: Predictive Analytics Fundamentals (Linear Regression / Logistic Regression)
  - 6.2 Predictive Analytics Fundamentals: Implementing Linear Regression and Logistic Regression
    - 6.2.1 Core Theory of Predictive Analytics
      - 1. Linear Regression vs. Logistic Regression
      - 2. Three Building Blocks of a Model
    - 6.2.2 Data Preparation and Feature Engineering
    - 6.2.3 Linear Regression Implementation (Pure SQL)
      - 1. Least-Squares Parameter Estimation
      - 2. Model Prediction Implementation
    - 6.2.4 Logistic Regression Implementation (Stored Procedure)
      - 1. Maximum Likelihood Estimation via Gradient Descent
      - 2. Classification Decision
    - 6.2.5 Model Evaluation and Validation
      - 1. Linear Regression Metrics
      - 2. Logistic Regression Confusion Matrix
    - 6.2.6 Case Study: Customer Churn Prediction
      - 1. Data Overview
      - 2. Model Training Results
      - 3. Visual Evaluation (ROC Curve)
    - 6.2.7 Best Practices and Performance Optimization
      - 1. Train/Test Split Strategy
      - 2. Stored Procedure Optimization Tips
      - 3. Integration with External Tools
    - 6.2.8 Common Problems and Solutions
  - Summary
PostgreSQL Data Analysis in Practice: Predictive Analytics Fundamentals (Linear Regression / Logistic Regression)
6.2 Predictive Analytics Fundamentals: Implementing Linear Regression and Logistic Regression
In data-driven decision making, predictive analytics is the bridge between historical data and future decisions. This chapter focuses on the two most fundamental statistical prediction models, linear regression and logistic regression, and walks through the full workflow in PostgreSQL, from data preprocessing to model training and evaluation, using a realistic business case to show what plain SQL can do for predictive analytics.
6.2.1 Core Theory of Predictive Analytics

1. Linear Regression vs. Logistic Regression

| | Linear Regression | Logistic Regression |
|---|---|---|
| Target variable | Continuous (e.g., sales revenue, usage time) | Binary (e.g., purchase or not, churn or not) |
| Core assumption | The dependent variable is a linear function of the features | The log-odds are a linear function of the features |
| Evaluation metrics | R², MSE, RMSE | Accuracy, AUC-ROC, confusion matrix |
| Typical business use | Sales forecasting, inventory demand forecasting | Customer churn prediction, marketing response prediction |
Linear regression: $y = \beta_0 + \beta_1 x_1 + \beta_2 x_2 + \cdots + \beta_k x_k + \varepsilon$

Logistic regression: $P(y=1 \mid x) = \dfrac{1}{1 + e^{-(\beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k)}}$, equivalently $\ln\dfrac{p}{1-p} = \beta_0 + \beta_1 x_1 + \cdots + \beta_k x_k$
2. Three Building Blocks of a Model
- Data preparation: feature engineering (standardize continuous variables, one-hot encode discrete variables; see the encoding sketch after the standardization query in 6.2.2)
- Parameter estimation: ordinary least squares (OLS) for linear regression, maximum likelihood estimation (MLE) for logistic regression
- Model evaluation: validate generalization with a train/test split
6.2.2 Data Preparation and Feature Engineering
Case data tables
- Sales prediction table sales_prediction (linear regression case):
sql
CREATE TABLE sales_prediction (
    record_id SERIAL PRIMARY KEY,
    month DATE NOT NULL,
    promo_expenditure DECIMAL(10,2), -- promotion spend (10k CNY)
    customer_traffic INT,            -- customer traffic (thousands of visitors)
    avg_price DECIMAL(8,2),          -- average selling price (CNY)
    sales_volume DECIMAL(12,2)       -- sales revenue (10k CNY, target variable)
);
-- Generate 100 rows of test data for sales_prediction
WITH sales_series AS (
    SELECT generate_series(1, 100) AS record_id
),
sales_values AS (
    SELECT
        record_id,
        -- Random dates within 2024
        DATE '2024-01-01' + FLOOR(RANDOM() * 365)::INTEGER AS month,
        -- Promotion spend between 10 and 100 (10k CNY)
        ROUND((RANDOM() * 90 + 10)::NUMERIC, 2) AS promo_expenditure,
        -- Customer traffic between 100 and 1000 (thousands of visitors)
        FLOOR(RANDOM() * 900 + 100)::INT AS customer_traffic,
        -- Average price between 10 and 100 CNY
        ROUND((RANDOM() * 90 + 10)::NUMERIC, 2) AS avg_price
    FROM sales_series
)
INSERT INTO sales_prediction (month, promo_expenditure, customer_traffic, avg_price, sales_volume)
SELECT
    month,
    promo_expenditure,
    customer_traffic,
    avg_price,
    -- Sales revenue derived from spend, traffic, and price, plus random noise
    ROUND(
        (promo_expenditure * 0.1 + customer_traffic * 0.05 + avg_price * 0.2) * (RANDOM() * 0.2 + 0.9)::NUMERIC,
        2
    )
FROM sales_values;
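A quick, optional sanity check on the generated sales data (nothing later depends on it):
sql
-- Optional sanity check on the generated rows
SELECT
    COUNT(*)                    AS row_count,
    MIN(month)                  AS first_month,
    MAX(month)                  AS last_month,
    ROUND(AVG(sales_volume), 2) AS avg_sales
FROM sales_prediction;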
- Customer churn table churn_data (logistic regression case):
sql
CREATE TABLE churn_data (
    user_id VARCHAR(32) PRIMARY KEY,
    registration_days INT,   -- days since registration
    monthly_usage FLOAT,     -- average monthly usage (hours)
    service_score INT,       -- service rating (1-5)
    churn_status BOOLEAN     -- churned or not (target variable, TRUE = churned)
);
-- Generate 100 rows of test data for churn_data
WITH churn_series AS (
    SELECT generate_series(1, 100) AS num
),
churn_values AS (
    SELECT
        num,
        'user_' || num::VARCHAR(32) AS user_id,
        -- Registration length between 30 and 365 days
        FLOOR(RANDOM() * 335 + 30)::INT AS registration_days,
        -- Monthly usage between 0 and 50 hours
        ROUND((RANDOM() * 50)::NUMERIC, 2) AS monthly_usage,
        -- Service rating between 1 and 5
        FLOOR(RANDOM() * 5 + 1)::INT AS service_score
    FROM churn_series
)
INSERT INTO churn_data (user_id, registration_days, monthly_usage, service_score, churn_status)
SELECT
    user_id,
    registration_days,
    monthly_usage,
    service_score,
    -- Label a customer as churned based on registration length, usage, and rating
    CASE
        WHEN registration_days > 180 AND monthly_usage < 10 AND service_score < 3 THEN TRUE
        ELSE FALSE
    END
FROM churn_values;
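Class balance matters for the logistic regression case, so it is worth checking how many churned vs. retained rows the generator produced (with the rule above only a small minority will be TRUE):
sql
-- Check the class balance of the generated churn labels
SELECT
    churn_status,
    COUNT(*) AS user_count,
    ROUND(100.0 * COUNT(*) / SUM(COUNT(*)) OVER (), 1) AS pct
FROM churn_data
GROUP BY churn_status;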
Data cleaning and feature processing
- Missing-value handling (filling continuous variables with the column mean, computed as a window aggregate):
sql
-- Data cleaning for the linear regression case
WITH filled_sales AS (
    SELECT
        month,
        COALESCE(promo_expenditure, AVG(promo_expenditure) OVER ()) AS promo_expenditure,
        COALESCE(customer_traffic, AVG(customer_traffic) OVER ()) AS customer_traffic,
        COALESCE(avg_price, AVG(avg_price) OVER ()) AS avg_price,
        sales_volume
    FROM sales_prediction
)
SELECT * FROM filled_sales;

- Standardization (Z-score, using window functions for the mean and standard deviation). The result is created as a view because the training procedure and the scoring queries below reference standardized_churn as a relation:
sql
-- Feature standardization for the logistic regression case
CREATE OR REPLACE VIEW standardized_churn AS
SELECT
    user_id,
    (registration_days - AVG(registration_days) OVER ()) / STDDEV(registration_days) OVER () AS reg_days_std,
    (monthly_usage - AVG(monthly_usage) OVER ()) / STDDEV(monthly_usage) OVER () AS usage_std,
    service_score,
    churn_status
FROM churn_data;

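The feature-engineering checklist in 6.2.1 also mentions one-hot encoding for discrete variables. The models below keep service_score as a numeric feature, but purely as an illustrative sketch, CASE expressions can expand a categorical column into indicator columns (in practice one level is usually dropped to avoid collinearity):
sql
-- Illustrative one-hot encoding of service_score with CASE expressions
SELECT
    user_id,
    CASE WHEN service_score = 1 THEN 1 ELSE 0 END AS score_1,
    CASE WHEN service_score = 2 THEN 1 ELSE 0 END AS score_2,
    CASE WHEN service_score = 3 THEN 1 ELSE 0 END AS score_3,
    CASE WHEN service_score = 4 THEN 1 ELSE 0 END AS score_4,
    CASE WHEN service_score = 5 THEN 1 ELSE 0 END AS score_5
FROM churn_data;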
6.2.3 Linear Regression Implementation (Pure SQL)
1. Least-Squares Parameter Estimation
Each coefficient is estimated from the covariance/variance relationship, beta_i = cov(x_i, y) / var(x_i). Note that this is exact only for simple (single-feature) regression or when the features are uncorrelated; a true multiple regression with correlated features requires solving the normal equations (typically with an external tool), so treat the result below as an approximation:
sql
WITH filled_sales AS (
    SELECT
        month,
        COALESCE(promo_expenditure, AVG(promo_expenditure) OVER ()) AS promo_expenditure,
        COALESCE(customer_traffic, AVG(customer_traffic) OVER ()) AS customer_traffic,
        COALESCE(avg_price, AVG(avg_price) OVER ()) AS avg_price,
        sales_volume
    FROM sales_prediction
),
regression_data AS (
    SELECT
        promo_expenditure AS x1,
        customer_traffic AS x2,
        avg_price AS x3,
        sales_volume AS y
    FROM filled_sales
),
-- Column means, variances, and covariances
stats AS (
    SELECT
        AVG(x1) AS x1_mean, AVG(x2) AS x2_mean, AVG(x3) AS x3_mean, AVG(y) AS y_mean,
        variance(x1) AS var_x1, variance(x2) AS var_x2, variance(x3) AS var_x3,
        covar_samp(x1,y) AS cov_x1y, covar_samp(x2,y) AS cov_x2y, covar_samp(x3,y) AS cov_x3y
    FROM regression_data
)
SELECT
    beta1,
    beta2,
    beta3,
    y_mean - (beta1 * x1_mean + beta2 * x2_mean + beta3 * x3_mean) AS intercept
FROM (
    SELECT
        (cov_x1y / var_x1) AS beta1,
        (cov_x2y / var_x2) AS beta2,
        (cov_x3y / var_x3) AS beta3,
        x1_mean,
        x2_mean,
        x3_mean,
        y_mean
    FROM stats
) subquery;

2. Model Prediction Implementation
sql
-- Build a prediction function
CREATE OR REPLACE FUNCTION predict_sales(
    promo DECIMAL, traffic INT, price DECIMAL
) RETURNS DECIMAL AS $$
DECLARE
    beta1 DECIMAL := 0.85;      -- coefficients assumed to come from the calculation above
    beta2 DECIMAL := 0.02;
    beta3 DECIMAL := -1.2;
    intercept DECIMAL := 50.3;
BEGIN
    RETURN intercept + beta1*promo + beta2*traffic + beta3*price;
END;
$$ LANGUAGE plpgsql;
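A minimal usage sketch (with the placeholder coefficients above; substitute the values produced by the least-squares query for real use):
sql
-- Fitted values vs. actual sales for a handful of rows
SELECT
    record_id,
    sales_volume AS actual_sales,
    predict_sales(promo_expenditure, customer_traffic, avg_price) AS predicted_sales
FROM sales_prediction
ORDER BY record_id
LIMIT 10;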
6.2.4 Logistic Regression Implementation (Stored Procedure)
1. Maximum Likelihood Estimation via Gradient Descent
sql
CREATE OR REPLACE PROCEDURE train_logistic_regression(
    learning_rate FLOAT, max_iter INT, tolerance FLOAT
) LANGUAGE plpgsql AS $$
DECLARE
    n INT := 0;
    beta0 FLOAT := 0;
    beta1 FLOAT := 0;
    beta2 FLOAT := 0;
    beta3 FLOAT := 0;
    prev_loss FLOAT := 0;
    current_loss FLOAT := 0;
    grad0 FLOAT;
    grad1 FLOAT;
    grad2 FLOAT;
    grad3 FLOAT;
BEGIN
    -- Sample size (informational; the gradients below are already averaged)
    SELECT COUNT(*) INTO n FROM standardized_churn;
    FOR i IN 1..max_iter LOOP
        -- Compute predicted probabilities, gradients, and log loss in one statement:
        -- a CTE is only visible within the statement that defines it, so the feature
        -- columns must be carried through the CTE for the gradient expressions.
        WITH probas AS (
            SELECT
                1/(1+EXP(-(beta0 + beta1*reg_days_std + beta2*usage_std + beta3*service_score))) AS p,
                churn_status::INT AS y,
                reg_days_std,
                usage_std,
                service_score
            FROM standardized_churn
        )
        SELECT
            -AVG(y - p),
            -AVG((y - p)*reg_days_std),
            -AVG((y - p)*usage_std),
            -AVG((y - p)*service_score),
            -- Clamp p away from 0 and 1 so LN() stays finite
            -AVG(y*LN(GREATEST(p, 1e-15)) + (1-y)*LN(GREATEST(1-p, 1e-15)))
        INTO grad0, grad1, grad2, grad3, current_loss
        FROM probas;
        -- Parameter update
        beta0 := beta0 - learning_rate*grad0;
        beta1 := beta1 - learning_rate*grad1;
        beta2 := beta2 - learning_rate*grad2;
        beta3 := beta3 - learning_rate*grad3;
        -- Early stopping when the loss change falls below the tolerance
        IF ABS(current_loss - prev_loss) < tolerance THEN EXIT; END IF;
        prev_loss := current_loss;
    END LOOP;
    -- Persist the trained parameters (assumes the model_parameters table exists);
    -- clear any previous run first so the scoring joins see one row per parameter
    DELETE FROM model_parameters WHERE model_name = 'logistic_regression';
    INSERT INTO model_parameters (model_name, param_name, param_value)
    VALUES
        ('logistic_regression', 'beta0', beta0),
        ('logistic_regression', 'beta1', beta1),
        ('logistic_regression', 'beta2', beta2),
        ('logistic_regression', 'beta3', beta3);
END;
$$;
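A minimal invocation sketch (the hyperparameter values are illustrative, not tuned; the model_parameters table defined in the next subsection must already exist):
sql
-- Train with a 0.1 learning rate, at most 500 iterations, and a 1e-6 loss tolerance
CALL train_logistic_regression(0.1, 500, 0.000001);
-- Inspect the stored coefficients
SELECT param_name, param_value
FROM model_parameters
WHERE model_name = 'logistic_regression';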
2. Classification Decision
sql
CREATE TABLE model_parameters (
    model_name VARCHAR(255),
    param_name VARCHAR(255),
    param_value FLOAT
);
-- Seed one placeholder row per parameter (in practice these values come from
-- CALL train_logistic_regression above). The scoring query joins on param_name,
-- so each parameter must appear exactly once per model.
INSERT INTO model_parameters (model_name, param_name, param_value)
VALUES
    ('logistic_regression', 'beta0', 0.1),
    ('logistic_regression', 'beta1', 0.2),
    ('logistic_regression', 'beta2', 0.3),
    ('logistic_regression', 'beta3', 0.4);
-- Compute churn probabilities and classify with a 0.5 threshold
-- (standardized_churn is the view created in section 6.2.2)
SELECT
    user_id,
    churn_probability,
    CASE WHEN churn_probability >= 0.5 THEN TRUE ELSE FALSE END AS predicted_churn
FROM (
    SELECT
        sc.user_id,
        1/(1+EXP(-(mp1.param_value + mp2.param_value*sc.reg_days_std + mp3.param_value*sc.usage_std + mp4.param_value*sc.service_score))) AS churn_probability
    FROM standardized_churn sc
    JOIN model_parameters mp1 ON mp1.model_name = 'logistic_regression' AND mp1.param_name = 'beta0'
    JOIN model_parameters mp2 ON mp2.model_name = 'logistic_regression' AND mp2.param_name = 'beta1'
    JOIN model_parameters mp3 ON mp3.model_name = 'logistic_regression' AND mp3.param_name = 'beta2'
    JOIN model_parameters mp4 ON mp4.model_name = 'logistic_regression' AND mp4.param_name = 'beta3'
) subquery;
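The scored probabilities are reused by the confusion matrix and the ROC sketch below, so it can be convenient to materialize them once. A minimal sketch, assuming the standardized_churn view and the single-row-per-parameter model_parameters contents shown above:
sql
-- Materialize scored probabilities for the evaluation queries that follow
CREATE TEMP TABLE scored_churn AS
SELECT
    sc.user_id,
    1/(1+EXP(-(mp1.param_value + mp2.param_value*sc.reg_days_std + mp3.param_value*sc.usage_std + mp4.param_value*sc.service_score))) AS churn_probability,
    sc.churn_status
FROM standardized_churn sc
JOIN model_parameters mp1 ON mp1.model_name = 'logistic_regression' AND mp1.param_name = 'beta0'
JOIN model_parameters mp2 ON mp2.model_name = 'logistic_regression' AND mp2.param_name = 'beta1'
JOIN model_parameters mp3 ON mp3.model_name = 'logistic_regression' AND mp3.param_name = 'beta2'
JOIN model_parameters mp4 ON mp4.model_name = 'logistic_regression' AND mp4.param_name = 'beta3';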

6.2.5 Model Evaluation and Validation
1. Linear Regression Metrics

| Metric | SQL implementation sketch |
|---|---|
| Mean squared error (MSE) | SELECT AVG(POWER(y - predict_sales(x1,x2,x3), 2)) FROM regression_data |
| R² | WITH ... AS (SELECT y, predict_sales(...) AS y_hat FROM ...) SELECT 1 - SUM(POWER(y-y_hat,2))/SUM(POWER(y - (SELECT AVG(y) FROM ...),2)) |

- MSE: $\text{MSE} = \frac{1}{n}\sum_{i=1}^{n}(y_i - \hat{y}_i)^2$
- R²: $R^2 = 1 - \frac{\sum_{i}(y_i - \hat{y}_i)^2}{\sum_{i}(y_i - \bar{y})^2}$
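A runnable version of the same calculations, assuming the predict_sales function and sales_prediction table defined earlier (the target mean goes into its own CTE because window functions cannot appear inside aggregate calls):
sql
-- MSE, RMSE, and R² for the linear model
WITH preds AS (
    SELECT
        sales_volume AS y,
        predict_sales(promo_expenditure, customer_traffic, avg_price) AS y_hat
    FROM sales_prediction
),
stats AS (
    SELECT AVG(y) AS y_mean FROM preds
)
SELECT
    AVG(POWER(p.y - p.y_hat, 2))       AS mse,
    SQRT(AVG(POWER(p.y - p.y_hat, 2))) AS rmse,
    1 - SUM(POWER(p.y - p.y_hat, 2)) / NULLIF(SUM(POWER(p.y - s.y_mean, 2)), 0) AS r_squared
FROM preds p
CROSS JOIN stats s;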
2. Logistic Regression Confusion Matrix
sql
-- Build the confusion matrix
-- (standardized_churn is the view created in section 6.2.2)
WITH prediction_results AS (
    SELECT
        user_id,
        churn_probability,
        CASE WHEN churn_probability >= 0.5 THEN TRUE ELSE FALSE END AS predicted_churn
    FROM (
        SELECT
            sc.user_id,
            1/(1+EXP(-(mp1.param_value + mp2.param_value*sc.reg_days_std + mp3.param_value*sc.usage_std + mp4.param_value*sc.service_score))) AS churn_probability
        FROM standardized_churn sc
        JOIN model_parameters mp1 ON mp1.model_name = 'logistic_regression' AND mp1.param_name = 'beta0'
        JOIN model_parameters mp2 ON mp2.model_name = 'logistic_regression' AND mp2.param_name = 'beta1'
        JOIN model_parameters mp3 ON mp3.model_name = 'logistic_regression' AND mp3.param_name = 'beta2'
        JOIN model_parameters mp4 ON mp4.model_name = 'logistic_regression' AND mp4.param_name = 'beta3'
    ) subquery
),
predictions AS (
    SELECT
        churn_status AS actual,
        predicted_churn AS predicted
    FROM churn_data
    JOIN prediction_results USING (user_id)
)
SELECT
    SUM(CASE WHEN actual AND predicted THEN 1 ELSE 0 END) AS true_positive,
    SUM(CASE WHEN actual AND NOT predicted THEN 1 ELSE 0 END) AS false_negative,
    SUM(CASE WHEN NOT actual AND predicted THEN 1 ELSE 0 END) AS false_positive,
    SUM(CASE WHEN NOT actual AND NOT predicted THEN 1 ELSE 0 END) AS true_negative
FROM predictions;
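From the same counts, the usual summary metrics follow directly; a sketch using the scored_churn table materialized above:
sql
-- Accuracy, precision, and recall at the 0.5 threshold
WITH cm AS (
    SELECT
        SUM(CASE WHEN churn_status AND churn_probability >= 0.5 THEN 1 ELSE 0 END)::FLOAT     AS tp,
        SUM(CASE WHEN churn_status AND churn_probability < 0.5 THEN 1 ELSE 0 END)::FLOAT      AS fn,
        SUM(CASE WHEN NOT churn_status AND churn_probability >= 0.5 THEN 1 ELSE 0 END)::FLOAT AS fp,
        SUM(CASE WHEN NOT churn_status AND churn_probability < 0.5 THEN 1 ELSE 0 END)::FLOAT  AS tn
    FROM scored_churn
)
SELECT
    (tp + tn) / NULLIF(tp + tn + fp + fn, 0) AS accuracy,
    tp / NULLIF(tp + fp, 0)                  AS model_precision,
    tp / NULLIF(tp + fn, 0)                  AS model_recall
FROM cm;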

6.2.6 Case Study: Customer Churn Prediction
1. Data Overview
- Sample size: 5,000 customer records
- Feature distribution:

| Feature | Mean | Std. dev. | Min | Max |
|---|---|---|---|---|
| Registration length (days) | 125 | 45 | 10 | 365 |
| Monthly usage (hours) | 18.5 | 5.2 | 2 | 40 |
| Service score | 3.2 | 1.1 | 1 | 5 |

- Churn rate: 18%
2. Model Training Results

| Parameter | Estimate | Std. error | z-value | p-value |
|---|---|---|---|---|
| Intercept (β0) | -1.23 | 0.15 | -8.2 | <0.001 |
| Registration length (β1) | -0.85 | 0.08 | -10.6 | <0.001 |
| Usage hours (β2) | 1.52 | 0.12 | 12.7 | <0.001 |
| Service score (β3) | 0.98 | 0.09 | 10.9 | <0.001 |
3. Visual Evaluation (ROC Curve)

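The ROC figure from the original case is not reproduced here. As a hedged sketch, the curve's points (true-positive rate vs. false-positive rate over a grid of thresholds) can be generated directly in SQL from the scored_churn table materialized in 6.2.4, then plotted with any external tool:
sql
-- TPR/FPR pairs for an ROC curve at thresholds 0.00, 0.05, ..., 1.00
WITH thresholds AS (
    SELECT g / 100.0 AS threshold
    FROM generate_series(0, 100, 5) AS g
)
SELECT
    t.threshold,
    SUM(CASE WHEN s.churn_probability >= t.threshold AND s.churn_status THEN 1 ELSE 0 END)::FLOAT
        / NULLIF(SUM(CASE WHEN s.churn_status THEN 1 ELSE 0 END), 0)     AS true_positive_rate,
    SUM(CASE WHEN s.churn_probability >= t.threshold AND NOT s.churn_status THEN 1 ELSE 0 END)::FLOAT
        / NULLIF(SUM(CASE WHEN NOT s.churn_status THEN 1 ELSE 0 END), 0) AS false_positive_rate
FROM thresholds t
CROSS JOIN scored_churn s
GROUP BY t.threshold
ORDER BY t.threshold;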
6.2.7 Best Practices and Performance Optimization
1. Train/Test Split Strategy
sql
-- Split into training and test sets at roughly a 7:3 ratio
-- Temporary table holding the split assignment
CREATE TEMPORARY TABLE train_test_split_temp AS
SELECT
    *,
    CASE WHEN RANDOM() < 0.7 THEN 'train' ELSE 'test' END AS dataset
FROM churn_data;
-- Training set
SELECT * INTO train_data FROM train_test_split_temp WHERE dataset = 'train';
-- Test set
SELECT * INTO test_data FROM train_test_split_temp WHERE dataset = 'test';
-- Drop the temporary table
DROP TABLE train_test_split_temp;
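A quick check that the split came out roughly 7:3 (RANDOM() makes the exact counts vary from run to run):
sql
-- Row counts of the two splits
SELECT 'train' AS dataset, COUNT(*) AS row_count FROM train_data
UNION ALL
SELECT 'test', COUNT(*) FROM test_data;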
2. Stored Procedure Optimization Tips
- Batch processing: favor set-based statements over row-by-row loops, and tune session parameters such as work_mem for the training session (see the sketch below)
- Indexing: create indexes on frequently filtered feature columns to speed up data access
- Parallelism: take advantage of PostgreSQL's parallel aggregation (available since version 9.6) to speed up the aggregate-heavy training queries
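Illustrative examples of the knobs listed above (the values are placeholders to adapt to the actual hardware and data volume):
sql
-- Session-level tuning for the training session (illustrative values)
SET work_mem = '256MB';                   -- more memory for sorts and hash aggregates
SET max_parallel_workers_per_gather = 4;  -- allow parallel aggregation on large scans
-- Index the feature columns used by the scoring and evaluation queries
CREATE INDEX IF NOT EXISTS idx_churn_features
    ON churn_data (registration_days, monthly_usage, service_score);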
3. Integration with External Tools
sql
-- Export model parameters to CSV for visualization tools.
-- Server-side COPY writes on the database host and needs file-write privileges;
-- use psql's \copy for a client-side export instead.
COPY (
    SELECT param_name, param_value
    FROM model_parameters
    WHERE model_name = 'logistic_regression'
) TO '/tmp/model_params.csv' WITH CSV HEADER;
6.2.8 Common Problems and Solutions

| Symptom | Likely cause | Remedy |
|---|---|---|
| Linear regression R² close to 0 | Features are uncorrelated with the target | Revisit feature engineering; add interaction terms |
| Logistic regression converges slowly | Poorly chosen learning rate | Use an adaptive learning-rate method (e.g., Adam) |
| Abnormally large parameter estimates | Features were not standardized | Apply Z-score standardization to continuous features |
| Test-set accuracy drops sharply | The model is overfitting | Add regularization (usually via an external library) |
Summary
- This chapter walked through building linear regression and logistic regression models in PostgreSQL.
- The implementation path covered the full loop from data cleaning and feature engineering to model training and evaluation.
- Although PostgreSQL does not ship advanced machine-learning libraries natively, its SQL engine and stored-procedure mechanism are enough for predictive analytics on small to medium data volumes.
- In practice, train large-scale models with tools such as Python, then use SQL for model deployment and real-time scoring.
- Later chapters will cover more advanced techniques such as time-series analysis and decision-tree models, building toward a complete analysis and modeling system.
- Note: more complex optimization algorithms and regularization are best handled with a hybrid PostgreSQL + Python architecture, using interfaces such as psycopg2 for data exchange.