An Example of Regression Training in Machine Learning

A walkthrough using perming

Quoting the requirement specifiers keeps the shell from treating ">=" and "[...]" as redirection or globbing:

shell
pip install "perming>=1.9.2"
pip install "polars[pandas]"

Download the dataset

The column names below match the UCI "Condition Based Maintenance of Naval Propulsion Plants" dataset, so that is most likely the source file.

Data cleaning and preprocessing

python
import numpy
import pandas

# the UCI text file is delimited by runs of spaces; the python engine
# is needed to handle the multi-character separator
df = pandas.read_csv('../data/uci_gbm_data.txt', sep='   ', engine='python')
df.head()
python
Lever position (lp) [ ]	Ship speed (v) [knots]	Gas Turbine shaft torque (GTT) [kN m]	Gas Turbine rate of revolutions (GTn) [rpm]	Gas Generator rate of revolutions (GGn) [rpm]	Starboard Propeller Torque (Ts) [kN]	Port Propeller Torque (Tp) [kN]	HP Turbine exit temperature (T48) [C]	GT Compressor inlet air temperature (T1) [C]	GT Compressor outlet air temperature (T2) [C]	HP Turbine exit pressure (P48) [bar]	GT Compressor inlet air pressure (P1) [bar]	GT Compressor outlet air pressure (P2) [bar]	Gas Turbine exhaust gas pressure (Pexh) [bar]	Turbine Injecton Control (TIC) [%]	Fuel flow (mf) [kg/s]	GT Compressor decay state coefficient.	GT Turbine decay state coefficient.
0	1.138	3.0	289.964	1349.489	6677.380	7.584	7.584	464.006	288.0	550.563	1.096	0.998	5.947	1.019	7.137	0.082	0.95	0.975
1	2.088	6.0	6960.180	1376.166	6828.469	28.204	28.204	635.401	288.0	581.658	1.331	0.998	7.282	1.019	10.655	0.287	0.95	0.975
2	3.144	9.0	8379.229	1386.757	7111.811	60.358	60.358	606.002	288.0	587.587	1.389	0.998	7.574	1.020	13.086	0.259	0.95	0.975
3	4.161	12.0	14724.395	1547.465	7792.630	113.774	113.774	661.471	288.0	613.851	1.658	0.998	9.007	1.022	18.109	0.358	0.95	0.975
4	5.140	15.0	21636.432	1924.313	8494.777	175.306	175.306	731.494	288.0	645.642	2.078	0.998	11.197	1.026	26.373	0.522	0.95	0.975
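Beyond loading the file, a quick missing-value check is a common first cleaning step. This is a generic pandas idiom, not part of the original code, shown here on a small stand-in frame:

```python
import pandas

# hypothetical small frame standing in for the loaded df
df = pandas.DataFrame({'a': [1.0, 2.0], 'b': [3.0, 4.0]})

# count missing entries per column; a nonzero total means cleaning is needed
missing = df.isna().sum()
print(int(missing.sum()))  # 0 here, since this toy frame is complete
```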

Convert the data to NumPy

python
df = df.to_numpy()
values = df[:, -1]       # regression target: the last column (GT Turbine decay state coefficient)
features = df[:, :-1]    # the remaining 17 measurement columns
features.shape, values.shape
python
((11934, 17), (11934,))
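Note that the 17 features span very different scales (shaft torque in the tens of thousands, pressures near 1 bar), which likely contributes to the loss spikes visible in the training log later in this post. A common remedy, not applied in the original, is to standardize each column to zero mean and unit variance before loading the data:

```python
import numpy

# toy stand-in for the (11934, 17) feature matrix
features = numpy.array([[1.0, 100.0], [2.0, 200.0], [3.0, 300.0]])

# column-wise standardization: subtract each feature's mean, divide by its std
mean = features.mean(axis=0)
std = features.std(axis=0)
scaled = (features - mean) / std
```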

Set up the model

python
import perming
main = perming.Box(17, 1, (30,), criterion='MSELoss', batch_size=4, activation='relu', inplace_on=True, solver='adam', learning_rate_init=0.01)
# main = perming.Regressier(17, (30,), batch_size=4, activation='relu', solver='adam', learning_rate_init=0.01)
# main = perming.COMMON_MODELS['Regression'](17, (30,), batch_size=4, activation='relu', solver='adam', learning_rate_init=0.01)
main.print_config()
python
MLP(
  (mlp): Sequential(
    (Linear0): Linear(in_features=17, out_features=30, bias=True)
    (Activation0): ReLU(inplace=True)
    (Linear1): Linear(in_features=30, out_features=1, bias=True)
  )
)
OrderedDict([('torch -v', '1.7.1+cu101'),
             ('criterion', MSELoss()),
             ('batch_size', 4),
             ('solver',
              Adam (
              Parameter Group 0
                  amsgrad: False
                  betas: (0.9, 0.99)
                  eps: 1e-08
                  lr: 0.01
                  weight_decay: 0
              )),
             ('lr_scheduler', None),
             ('device', device(type='cuda'))])

Load the dataset into the DataLoader

python
main.data_loader(features, values, random_seed=0)
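data_loader handles shuffling and the train/validation/test split internally, seeded by random_seed. Its exact behavior isn't documented here, but a seeded split along the same lines can be sketched in plain NumPy (the 80/10/10 ratio is an assumption, not perming's confirmed default):

```python
import numpy

rng = numpy.random.default_rng(0)   # fixed seed for a reproducible split
n = 11934
idx = rng.permutation(n)            # shuffle sample indices once

n_train = int(0.8 * n)              # assumed 80/10/10 split
n_val = int(0.1 * n)
train_idx = idx[:n_train]
val_idx = idx[n_train:n_train + n_val]
test_idx = idx[n_train + n_val:]
```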

Training stage and accelerated validation

python
main.train_val(num_epochs=2, interval=100, early_stop=True)
python
Epoch [1/2], Step [100/2387], Training Loss: 23.0912, Validation Loss: 24.5740
Epoch [1/2], Step [200/2387], Training Loss: 291.9099, Validation Loss: 6.7348
Epoch [1/2], Step [300/2387], Training Loss: 5637.1328, Validation Loss: 1480.3076
Epoch [1/2], Step [400/2387], Training Loss: 1211.0406, Validation Loss: 210.9741
Epoch [1/2], Step [500/2387], Training Loss: 90.4388, Validation Loss: 23.6573
Epoch [1/2], Step [600/2387], Training Loss: 67.0454, Validation Loss: 24.6701
Epoch [1/2], Step [700/2387], Training Loss: 1253.5343, Validation Loss: 1144.0096
Epoch [1/2], Step [800/2387], Training Loss: 39.3887, Validation Loss: 257.6939
Epoch [1/2], Step [900/2387], Training Loss: 0.9986, Validation Loss: 1.1887
Epoch [1/2], Step [1000/2387], Training Loss: 30.2453, Validation Loss: 9.7175
Epoch [1/2], Step [1100/2387], Training Loss: 264.4302, Validation Loss: 19.0528
Epoch [1/2], Step [1200/2387], Training Loss: 5.2984, Validation Loss: 8.8709
Epoch [1/2], Step [1300/2387], Training Loss: 0.0152, Validation Loss: 0.3077
Epoch [1/2], Step [1400/2387], Training Loss: 0.0118, Validation Loss: 0.0014
Epoch [1/2], Step [1500/2387], Training Loss: 0.3608, Validation Loss: 0.3265
Epoch [1/2], Step [1600/2387], Training Loss: 5616.9810, Validation Loss: 54.1350
Epoch [1/2], Step [1700/2387], Training Loss: 1.0014, Validation Loss: 0.3494
Epoch [1/2], Step [1800/2387], Training Loss: 0.0025, Validation Loss: 0.0249
Epoch [1/2], Step [1900/2387], Training Loss: 0.0008, Validation Loss: 0.0195
Epoch [1/2], Step [2000/2387], Training Loss: 0.0041, Validation Loss: 0.0234
Epoch [1/2], Step [2100/2387], Training Loss: 0.2388, Validation Loss: 0.0222
Process stop at epoch [1/2] with patience 10 within tolerance 0.001
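The run stops before epoch 1 finishes: with early_stop=True the learner halts once the validation loss has stopped improving for a number of consecutive checks. The log mentions patience 10 and tolerance 0.001; a minimal sketch of such a rule (the exact perming logic is assumed, not verified against its source):

```python
def should_stop(val_losses, patience=10, tolerance=1e-3):
    """Return True once the last `patience` checks each improved by less than tolerance."""
    if len(val_losses) <= patience:
        return False
    recent = val_losses[-(patience + 1):]
    # improvement between consecutive validation checks
    return all(prev - cur < tolerance for prev, cur in zip(recent, recent[1:]))

# a plateauing loss curve triggers the stop
losses = [1.0, 0.5, 0.2] + [0.1] * 11
print(should_stop(losses))  # True
```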

Test with the trained parameters

python
main.test()
python
loss of Box on the 1196 test dataset: 0.14259785413742065.
OrderedDict([('problem', 'regression'),
             ('loss',
              {'train': 0.18060052394866943,
               'val': 0.025247152894735336,
               'test': 0.14259785413742065})])
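The reported numbers are MSELoss values, i.e. mean squared error between predictions and the decay-state target. Since the target column sits around 0.975, a test MSE of roughly 0.14 corresponds to a root-mean-square error of about 0.38 in the target's own units, which suggests the raw-feature model still generalizes poorly. Computing the metric by hand (on made-up stand-in arrays) looks like:

```python
import numpy

# hypothetical predictions and targets, standing in for the model's test output
preds = numpy.array([0.97, 0.98, 1.00])
target = numpy.array([0.975, 0.975, 0.995])

mse = numpy.mean((preds - target) ** 2)   # what MSELoss reports
rmse = numpy.sqrt(mse)                    # error in the target's own units
```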

Save the model and load its parameters

python
main.save(False, '../models/ucigbm.ckpt')
python
main.load(False, '../models/ucigbm.ckpt')