Credit Card Transaction Fraud Detection with ANN + SMOTE + Keras Tuner

Contents

SMOTE:

ANN: ANN (MLP) Three Prediction Methods - CSDN Blog

Keras Tuner: Using Keras Tuner with a CNN to Find the Optimal Number of Hidden Layers and Neurons - CSDN Blog

Data:

Modeling:

SMOTE Sampling:

Keras Tuner:

SMOTE:

SMOTE (Synthetic Minority Over-sampling Technique) is a sampling method for handling imbalanced datasets. In an imbalanced dataset one class has very few samples, which typically makes the model predict that minority class poorly. SMOTE increases the number of minority-class samples by synthesizing new ones, improving the model's ability to learn the minority class.

The basic idea of SMOTE is: for each minority-class sample, randomly pick one of its k nearest minority-class neighbors and generate a synthetic sample on the line segment between the original sample and that neighbor. This raises the number of minority-class samples and makes the class distribution more balanced.

SMOTE can be combined with all kinds of machine learning algorithms, including decision trees, logistic regression, and support vector machines. It is an effective way to mitigate the problems caused by imbalanced data and to improve the model's predictive performance on the minority class.
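
To make the interpolation step concrete, here is a minimal NumPy sketch of how a single synthetic sample could be generated from one minority sample and one of its nearest neighbors (the vectors are made-up toy values; the actual resampling later in this post uses imbalanced-learn's SMOTE):

python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical minority-class sample and one of its k nearest minority-class neighbors
x_i  = np.array([0.2, 1.5, -0.3])
x_nn = np.array([0.5, 1.1,  0.0])

# SMOTE places the new sample at a random point on the segment between the two:
# x_new = x_i + lam * (x_nn - x_i), with lam drawn uniformly from [0, 1]
lam = rng.uniform(0, 1)
x_new = x_i + lam * (x_nn - x_i)
print(x_new)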

ANN: ANN (MLP) Three Prediction Methods - CSDN Blog

Keras Tuner: Using Keras Tuner with a CNN to Find the Optimal Number of Hidden Layers and Neurons - CSDN Blog

Data:

python
import numpy as np 
import pandas as pd 
import keras
import matplotlib.pyplot as plt
import seaborn as sns

data = pd.read_csv('creditcard.csv',sep=',')

from sklearn.preprocessing import StandardScaler  # standardize the Amount feature
data['Amount(Normalized)'] = StandardScaler().fit_transform(data['Amount'].values.reshape(-1,1))
data.iloc[:,[29,31]]  # compare the original Amount column with its normalized version

data = data.drop(columns = ['Amount', 'Time'], axis=1)  # These columns are no longer needed.

X = data.drop('Class', axis=1)
y = data['Class']

from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=42)
# Convert the data to NumPy arrays for training with Keras
X_train = np.array(X_train)
X_test = np.array(X_test)
y_train = np.array(y_train)
y_test = np.array(y_test)
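
Before modeling, it helps to confirm how imbalanced the Class label actually is; a quick check on the target defined above (assuming the standard Kaggle creditcard.csv, where Class == 1 marks fraud):

python
# Count samples per class; fraudulent transactions (Class == 1) are a tiny fraction
print(y.value_counts())
print(y.value_counts(normalize=True))  # class proportions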

Modeling:

python
from tensorflow import keras
from tensorflow.keras import layers
from kerastuner.tuners import RandomSearch  # older import path; recent keras-tuner releases use `import keras_tuner`

from keras.models import Sequential
from keras.layers import Dense, Dropout
model = Sequential([
    Dense(units=20, input_dim = X_train.shape[1], activation='relu'),
    Dense(units=24,activation='relu'),
    Dropout(0.5),
    Dense(units=20,activation='relu'),
    Dense(units=24,activation='relu'),
    Dense(1, activation='sigmoid')
])
model.summary()

model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size=30, epochs=5)

score = model.evaluate(X_test, y_test)
print('Test Accuracy: {:.2f}%\nTest Loss: {}'.format(score[1]*100,score[0]))
'''Results:
2671/2671 [==============================] - 6s 2ms/step - loss: 0.0029 - accuracy: 0.9994
Test Accuracy: 99.94%
Test Loss: 0.0028619361110031605
'''
python
from sklearn.metrics import confusion_matrix, classification_report
y_pred = model.predict(X_test)
y_test = pd.DataFrame(y_test)
cm = confusion_matrix(y_test, y_pred.round())
sns.heatmap(cm, annot=True, fmt='.0f', cmap='cividis_r')
plt.show()  # What we actually want to predict is class 1 (fraud): overall accuracy is very high, but the predictions for class 1 are not very accurate
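
Overall accuracy hides how the minority class is handled. Since classification_report is already imported above, a per-class breakdown is a natural sanity check here:

python
# Per-class precision, recall and F1; class 1 (fraud) is the one that matters
print(classification_report(y_test, y_pred.round(), digits=4))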

SMOTE Sampling:

python
from imblearn.over_sampling import SMOTE
sm = SMOTE(random_state=42)
X_smote, y_smote = sm.fit_resample(X, y)
X_smote = pd.DataFrame(X_smote)
y_smote = pd.DataFrame(y_smote)
y_smote.iloc[:,0].value_counts()  # both classes now contain the same number of samples

X_train, X_test, y_train, y_test = train_test_split(X_smote, y_smote, test_size=0.3, random_state=0)
X_train = np.array(X_train)
X_test = np.array(X_test)
y_train = np.array(y_train)
y_test = np.array(y_test)
model.compile(optimizer='adam', loss='binary_crossentropy', metrics=['accuracy'])
model.fit(X_train, y_train, batch_size = 30, epochs = 5)


score = model.evaluate(X_test, y_test)
print('Test Accuracy: {:.2f}%\nTest Loss: {}'.format(score[1]*100,score[0]))
'''Results:
5331/5331 [==============================] - 13s 2ms/step - loss: 0.0046 - accuracy: 0.9991
Test Accuracy: 99.91%
Test Loss: 0.004645294509828091
'''
python
y_pred = model.predict(X_test)
y_test = pd.DataFrame(y_test)
cm = confusion_matrix(y_test, y_pred.round())
sns.heatmap(cm, annot=True, fmt='.0f')
plt.show()  # After SMOTE sampling, the misclassifications of class 1 drop from the previous 25 down to 11
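
To express that improvement in relative terms, the fraud-class recall and precision can be read directly off the confusion matrix computed above (a small sketch, nothing beyond the variables already defined):

python
# confusion_matrix returns [[tn, fp], [fn, tp]] for binary labels
tn, fp, fn, tp = cm.ravel()
print('Fraud recall:    {:.4f}'.format(tp / (tp + fn)))
print('Fraud precision: {:.4f}'.format(tp / (tp + fp)))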

Keras Tuner:

python
def build_model(hp):
    model = keras.Sequential()
    for i in range(hp.Int('num_layers', 2, 20)):
        model.add(layers.Dense(units=hp.Int('units_' + str(i),
                                            min_value=32,
                                            max_value=512,
                                            step=32),
                               activation='relu'))
    # Single sigmoid output for the binary fraud label, matching the earlier model
    model.add(layers.Dense(1, activation='sigmoid'))
    model.compile(
        optimizer=keras.optimizers.Adam(
            hp.Choice('learning_rate', [1e-2, 1e-3, 1e-4])),
        loss='binary_crossentropy',
        metrics=['accuracy'])
    return model

tuner = RandomSearch(
    build_model,
    objective='val_accuracy',
    max_trials=10,
    directory='my_dir',
    project_name='helloworld')

tuner.search(X_train, y_train,
             epochs=5,
             validation_data=(X_test, y_test))
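
Once the search finishes, the best configuration can be pulled out with Keras Tuner's standard API; a minimal follow-up sketch:

python
# Summarize all trials, then retrieve the best hyperparameters and the best model
tuner.results_summary()
best_hp = tuner.get_best_hyperparameters(num_trials=1)[0]
print(best_hp.values)
best_model = tuner.get_best_models(num_models=1)[0]
best_model.evaluate(X_test, y_test)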