Bearing fault diagnosis with a deep discriminative transfer learning network in Python

Many machine learning and data mining algorithms assume that the training and test data lie in the same feature space and follow the same distribution. In real applications this assumption often fails. On the one hand, if a model trained on data from one domain is applied directly to a new target domain, the genuine distribution gap between the two domains can cause a sharp drop in performance. On the other hand, training a model from scratch in the new target domain is also problematic: data there are often scarce and labels incomplete, so supervised learning can overfit badly and satisfactory results are hard to achieve. How to reasonably transfer unstructured data across different domains and sources, and adapt models between domains, has therefore become a pressing problem.

Transfer learning addresses this by transferring knowledge from existing data in an auxiliary domain to support the learning task in the target domain, i.e., it carries knowledge from a source domain to a target domain. In particular, in the big-data era, transfer learning can move knowledge from "big data" to "small data", alleviating the knowledge scarcity of small-data settings.

This is Python code for bearing fault diagnosis based on a deep discriminative transfer learning network. The source-domain data are the Case Western Reserve University (CWRU) bearing data 48kcwru_data.npy, and the target-domain data are the Jiangnan University bearing data jnudata600_data.npy. The module versions used are:

numpy==1.21.5
sklearn==1.0.2
pytorch_lightning==1.7.7
torch==1.10.1+cpu
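The environment can be set up with pip. Note that the `sklearn` module listed above is distributed on PyPI under the name `scikit-learn`, and the `+cpu` build of torch historically came from the PyTorch wheel index (the exact index URL below is the commonly documented one for this release, stated here as an assumption):

```shell
# scikit-learn is the PyPI package name for the sklearn module
pip install numpy==1.21.5 scikit-learn==1.0.2 pytorch_lightning==1.7.7
# CPU-only torch 1.10.1 from the PyTorch wheel index
pip install torch==1.10.1+cpu -f https://download.pytorch.org/whl/cpu/torch_stable.html
```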

The modules used are:

import numpy as np
from sklearn.preprocessing import StandardScaler
from pytorch_lightning.utilities.seed import seed_everything
import torch
import torch.nn as nn
import torch.nn.functional as F
import argparse
from sklearn.utils import shuffle
from torch.utils import data as da
from torchmetrics import MeanMetric, Accuracy

Part of the code is shown below:

# Define the data-loading function
def load_data():
    source_data = np.load(args.cwru_data)
    source_label = np.load(args.cwru_label).argmax(axis=-1)  # one-hot -> class index
    target_data = np.load(args.jnu_data)
    target_label = np.load(args.jnu_label).argmax(axis=-1)
    # Standardize each sample: transpose so StandardScaler normalizes per signal
    source_data = StandardScaler().fit_transform(source_data.T).T
    target_data = StandardScaler().fit_transform(target_data.T).T
    # Add a channel dimension for the 1-D network: (N, 1, L)
    source_data = np.expand_dims(source_data, axis=1)
    target_data = np.expand_dims(target_data, axis=1)
    source_data, source_label = shuffle(source_data, source_label, random_state=2)
    target_data, target_label = shuffle(target_data, target_label, random_state=2)
    Train_source = Dataset(source_data, source_label)
    Train_target = Dataset(target_data, target_label)
    return Train_source, Train_target
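The `Dataset` class wrapped around the arrays above is not included in the excerpt; a minimal sketch that would satisfy `load_data()` (names and details are assumptions, not the author's code) is a thin `torch.utils.data.Dataset` wrapper:

```python
import numpy as np
import torch
from torch.utils import data as da

class Dataset(da.Dataset):
    """Wraps (signal, label) numpy arrays as (float32, int64) tensor pairs."""
    def __init__(self, data, label):
        self.data = torch.from_numpy(np.asarray(data)).float()
        self.label = torch.from_numpy(np.asarray(label)).long()

    def __len__(self):
        return len(self.label)

    def __getitem__(self, idx):
        return self.data[idx], self.label[idx]
```

Instances of this class can then be fed to a `torch.utils.data.DataLoader` for mini-batch training.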

###############################################################
# Maximum mean discrepancy (MMD) class
class MMD(nn.Module):
    def __init__(self, m, n):
        super(MMD, self).__init__()
        self.m = m
        self.n = n

    def _mix_rbf_mmd2(self, X, Y, sigmas=(10,), wts=None, biased=True):
        K_XX, K_XY, K_YY, d = self._mix_rbf_kernel(X, Y, sigmas, wts)
        return self._mmd2(K_XX, K_XY, K_YY, const_diagonal=d, biased=biased)

    def _mix_rbf_kernel(self, X, Y, sigmas, wts=None):
        if wts is None:
            wts = [1] * len(sigmas)
        # Gram matrices of inner products
        XX = torch.matmul(X, X.t())
        XY = torch.matmul(X, Y.t())
        YY = torch.matmul(Y, Y.t())

        X_sqnorms = torch.diagonal(XX)
        Y_sqnorms = torch.diagonal(YY)

        r = lambda x: torch.unsqueeze(x, 0)  # row vector
        c = lambda x: torch.unsqueeze(x, 1)  # column vector

        # Mixture of RBF kernels: sum_i wt_i * exp(-||x - y||^2 / (2 * sigma_i^2)),
        # using ||x - y||^2 = ||x||^2 - 2<x, y> + ||y||^2
        K_XX, K_XY, K_YY = 0., 0., 0.
        for sigma, wt in zip(sigmas, wts):
            gamma = 1 / (2 * sigma ** 2)
            K_XX += wt * torch.exp(-gamma * (-2 * XX + c(X_sqnorms) + r(X_sqnorms)))
            K_XY += wt * torch.exp(-gamma * (-2 * XY + c(X_sqnorms) + r(Y_sqnorms)))
            K_YY += wt * torch.exp(-gamma * (-2 * YY + c(Y_sqnorms) + r(Y_sqnorms)))
        # Return outside the loop so every sigma in the mixture is accumulated
        # (the original had the return inside the loop, cutting the mixture short)
        return K_XX, K_XY, K_YY, torch.sum(torch.tensor(wts))
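The `_mmd2` method invoked by `_mix_rbf_mmd2` is also not shown in the excerpt. For the biased estimator commonly paired with this kernel code (an assumption, not necessarily the author's exact implementation), it reduces to mean statistics of the three kernel blocks:

```python
import torch

def mmd2_biased(K_XX, K_XY, K_YY):
    """Biased MMD^2 estimate (hypothetical stand-in for the missing _mmd2):
    MMD^2 = E[k(x, x')] - 2 E[k(x, y)] + E[k(y, y')]."""
    return K_XX.mean() - 2 * K_XY.mean() + K_YY.mean()
```

With identical source and target samples all three kernel blocks coincide and the estimate is zero; the further the two feature distributions drift apart, the larger the value, which is what makes it usable as an alignment loss.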

The run results are as follows:

Epoch1, train_loss is 2.03397,test_loss is 4.93007, train_accuracy is 0.44475,test_accuracy is 0.18675,train_all_loss is 41.71445,target_cla_loss is 1.61769,source_cla_loss is 3.70468,cda_loss is 6.74935,mda_loss is 37.17306

Epoch2, train_loss is 0.54279,test_loss is 6.41293, train_accuracy is 0.83325,test_accuracy is 0.20800,train_all_loss is 6.87837,target_cla_loss is 1.71905,source_cla_loss is 1.94874,cda_loss is 1.58677,mda_loss is 4.59905

Epoch3, train_loss is 0.18851,test_loss is 5.60176, train_accuracy is 0.93775,test_accuracy is 0.29850,train_all_loss is 5.26101,target_cla_loss is 0.66165,source_cla_loss is 0.54253,cda_loss is 1.02360,mda_loss is 4.54996

Epoch4, train_loss is 0.14104,test_loss is 4.58690, train_accuracy is 0.94850,test_accuracy is 0.30800,train_all_loss is 4.09870,target_cla_loss is 0.54025,source_cla_loss is 0.38254,cda_loss is 0.88701,mda_loss is 3.57343

Epoch5, train_loss is 0.11775,test_loss is 5.07279, train_accuracy is 0.95900,test_accuracy is 0.28300,train_all_loss is 3.27498,target_cla_loss is 0.52239,source_cla_loss is 0.29470,cda_loss is 0.87684,mda_loss is 2.84035

Epoch6, train_loss is 0.08998,test_loss is 5.02790, train_accuracy is 0.97300,test_accuracy is 0.29825,train_all_loss is 3.21299,target_cla_loss is 0.39788,source_cla_loss is 0.21586,cda_loss is 0.76452,mda_loss is 2.88089

Epoch7, train_loss is 0.07695,test_loss is 4.51329, train_accuracy is 0.97975,test_accuracy is 0.31800,train_all_loss is 2.92297,target_cla_loss is 0.40623,source_cla_loss is 0.16808,cda_loss is 0.76679,mda_loss is 2.63759

Epoch8, train_loss is 0.07421,test_loss is 4.26603, train_accuracy is 0.97900,test_accuracy is 0.32400,train_all_loss is 2.65909,target_cla_loss is 0.39052,source_cla_loss is 0.17180,cda_loss is 0.72834,mda_loss is 2.37541

Epoch9, train_loss is 0.05102,test_loss is 3.63495, train_accuracy is 0.98950,test_accuracy is 0.35800,train_all_loss is 2.62192,target_cla_loss is 0.41614,source_cla_loss is 0.11351,cda_loss is 0.77280,mda_loss is 2.38951

Epoch10, train_loss is 0.06574,test_loss is 3.52261, train_accuracy is 0.98850,test_accuracy is 0.33100,train_all_loss is 2.59112,target_cla_loss is 0.48041,source_cla_loss is 0.14138,cda_loss is 0.73876,mda_loss is 2.32783

Epoch11, train_loss is 0.05876,test_loss is 3.86388, train_accuracy is 0.99050,test_accuracy is 0.33475,train_all_loss is 2.45829,target_cla_loss is 0.45235,source_cla_loss is 0.11720,cda_loss is 0.73355,mda_loss is 2.22250

Epoch12, train_loss is 0.05688,test_loss is 3.82415, train_accuracy is 0.99350,test_accuracy is 0.32075,train_all_loss is 2.43463,target_cla_loss is 0.41805,source_cla_loss is 0.10393,cda_loss is 0.68400,mda_loss is 2.22049

Epoch13, train_loss is 0.05224,test_loss is 3.78473, train_accuracy is 0.99300,test_accuracy is 0.32225,train_all_loss is 2.26402,target_cla_loss is 0.42708,source_cla_loss is 0.09958,cda_loss is 0.68476,mda_loss is 2.05325

Epoch14, train_loss is 0.06636,test_loss is 3.89151, train_accuracy is 0.98675,test_accuracy is 0.32200,train_all_loss is 2.42129,target_cla_loss is 0.42966,source_cla_loss is 0.12657,cda_loss is 0.66312,mda_loss is 2.18544

Epoch15, train_loss is 0.05342,test_loss is 3.78424, train_accuracy is 0.99575,test_accuracy is 0.32200,train_all_loss is 2.33275,target_cla_loss is 0.42920,source_cla_loss is 0.10599,cda_loss is 0.64312,mda_loss is 2.11953

Epoch16, train_loss is 0.04968,test_loss is 3.67101, train_accuracy is 0.99750,test_accuracy is 0.32100,train_all_loss is 2.23092,target_cla_loss is 0.43684,source_cla_loss is 0.09945,cda_loss is 0.63847,mda_loss is 2.02393

Epoch17, train_loss is 0.05957,test_loss is 3.67722, train_accuracy is 0.99525,test_accuracy is 0.33000,train_all_loss is 2.27638,target_cla_loss is 0.45589,source_cla_loss is 0.12059,cda_loss is 0.63428,mda_loss is 2.04677

Epoch18, train_loss is 0.05812,test_loss is 3.59771, train_accuracy is 0.99600,test_accuracy is 0.32350,train_all_loss is 2.23080,target_cla_loss is 0.47418,source_cla_loss is 0.11620,cda_loss is 0.61432,mda_loss is 2.00575

Epoch19, train_loss is 0.06287,test_loss is 3.43253, train_accuracy is 0.99350,test_accuracy is 0.32950,train_all_loss is 2.40593,target_cla_loss is 0.48567,source_cla_loss is 0.12299,cda_loss is 0.64320,mda_loss is 2.17005

Epoch20, train_loss is 0.06316,test_loss is 3.51056, train_accuracy is 0.99575,test_accuracy is 0.32475,train_all_loss is 2.31846,target_cla_loss is 0.46683,source_cla_loss is 0.13138,cda_loss is 0.63335,mda_loss is 2.07705

Epoch21, train_loss is 0.05934,test_loss is 3.84920, train_accuracy is 0.99425,test_accuracy is 0.32475,train_all_loss is 2.24664,target_cla_loss is 0.43869,source_cla_loss is 0.11981,cda_loss is 0.62297,mda_loss is 2.02066

Epoch22, train_loss is 0.05423,test_loss is 3.69176, train_accuracy is 0.99500,test_accuracy is 0.34025,train_all_loss is 2.28318,target_cla_loss is 0.46334,source_cla_loss is 0.11229,cda_loss is 0.64396,mda_loss is 2.06016

Epoch23, train_loss is 0.04934,test_loss is 3.35009, train_accuracy is 0.99775,test_accuracy is 0.33525,train_all_loss is 2.29074,target_cla_loss is 0.46118,source_cla_loss is 0.10556,cda_loss is 0.63410,mda_loss is 2.07565

Epoch24, train_loss is 0.05903,test_loss is 3.62366, train_accuracy is 0.99275,test_accuracy is 0.33475,train_all_loss is 2.19408,target_cla_loss is 0.42396,source_cla_loss is 0.12135,cda_loss is 0.61378,mda_loss is 1.96895

Epoch25, train_loss is 0.06076,test_loss is 3.72236, train_accuracy is 0.99475,test_accuracy is 0.33100,train_all_loss is 2.17257,target_cla_loss is 0.41436,source_cla_loss is 0.12382,cda_loss is 0.60274,mda_loss is 1.94704

Epoch26, train_loss is 0.05901,test_loss is 3.59237, train_accuracy is 0.99600,test_accuracy is 0.33225,train_all_loss is 2.34868,target_cla_loss is 0.47314,source_cla_loss is 0.12100,cda_loss is 0.60417,mda_loss is 2.11995

Epoch27, train_loss is 0.05911,test_loss is 3.82265, train_accuracy is 0.99500,test_accuracy is 0.34000,train_all_loss is 2.25363,target_cla_loss is 0.43506,source_cla_loss is 0.12174,cda_loss is 0.58978,mda_loss is 2.02940

Epoch28, train_loss is 0.05933,test_loss is 3.65749, train_accuracy is 0.99750,test_accuracy is 0.34050,train_all_loss is 2.22602,target_cla_loss is 0.47101,source_cla_loss is 0.12723,cda_loss is 0.61243,mda_loss is 1.99045

Epoch29, train_loss is 0.03960,test_loss is 3.72375, train_accuracy is 0.99900,test_accuracy is 0.34075,train_all_loss is 2.20123,target_cla_loss is 0.44257,source_cla_loss is 0.10232,cda_loss is 0.60967,mda_loss is 1.99369

Epoch30, train_loss is 0.05384,test_loss is 3.61430, train_accuracy is 0.99350,test_accuracy is 0.33100,train_all_loss is 2.12823,target_cla_loss is 0.43999,source_cla_loss is 0.11275,cda_loss is 0.60491,mda_loss is 1.91099

Epoch31, train_loss is 0.04751,test_loss is 3.67043, train_accuracy is 0.99650,test_accuracy is 0.35875,train_all_loss is 2.09769,target_cla_loss is 0.39988,source_cla_loss is 0.11007,cda_loss is 0.61966,mda_loss is 1.88566

Epoch32, train_loss is 0.05494,test_loss is 3.66357, train_accuracy is 0.99325,test_accuracy is 0.35325,train_all_loss is 2.16350,target_cla_loss is 0.42613,source_cla_loss is 0.12841,cda_loss is 0.61385,mda_loss is 1.93109

Epoch33, train_loss is 0.04867,test_loss is 3.86881, train_accuracy is 0.99600,test_accuracy is 0.34925,train_all_loss is 2.09730,target_cla_loss is 0.43453,source_cla_loss is 0.11316,cda_loss is 0.60486,mda_loss is 1.88020

Epoch34, train_loss is 0.04144,test_loss is 3.81459, train_accuracy is 0.99775,test_accuracy is 0.34900,train_all_loss is 2.03613,target_cla_loss is 0.43036,source_cla_loss is 0.09037,cda_loss is 0.58916,mda_loss is 1.84382

Epoch35, train_loss is 0.04441,test_loss is 3.66703, train_accuracy is 0.99775,test_accuracy is 0.35975,train_all_loss is 2.19538,target_cla_loss is 0.42232,source_cla_loss is 0.09971,cda_loss is 0.58817,mda_loss is 1.99462

Epoch36, train_loss is 0.05367,test_loss is 3.85576, train_accuracy is 0.99750,test_accuracy is 0.34825,train_all_loss is 2.16997,target_cla_loss is 0.42345,source_cla_loss is 0.12474,cda_loss is 0.59072,mda_loss is 1.94381

Epoch37, train_loss is 0.04486,test_loss is 3.92458, train_accuracy is 0.99500,test_accuracy is 0.34375,train_all_loss is 2.10291,target_cla_loss is 0.42692,source_cla_loss is 0.10405,cda_loss is 0.61382,mda_loss is 1.89478

Epoch38, train_loss is 0.04253,test_loss is 3.61390, train_accuracy is 0.99775,test_accuracy is 0.35300,train_all_loss is 1.99870,target_cla_loss is 0.41496,source_cla_loss is 0.10335,cda_loss is 0.61797,mda_loss is 1.79206

Epoch39, train_loss is 0.04100,test_loss is 3.58168, train_accuracy is 0.99825,test_accuracy is 0.35125,train_all_loss is 2.21055,target_cla_loss is 0.43278,source_cla_loss is 0.09059,cda_loss is 0.58880,mda_loss is 2.01780

Epoch40, train_loss is 0.04859,test_loss is 3.62033, train_accuracy is 0.99700,test_accuracy is 0.35875,train_all_loss is 2.10599,target_cla_loss is 0.39430,source_cla_loss is 0.10756,cda_loss is 0.59355,mda_loss is 1.89964

Epoch41, train_loss is 0.05128,test_loss is 3.67334, train_accuracy is 0.99550,test_accuracy is 0.35000,train_all_loss is 2.04580,target_cla_loss is 0.41525,source_cla_loss is 0.11839,cda_loss is 0.60271,mda_loss is 1.82561

Epoch42, train_loss is 0.05321,test_loss is 3.69000, train_accuracy is 0.99450,test_accuracy is 0.34000,train_all_loss is 2.08382,target_cla_loss is 0.37200,source_cla_loss is 0.10866,cda_loss is 0.57336,mda_loss is 1.88063

Epoch43, train_loss is 0.04858,test_loss is 3.74816, train_accuracy is 0.99575,test_accuracy is 0.34375,train_all_loss is 2.07542,target_cla_loss is 0.42002,source_cla_loss is 0.10167,cda_loss is 0.57270,mda_loss is 1.87447

Epoch44, train_loss is 0.04634,test_loss is 3.87925, train_accuracy is 0.99550,test_accuracy is 0.34175,train_all_loss is 2.08382,target_cla_loss is 0.37761,source_cla_loss is 0.10194,cda_loss is 0.58080,mda_loss is 1.88604

Epoch45, train_loss is 0.05111,test_loss is 3.70675, train_accuracy is 0.99375,test_accuracy is 0.35175,train_all_loss is 2.13403,target_cla_loss is 0.39479,source_cla_loss is 0.11404,cda_loss is 0.58542,mda_loss is 1.92198

Epoch46, train_loss is 0.05028,test_loss is 3.58449, train_accuracy is 0.99525,test_accuracy is 0.35550,train_all_loss is 2.08147,target_cla_loss is 0.41587,source_cla_loss is 0.10975,cda_loss is 0.57340,mda_loss is 1.87279

Epoch47, train_loss is 0.04632,test_loss is 3.54860, train_accuracy is 0.99750,test_accuracy is 0.35175,train_all_loss is 2.09232,target_cla_loss is 0.41676,source_cla_loss is 0.10310,cda_loss is 0.56944,mda_loss is 1.89061

Epoch48, train_loss is 0.05131,test_loss is 3.71509, train_accuracy is 0.99600,test_accuracy is 0.34425,train_all_loss is 2.03649,target_cla_loss is 0.37779,source_cla_loss is 0.11077,cda_loss is 0.57039,mda_loss is 1.83090

Epoch49, train_loss is 0.04696,test_loss is 3.79712, train_accuracy is 0.99675,test_accuracy is 0.36775,train_all_loss is 1.96412,target_cla_loss is 0.38354,source_cla_loss is 0.11067,cda_loss is 0.59040,mda_loss is 1.75606

Epoch50, train_loss is 0.04041,test_loss is 3.61169, train_accuracy is 0.99750,test_accuracy is 0.36175,train_all_loss is 2.00069,target_cla_loss is 0.37321,source_cla_loss is 0.09218,cda_loss is 0.58409,mda_loss is 1.81278

Epoch51, train_loss is 0.04318,test_loss is 3.74727, train_accuracy is 0.99750,test_accuracy is 0.35675,train_all_loss is 2.00346,target_cla_loss is 0.35602,source_cla_loss is 0.09815,cda_loss is 0.57770,mda_loss is 1.81194

Epoch52, train_loss is 0.04124,test_loss is 3.54835, train_accuracy is 0.99700,test_accuracy is 0.35150,train_all_loss is 2.03243,target_cla_loss is 0.36050,source_cla_loss is 0.09370,cda_loss is 0.57510,mda_loss is 1.84517

Epoch53, train_loss is 0.04390,test_loss is 3.82567, train_accuracy is 0.99725,test_accuracy is 0.35275,train_all_loss is 1.99703,target_cla_loss is 0.34785,source_cla_loss is 0.10884,cda_loss is 0.56020,mda_loss is 1.79739

Epoch54, train_loss is 0.04348,test_loss is 3.86695, train_accuracy is 0.99750,test_accuracy is 0.35850,train_all_loss is 1.94795,target_cla_loss is 0.38242,source_cla_loss is 0.10408,cda_loss is 0.56894,mda_loss is 1.74873

Epoch55, train_loss is 0.03718,test_loss is 3.56571, train_accuracy is 0.99775,test_accuracy is 0.36575,train_all_loss is 1.97170,target_cla_loss is 0.37277,source_cla_loss is 0.08584,cda_loss is 0.56435,mda_loss is 1.79215

Epoch56, train_loss is 0.04152,test_loss is 3.39552, train_accuracy is 0.99850,test_accuracy is 0.36575,train_all_loss is 2.02024,target_cla_loss is 0.36317,source_cla_loss is 0.10414,cda_loss is 0.58045,mda_loss is 1.82174

Epoch57, train_loss is 0.03893,test_loss is 3.68062, train_accuracy is 0.99875,test_accuracy is 0.35975,train_all_loss is 1.97693,target_cla_loss is 0.34846,source_cla_loss is 0.09432,cda_loss is 0.57112,mda_loss is 1.79064

Epoch58, train_loss is 0.04239,test_loss is 3.59319, train_accuracy is 0.99750,test_accuracy is 0.36800,train_all_loss is 1.99725,target_cla_loss is 0.37591,source_cla_loss is 0.10367,cda_loss is 0.56670,mda_loss is 1.79933

Epoch59, train_loss is 0.04245,test_loss is 3.66916, train_accuracy is 0.99800,test_accuracy is 0.35800,train_all_loss is 1.99187,target_cla_loss is 0.34185,source_cla_loss is 0.10355,cda_loss is 0.57694,mda_loss is 1.79644

Epoch60, train_loss is 0.04063,test_loss is 3.69465, train_accuracy is 0.99700,test_accuracy is 0.35850,train_all_loss is 1.92894,target_cla_loss is 0.34705,source_cla_loss is 0.09351,cda_loss is 0.56955,mda_loss is 1.74377

Epoch61, train_loss is 0.04399,test_loss is 3.56587, train_accuracy is 0.99650,test_accuracy is 0.35900,train_all_loss is 2.02587,target_cla_loss is 0.37329,source_cla_loss is 0.09890,cda_loss is 0.57672,mda_loss is 1.83196

Epoch62, train_loss is 0.03630,test_loss is 3.75333, train_accuracy is 0.99850,test_accuracy is 0.36125,train_all_loss is 2.06062,target_cla_loss is 0.33665,source_cla_loss is 0.09386,cda_loss is 0.58990,mda_loss is 1.87410

Epoch63, train_loss is 0.04550,test_loss is 3.58529, train_accuracy is 0.99850,test_accuracy is 0.35700,train_all_loss is 2.03326,target_cla_loss is 0.37309,source_cla_loss is 0.11314,cda_loss is 0.55576,mda_loss is 1.82723

Epoch64, train_loss is 0.03636,test_loss is 3.52662, train_accuracy is 0.99900,test_accuracy is 0.36075,train_all_loss is 1.94026,target_cla_loss is 0.39102,source_cla_loss is 0.09922,cda_loss is 0.58333,mda_loss is 1.74360

Epoch65, train_loss is 0.03628,test_loss is 3.86440, train_accuracy is 0.99850,test_accuracy is 0.35875,train_all_loss is 1.99657,target_cla_loss is 0.32306,source_cla_loss is 0.09738,cda_loss is 0.56515,mda_loss is 1.81036

Epoch66, train_loss is 0.04234,test_loss is 3.57190, train_accuracy is 0.99775,test_accuracy is 0.36675,train_all_loss is 1.92315,target_cla_loss is 0.36443,source_cla_loss is 0.10319,cda_loss is 0.55765,mda_loss is 1.72775

Epoch67, train_loss is 0.03892,test_loss is 3.82084, train_accuracy is 0.99650,test_accuracy is 0.35025,train_all_loss is 1.97754,target_cla_loss is 0.32409,source_cla_loss is 0.10061,cda_loss is 0.57595,mda_loss is 1.78693

Epoch68, train_loss is 0.04147,test_loss is 3.64863, train_accuracy is 0.99775,test_accuracy is 0.35175,train_all_loss is 2.04404,target_cla_loss is 0.33164,source_cla_loss is 0.09546,cda_loss is 0.55667,mda_loss is 1.85975

Epoch69, train_loss is 0.03997,test_loss is 3.96786, train_accuracy is 0.99550,test_accuracy is 0.35550,train_all_loss is 1.87499,target_cla_loss is 0.33787,source_cla_loss is 0.09773,cda_loss is 0.55921,mda_loss is 1.68755

Epoch70, train_loss is 0.03543,test_loss is 3.93736, train_accuracy is 0.99975,test_accuracy is 0.35850,train_all_loss is 1.96610,target_cla_loss is 0.34311,source_cla_loss is 0.08989,cda_loss is 0.56598,mda_loss is 1.78530

Epoch71, train_loss is 0.03870,test_loss is 4.00044, train_accuracy is 0.99825,test_accuracy is 0.34775,train_all_loss is 1.97678,target_cla_loss is 0.33252,source_cla_loss is 0.10044,cda_loss is 0.55977,mda_loss is 1.78711

Epoch72, train_loss is 0.03661,test_loss is 4.20446, train_accuracy is 0.99850,test_accuracy is 0.33975,train_all_loss is 1.91947,target_cla_loss is 0.32661,source_cla_loss is 0.09100,cda_loss is 0.55292,mda_loss is 1.74052

Epoch73, train_loss is 0.03299,test_loss is 4.03290, train_accuracy is 0.99975,test_accuracy is 0.35475,train_all_loss is 1.89665,target_cla_loss is 0.32557,source_cla_loss is 0.09065,cda_loss is 0.57111,mda_loss is 1.71633

Epoch74, train_loss is 0.03557,test_loss is 3.74976, train_accuracy is 0.99875,test_accuracy is 0.35050,train_all_loss is 1.94581,target_cla_loss is 0.33794,source_cla_loss is 0.09797,cda_loss is 0.57739,mda_loss is 1.75631

Epoch75, train_loss is 0.03606,test_loss is 3.90655, train_accuracy is 0.99900,test_accuracy is 0.35175,train_all_loss is 1.91040,target_cla_loss is 0.36908,source_cla_loss is 0.09017,cda_loss is 0.56595,mda_loss is 1.72673

Epoch76, train_loss is 0.02996,test_loss is 4.31643, train_accuracy is 0.99850,test_accuracy is 0.35225,train_all_loss is 1.87109,target_cla_loss is 0.35043,source_cla_loss is 0.08601,cda_loss is 0.58625,mda_loss is 1.69141

Epoch77, train_loss is 0.03729,test_loss is 4.11600, train_accuracy is 0.99675,test_accuracy is 0.36625,train_all_loss is 1.98549,target_cla_loss is 0.30353,source_cla_loss is 0.09079,cda_loss is 0.59306,mda_loss is 1.80504

Epoch78, train_loss is 0.03488,test_loss is 4.00549, train_accuracy is 0.99900,test_accuracy is 0.36025,train_all_loss is 1.94518,target_cla_loss is 0.31716,source_cla_loss is 0.09354,cda_loss is 0.56666,mda_loss is 1.76326

Epoch79, train_loss is 0.03735,test_loss is 3.74691, train_accuracy is 0.99775,test_accuracy is 0.36200,train_all_loss is 1.98189,target_cla_loss is 0.36152,source_cla_loss is 0.10226,cda_loss is 0.55654,mda_loss is 1.78782

Epoch80, train_loss is 0.03132,test_loss is 3.66145, train_accuracy is 0.99925,test_accuracy is 0.37600,train_all_loss is 1.92512,target_cla_loss is 0.32335,source_cla_loss is 0.09147,cda_loss is 0.55130,mda_loss is 1.74618

Epoch81, train_loss is 0.04329,test_loss is 3.67632, train_accuracy is 0.99775,test_accuracy is 0.36875,train_all_loss is 2.01323,target_cla_loss is 0.36775,source_cla_loss is 0.12122,cda_loss is 0.53975,mda_loss is 1.80126

Epoch82, train_loss is 0.03796,test_loss is 3.88163, train_accuracy is 0.99800,test_accuracy is 0.36575,train_all_loss is 1.94328,target_cla_loss is 0.34789,source_cla_loss is 0.09737,cda_loss is 0.53522,mda_loss is 1.75760

Epoch83, train_loss is 0.03361,test_loss is 3.93112, train_accuracy is 0.99850,test_accuracy is 0.35125,train_all_loss is 1.94797,target_cla_loss is 0.34964,source_cla_loss is 0.09108,cda_loss is 0.56788,mda_loss is 1.76514

Epoch84, train_loss is 0.03604,test_loss is 3.92195, train_accuracy is 0.99900,test_accuracy is 0.37450,train_all_loss is 1.89947,target_cla_loss is 0.35758,source_cla_loss is 0.09633,cda_loss is 0.56305,mda_loss is 1.71108

Epoch85, train_loss is 0.03087,test_loss is 3.79357, train_accuracy is 0.99850,test_accuracy is 0.37225,train_all_loss is 1.93883,target_cla_loss is 0.33742,source_cla_loss is 0.08896,cda_loss is 0.57650,mda_loss is 1.75848

Epoch86, train_loss is 0.03754,test_loss is 3.96986, train_accuracy is 0.99875,test_accuracy is 0.36800,train_all_loss is 1.87951,target_cla_loss is 0.33196,source_cla_loss is 0.10165,cda_loss is 0.56499,mda_loss is 1.68817

Epoch87, train_loss is 0.03479,test_loss is 4.27059, train_accuracy is 0.99875,test_accuracy is 0.36100,train_all_loss is 1.86776,target_cla_loss is 0.34986,source_cla_loss is 0.10001,cda_loss is 0.55966,mda_loss is 1.67679

Epoch88, train_loss is 0.03385,test_loss is 4.07302, train_accuracy is 0.99900,test_accuracy is 0.36325,train_all_loss is 1.98173,target_cla_loss is 0.32548,source_cla_loss is 0.08992,cda_loss is 0.56596,mda_loss is 1.80266

Epoch89, train_loss is 0.03606,test_loss is 3.76652, train_accuracy is 0.99825,test_accuracy is 0.36950,train_all_loss is 1.99725,target_cla_loss is 0.36634,source_cla_loss is 0.10286,cda_loss is 0.54637,mda_loss is 1.80312

Epoch90, train_loss is 0.03380,test_loss is 3.84020, train_accuracy is 0.99900,test_accuracy is 0.36500,train_all_loss is 1.91125,target_cla_loss is 0.31314,source_cla_loss is 0.08440,cda_loss is 0.53862,mda_loss is 1.74168

Epoch91, train_loss is 0.03329,test_loss is 3.78597, train_accuracy is 0.99875,test_accuracy is 0.36275,train_all_loss is 1.84015,target_cla_loss is 0.35008,source_cla_loss is 0.08478,cda_loss is 0.53946,mda_loss is 1.66642

Epoch92, train_loss is 0.03170,test_loss is 3.90322, train_accuracy is 0.99925,test_accuracy is 0.36850,train_all_loss is 1.84773,target_cla_loss is 0.31886,source_cla_loss is 0.07877,cda_loss is 0.55678,mda_loss is 1.68140

Epoch93, train_loss is 0.02658,test_loss is 4.19532, train_accuracy is 0.99925,test_accuracy is 0.36575,train_all_loss is 1.83341,target_cla_loss is 0.28239,source_cla_loss is 0.06854,cda_loss is 0.56702,mda_loss is 1.67993

Epoch94, train_loss is 0.02931,test_loss is 4.24633, train_accuracy is 0.99950,test_accuracy is 0.36750,train_all_loss is 1.84162,target_cla_loss is 0.28099,source_cla_loss is 0.08070,cda_loss is 0.54899,mda_loss is 1.67792

Epoch95, train_loss is 0.03792,test_loss is 4.27938, train_accuracy is 0.99750,test_accuracy is 0.37175,train_all_loss is 1.89878,target_cla_loss is 0.30210,source_cla_loss is 0.10248,cda_loss is 0.54445,mda_loss is 1.71165

Epoch96, train_loss is 0.02876,test_loss is 3.81267, train_accuracy is 0.99925,test_accuracy is 0.37500,train_all_loss is 1.88203,target_cla_loss is 0.32059,source_cla_loss is 0.07481,cda_loss is 0.55576,mda_loss is 1.71959

Epoch97, train_loss is 0.03724,test_loss is 3.74088, train_accuracy is 0.99775,test_accuracy is 0.37925,train_all_loss is 1.93946,target_cla_loss is 0.33789,source_cla_loss is 0.10656,cda_loss is 0.56191,mda_loss is 1.74292

Epoch98, train_loss is 0.03961,test_loss is 4.07593, train_accuracy is 0.99600,test_accuracy is 0.36750,train_all_loss is 1.90211,target_cla_loss is 0.31981,source_cla_loss is 0.10458,cda_loss is 0.57807,mda_loss is 1.70774

Epoch99, train_loss is 0.02847,test_loss is 4.25489, train_accuracy is 0.99975,test_accuracy is 0.35350,train_all_loss is 1.84025,target_cla_loss is 0.30272,source_cla_loss is 0.07335,cda_loss is 0.59257,mda_loss is 1.67737

Epoch100, train_loss is 0.02855,test_loss is 4.00182, train_accuracy is 0.99850,test_accuracy is 0.36100,train_all_loss is 1.82983,target_cla_loss is 0.28872,source_cla_loss is 0.07271,cda_loss is 0.56430,mda_loss is 1.67182

The author holds a Doctor of Engineering degree, serves as a reviewer for Mechanical Systems and Signal Processing, was named an outstanding reviewer for the Proceedings of the CSEE, and reviews for EI-indexed journals including Control and Decision, Systems Engineering and Electronics, Power System Protection and Control, and the Journal of Astronautics.

Areas of expertise: modern signal processing, machine learning, deep learning, digital twins, time series analysis, equipment defect detection, equipment anomaly detection, and intelligent fault diagnosis and prognostics and health management (PHM).
