Week T4: Monkeypox Recognition with TensorFlow

Objective

Accurately identify images of monkeypox cases.
Implementation

(1) Environment

Language: Python 3.10
IDE: PyCharm
Framework: TensorFlow

(2) Steps:

1. Use the GPU
# Imports used throughout this script
import pathlib
import PIL.Image
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow.keras import layers, models

# Use the GPU
gpus = tf.config.list_physical_devices("GPU")
if gpus:
    gpu0 = gpus[0]
    tf.config.experimental.set_memory_growth(gpu0, True)  # let GPU memory grow on demand
    tf.config.set_visible_devices([gpu0], "GPU")

print(gpus)

[PhysicalDevice(name='/physical_device:GPU:0', device_type='GPU')]
2. Load the monkeypox image data and inspect it
# Take a quick look at the data
data_dir = "./datasets/mp/"
data_dir = pathlib.Path(data_dir)  # convert to a Path object for easier access later
image_count = len(list(data_dir.glob('*/*.jpg')))   # all .jpg images under data_dir (including subdirectories)
print("Total number of images:", image_count)
MonkeyPox = list(data_dir.glob('MonkeyPox/*.jpg'))  # all .jpg images in the MonkeyPox subdirectory
print("Number of monkeypox images:", len(MonkeyPox))
# print(MonkeyPox[1])
img = PIL.Image.open(MonkeyPox[1])    # open one monkeypox image to see what it looks like
img.show()

Total number of images: 2142
Number of monkeypox images: 980
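
The Others class can be counted the same way. A small sketch following the pattern above (assuming the subdirectory is named Others, as the class list later confirms; the two counts should add up to 2142):

Others = list(data_dir.glob('Others/*.jpg'))   # all .jpg images in the Others subdirectory
print("Number of other images:", len(Others))  # expected: 2142 - 980 = 1162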
3. Preprocess and load the data
# Preprocess the data and load it into a dataset
batch_size = 32
img_width = 224
img_height = 224

train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    directory=data_dir,     # directory containing the images; with labels="inferred" it must contain class subdirectories
    labels="inferred",      # default "inferred" derives labels from the directory structure; None means no labels, or pass a list/tuple of labels
    validation_split=0.2,   # fraction (0-1) of the data reserved for validation; here 20% is held out
    subset="training",      # which subset to return: "training", "validation", or "both"; here only the training subset
    shuffle=True,           # shuffle the data (default True); if False, files are sorted in alphanumeric order
    seed=123,               # random seed for the shuffle; keeping it fixed makes the split reproducible
    image_size=(img_height, img_width),     # resize the images; defaults to (256, 256) if not set
    batch_size=batch_size   # batch size, default 32; None means no batching
)

Found 2142 files belonging to 2 classes.
Using 1714 files for training.

val_ds = tf.keras.preprocessing.image_dataset_from_directory(  
    directory=data_dir,  
    validation_split=0.2,  
    subset="validation",  
    seed=123,  
    image_size=(img_height, img_width),  
    batch_size=batch_size  
)

Found 2142 files belonging to 2 classes.
Using 428 files for validation.

# Check the class names
class_names = train_ds.class_names
print("Classes:", class_names)

Classes: ['Monkeypox', 'Others']
4. Visualize the data
# Visualize the data
plt.figure(figsize=(20, 5))
for images, labels in train_ds.take(1):  # take one element from train_ds
    print(images.shape)     # the dataset is batched (batch_size above), so one element holds 32 images
    for i in range(20):
        ax = plt.subplot(2, 10, i + 1)  # 2 rows x 10 columns, subplot indices start at 1
        plt.imshow(images[i].numpy().astype("uint8"))   # imshow expects an array
        plt.title(class_names[labels[i]])   # class name as the title
        plt.axis("off")     # hide the axes and labels
plt.show()
5. Check the data shapes
# Check the batch shapes
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

(32, 224, 224, 3)
(32,)
6. Configure the dataset for performance
# Configure the dataset for performance
AUTOTUNE = tf.data.AUTOTUNE
train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)
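
As a quick check that cache() and prefetch() actually pay off, one could time a full pass over the dataset. A minimal sketch (the absolute numbers vary by machine, and the first pass also pays the cost of filling the cache):

import time

def time_one_pass(ds):
    # iterate the dataset once and measure wall-clock time
    start = time.perf_counter()
    for _ in ds:
        pass
    return time.perf_counter() - start

print("Seconds for the first pass over train_ds:", time_one_pass(train_ds))
print("Seconds for a second (cached) pass:", time_one_pass(train_ds))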
7. Build the CNN model
num_classes = len(class_names)
model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),  # rescale pixels to [0, 1]; the input shape is set here

    layers.Conv2D(16, (3, 3), activation='relu'),  # convolution layer 1, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # convolution layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation='relu'),  # convolution layer 3, 3x3 kernels
    layers.Dropout(0.3),

    layers.Flatten(),                       # flatten, bridging the conv layers and the dense layers
    layers.Dense(128, activation='relu'),   # fully connected layer for further feature extraction
    layers.Dense(num_classes)               # output layer (one raw logit per class)
])

model.summary()  # print the network structure
8. Compile the model
# Set the optimizer
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])
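
Because the final Dense layer outputs raw logits (no softmax), the loss is built with from_logits=True. If class probabilities are wanted later (typically after training), a softmax can be applied to the model output. A minimal sketch using one batch from val_ds:

# take one validation batch and turn logits into probabilities
sample_images, sample_labels = next(iter(val_ds))
logits = model(sample_images, training=False)   # raw scores, shape (batch, 2)
probs = tf.nn.softmax(logits, axis=-1)          # per-image probabilities summing to 1
pred_labels = tf.argmax(probs, axis=-1)         # predicted class indices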
9. Train the model
# Train the model
from tensorflow.keras.callbacks import ModelCheckpoint
epochs = 50
checkpointer = ModelCheckpoint('./models/Monkeypox_best_model.h5',  # path where the model is saved
                               monitor='val_accuracy',  # value to monitor
                               verbose=1,   # verbosity mode
                               save_best_only=True,     # save only when the monitored value improves on the previous best
                               save_weights_only=True   # save only the model weights
                               )
history = model.fit(
    train_ds,
    validation_data=val_ds,
    epochs=epochs,
    callbacks=[checkpointer]
)

The full run is shown below. After every epoch, the callback checks whether the monitored value (val_accuracy) has improved to decide whether to save that epoch's weights; most of the early epochs improve and are saved, while later epochs without improvement are skipped:

Epoch 1/50
2024-10-07 13:20:42.878241: I tensorflow/stream_executor/cuda/cuda_dnn.cc:384] Loaded cuDNN version 8101
2024-10-07 13:20:44.358266: W tensorflow/stream_executor/gpu/redzone_allocator.cc:314] INTERNAL: ptxas exited with non-zero error code -1, output: 
Relying on driver to perform ptx compilation. 
Modify $PATH to customize ptxas location.
This message will be only logged once.
2024-10-07 13:20:45.626487: I tensorflow/stream_executor/cuda/cuda_blas.cc:1614] TensorFloat-32 will be used for the matrix multiplication. This will only be logged once.
52/54 [===========================>..] - ETA: 0s - loss: 0.7343 - accuracy: 0.5200
Epoch 1: val_accuracy improved from -inf to 0.64720, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 8s 36ms/step - loss: 0.7324 - accuracy: 0.5216 - val_loss: 0.6843 - val_accuracy: 0.6472
Epoch 2/50
52/54 [===========================>..] - ETA: 0s - loss: 0.6673 - accuracy: 0.5812
Epoch 2: val_accuracy did not improve from 0.64720
54/54 [==============================] - 1s 26ms/step - loss: 0.6667 - accuracy: 0.5846 - val_loss: 0.6560 - val_accuracy: 0.5911
Epoch 3/50
52/54 [===========================>..] - ETA: 0s - loss: 0.6356 - accuracy: 0.6466
Epoch 3: val_accuracy improved from 0.64720 to 0.67290, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 28ms/step - loss: 0.6373 - accuracy: 0.6435 - val_loss: 0.6179 - val_accuracy: 0.6729
Epoch 4/50
54/54 [==============================] - ETA: 0s - loss: 0.6140 - accuracy: 0.6744
Epoch 4: val_accuracy improved from 0.67290 to 0.69159, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 28ms/step - loss: 0.6140 - accuracy: 0.6744 - val_loss: 0.5860 - val_accuracy: 0.6916
Epoch 5/50
52/54 [===========================>..] - ETA: 0s - loss: 0.5892 - accuracy: 0.6921
Epoch 5: val_accuracy improved from 0.69159 to 0.71729, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 28ms/step - loss: 0.5913 - accuracy: 0.6902 - val_loss: 0.5633 - val_accuracy: 0.7173
Epoch 6/50
52/54 [===========================>..] - ETA: 0s - loss: 0.5391 - accuracy: 0.7345
Epoch 6: val_accuracy improved from 0.71729 to 0.74065, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 28ms/step - loss: 0.5373 - accuracy: 0.7375 - val_loss: 0.5284 - val_accuracy: 0.7407
Epoch 7/50
52/54 [===========================>..] - ETA: 0s - loss: 0.4945 - accuracy: 0.7539
Epoch 7: val_accuracy improved from 0.74065 to 0.76636, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 28ms/step - loss: 0.4927 - accuracy: 0.7567 - val_loss: 0.4725 - val_accuracy: 0.7664
Epoch 8/50
54/54 [==============================] - ETA: 0s - loss: 0.4634 - accuracy: 0.7812
Epoch 8: val_accuracy improved from 0.76636 to 0.77336, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 29ms/step - loss: 0.4634 - accuracy: 0.7812 - val_loss: 0.4631 - val_accuracy: 0.7734
Epoch 9/50
52/54 [===========================>..] - ETA: 0s - loss: 0.4212 - accuracy: 0.8012
Epoch 9: val_accuracy improved from 0.77336 to 0.80841, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 28ms/step - loss: 0.4221 - accuracy: 0.8016 - val_loss: 0.4328 - val_accuracy: 0.8084
Epoch 10/50
54/54 [==============================] - ETA: 0s - loss: 0.3982 - accuracy: 0.8320
Epoch 10: val_accuracy did not improve from 0.80841
54/54 [==============================] - 1s 26ms/step - loss: 0.3982 - accuracy: 0.8320 - val_loss: 0.4337 - val_accuracy: 0.8037
Epoch 11/50
53/54 [============================>.] - ETA: 0s - loss: 0.3518 - accuracy: 0.8543
Epoch 11: val_accuracy did not improve from 0.80841
54/54 [==============================] - 2s 28ms/step - loss: 0.3512 - accuracy: 0.8547 - val_loss: 0.4281 - val_accuracy: 0.7967
Epoch 12/50
54/54 [==============================] - ETA: 0s - loss: 0.3193 - accuracy: 0.8804
Epoch 12: val_accuracy improved from 0.80841 to 0.82710, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 33ms/step - loss: 0.3193 - accuracy: 0.8804 - val_loss: 0.4022 - val_accuracy: 0.8271
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.2932 - accuracy: 0.8915
Epoch 13: val_accuracy did not improve from 0.82710
54/54 [==============================] - 2s 30ms/step - loss: 0.2932 - accuracy: 0.8915 - val_loss: 0.3839 - val_accuracy: 0.8178
Epoch 14/50
54/54 [==============================] - ETA: 0s - loss: 0.2712 - accuracy: 0.8979
Epoch 14: val_accuracy improved from 0.82710 to 0.84346, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 33ms/step - loss: 0.2712 - accuracy: 0.8979 - val_loss: 0.3866 - val_accuracy: 0.8435
Epoch 15/50
54/54 [==============================] - ETA: 0s - loss: 0.2556 - accuracy: 0.8985
Epoch 15: val_accuracy improved from 0.84346 to 0.85514, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.2556 - accuracy: 0.8985 - val_loss: 0.3688 - val_accuracy: 0.8551
Epoch 16/50
54/54 [==============================] - ETA: 0s - loss: 0.2380 - accuracy: 0.9014
Epoch 16: val_accuracy did not improve from 0.85514
54/54 [==============================] - 2s 29ms/step - loss: 0.2380 - accuracy: 0.9014 - val_loss: 0.3657 - val_accuracy: 0.8505
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.2155 - accuracy: 0.9189
Epoch 17: val_accuracy did not improve from 0.85514
54/54 [==============================] - 2s 31ms/step - loss: 0.2155 - accuracy: 0.9189 - val_loss: 0.3662 - val_accuracy: 0.8435
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.2019 - accuracy: 0.9230
Epoch 18: val_accuracy did not improve from 0.85514
54/54 [==============================] - 2s 29ms/step - loss: 0.2019 - accuracy: 0.9230 - val_loss: 0.4061 - val_accuracy: 0.8388
Epoch 19/50
54/54 [==============================] - ETA: 0s - loss: 0.1832 - accuracy: 0.9294
Epoch 19: val_accuracy did not improve from 0.85514
54/54 [==============================] - 2s 29ms/step - loss: 0.1832 - accuracy: 0.9294 - val_loss: 0.4042 - val_accuracy: 0.8341
Epoch 20/50
53/54 [============================>.] - ETA: 0s - loss: 0.1685 - accuracy: 0.9370
Epoch 20: val_accuracy improved from 0.85514 to 0.86916, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 31ms/step - loss: 0.1700 - accuracy: 0.9347 - val_loss: 0.3639 - val_accuracy: 0.8692
Epoch 21/50
54/54 [==============================] - ETA: 0s - loss: 0.1757 - accuracy: 0.9364
Epoch 21: val_accuracy did not improve from 0.86916
54/54 [==============================] - 2s 29ms/step - loss: 0.1757 - accuracy: 0.9364 - val_loss: 0.3550 - val_accuracy: 0.8621
Epoch 22/50
54/54 [==============================] - ETA: 0s - loss: 0.1433 - accuracy: 0.9469
Epoch 22: val_accuracy did not improve from 0.86916
54/54 [==============================] - 2s 29ms/step - loss: 0.1433 - accuracy: 0.9469 - val_loss: 0.3699 - val_accuracy: 0.8668
Epoch 23/50
53/54 [============================>.] - ETA: 0s - loss: 0.1362 - accuracy: 0.9501
Epoch 23: val_accuracy did not improve from 0.86916
54/54 [==============================] - 2s 29ms/step - loss: 0.1373 - accuracy: 0.9504 - val_loss: 0.3753 - val_accuracy: 0.8505
Epoch 24/50
53/54 [============================>.] - ETA: 0s - loss: 0.1232 - accuracy: 0.9578
Epoch 24: val_accuracy improved from 0.86916 to 0.87383, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 31ms/step - loss: 0.1224 - accuracy: 0.9586 - val_loss: 0.3653 - val_accuracy: 0.8738
Epoch 25/50
53/54 [============================>.] - ETA: 0s - loss: 0.1109 - accuracy: 0.9670
Epoch 25: val_accuracy did not improve from 0.87383
54/54 [==============================] - 2s 29ms/step - loss: 0.1105 - accuracy: 0.9673 - val_loss: 0.3940 - val_accuracy: 0.8668
Epoch 26/50
54/54 [==============================] - ETA: 0s - loss: 0.1088 - accuracy: 0.9667
Epoch 26: val_accuracy did not improve from 0.87383
54/54 [==============================] - 2s 29ms/step - loss: 0.1088 - accuracy: 0.9667 - val_loss: 0.3898 - val_accuracy: 0.8645
Epoch 27/50
54/54 [==============================] - ETA: 0s - loss: 0.1062 - accuracy: 0.9650
Epoch 27: val_accuracy improved from 0.87383 to 0.88084, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.1062 - accuracy: 0.9650 - val_loss: 0.3874 - val_accuracy: 0.8808
Epoch 28/50
54/54 [==============================] - ETA: 0s - loss: 0.0984 - accuracy: 0.9697
Epoch 28: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 29ms/step - loss: 0.0984 - accuracy: 0.9697 - val_loss: 0.3873 - val_accuracy: 0.8785
Epoch 29/50
54/54 [==============================] - ETA: 0s - loss: 0.0879 - accuracy: 0.9726
Epoch 29: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 29ms/step - loss: 0.0879 - accuracy: 0.9726 - val_loss: 0.4120 - val_accuracy: 0.8738
Epoch 30/50
54/54 [==============================] - ETA: 0s - loss: 0.1058 - accuracy: 0.9650
Epoch 30: val_accuracy did not improve from 0.88084
54/54 [==============================] - 2s 30ms/step - loss: 0.1058 - accuracy: 0.9650 - val_loss: 0.3867 - val_accuracy: 0.8715
Epoch 31/50
54/54 [==============================] - ETA: 0s - loss: 0.0714 - accuracy: 0.9790
Epoch 31: val_accuracy improved from 0.88084 to 0.88318, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.0714 - accuracy: 0.9790 - val_loss: 0.4207 - val_accuracy: 0.8832
Epoch 32/50
54/54 [==============================] - ETA: 0s - loss: 0.0741 - accuracy: 0.9813
Epoch 32: val_accuracy did not improve from 0.88318
54/54 [==============================] - 2s 30ms/step - loss: 0.0741 - accuracy: 0.9813 - val_loss: 0.4050 - val_accuracy: 0.8762
Epoch 33/50
54/54 [==============================] - ETA: 0s - loss: 0.0589 - accuracy: 0.9837
Epoch 33: val_accuracy did not improve from 0.88318
54/54 [==============================] - 2s 29ms/step - loss: 0.0589 - accuracy: 0.9837 - val_loss: 0.4326 - val_accuracy: 0.8668
Epoch 34/50
54/54 [==============================] - ETA: 0s - loss: 0.0508 - accuracy: 0.9895
Epoch 34: val_accuracy improved from 0.88318 to 0.88785, saving model to ./models\Monkeypox_best_model.h5
54/54 [==============================] - 2s 32ms/step - loss: 0.0508 - accuracy: 0.9895 - val_loss: 0.4585 - val_accuracy: 0.8879
Epoch 35/50
54/54 [==============================] - ETA: 0s - loss: 0.0496 - accuracy: 0.9907
Epoch 35: val_accuracy did not improve from 0.88785
54/54 [==============================] - 2s 30ms/step - loss: 0.0496 - accuracy: 0.9907 - val_loss: 0.4816 - val_accuracy: 0.8692
Epoch 36/50
54/54 [==============================] - ETA: 0s - loss: 0.0807 - accuracy: 0.9737
Epoch 36: val_accuracy did not improve from 0.88785
54/54 [==============================] - 2s 29ms/step - loss: 0.0807 - accuracy: 0.9737 - val_loss: 0.4706 - val_accuracy: 0.8621
Epoch 37/50
54/54 [==============================] - ETA: 0s - loss: 0.0688 - accuracy: 0.9755
Epoch 37: val_accuracy did not improve from 0.88785
54/54 [==============================] - 2s 30ms/step - loss: 0.0688 - accuracy: 0.9755 - val_loss: 0.4468 - val_accuracy: 0.8715
Epoch 38/50
54/54 [==============================] - ETA: 0s - loss: 0.0541 - accuracy: 0.9825
Epoch 38: val_accuracy did not improve from 0.88785
54/54 [==============================] - 2s 29ms/step - loss: 0.0541 - accuracy: 0.9825 - val_loss: 0.4552 - val_accuracy: 0.8621
Epoch 39/50
53/54 [============================>.] - ETA: 0s - loss: 0.0497 - accuracy: 0.9881
Epoch 39: val_accuracy did not improve from 0.88785
54/54 [==============================] - 2s 29ms/step - loss: 0.0491 - accuracy: 0.9883 - val_loss: 0.4660 - val_accuracy: 0.8832
Epoch 40/50
53/54 [============================>.] - ETA: 0s - loss: 0.0422 - accuracy: 0.9863
Epoch 40: val_accuracy did not improve from 0.88785
54/54 [==============================] - 2s 28ms/step - loss: 0.0448 - accuracy: 0.9854 - val_loss: 0.4934 - val_accuracy: 0.8808
Epoch 41/50
52/54 [===========================>..] - ETA: 0s - loss: 0.0496 - accuracy: 0.9861
Epoch 41: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 26ms/step - loss: 0.0499 - accuracy: 0.9854 - val_loss: 0.4625 - val_accuracy: 0.8715
Epoch 42/50
52/54 [===========================>..] - ETA: 0s - loss: 0.0373 - accuracy: 0.9927
Epoch 42: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 27ms/step - loss: 0.0370 - accuracy: 0.9930 - val_loss: 0.4864 - val_accuracy: 0.8621
Epoch 43/50
53/54 [============================>.] - ETA: 0s - loss: 0.0352 - accuracy: 0.9911
Epoch 43: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 27ms/step - loss: 0.0354 - accuracy: 0.9907 - val_loss: 0.5226 - val_accuracy: 0.8762
Epoch 44/50
54/54 [==============================] - ETA: 0s - loss: 0.0314 - accuracy: 0.9924
Epoch 44: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 26ms/step - loss: 0.0314 - accuracy: 0.9924 - val_loss: 0.5197 - val_accuracy: 0.8668
Epoch 45/50
53/54 [============================>.] - ETA: 0s - loss: 0.0361 - accuracy: 0.9911
Epoch 45: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 27ms/step - loss: 0.0356 - accuracy: 0.9912 - val_loss: 0.5102 - val_accuracy: 0.8692
Epoch 46/50
53/54 [============================>.] - ETA: 0s - loss: 0.0221 - accuracy: 0.9970
Epoch 46: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 27ms/step - loss: 0.0222 - accuracy: 0.9971 - val_loss: 0.5320 - val_accuracy: 0.8692
Epoch 47/50
54/54 [==============================] - ETA: 0s - loss: 0.0310 - accuracy: 0.9942
Epoch 47: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 26ms/step - loss: 0.0310 - accuracy: 0.9942 - val_loss: 0.5445 - val_accuracy: 0.8645
Epoch 48/50
54/54 [==============================] - ETA: 0s - loss: 0.0263 - accuracy: 0.9930
Epoch 48: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 26ms/step - loss: 0.0263 - accuracy: 0.9930 - val_loss: 0.5158 - val_accuracy: 0.8785
Epoch 49/50
53/54 [============================>.] - ETA: 0s - loss: 0.0262 - accuracy: 0.9929
Epoch 49: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 26ms/step - loss: 0.0260 - accuracy: 0.9930 - val_loss: 0.5357 - val_accuracy: 0.8785
Epoch 50/50
54/54 [==============================] - ETA: 0s - loss: 0.0209 - accuracy: 0.9947
Epoch 50: val_accuracy did not improve from 0.88785
54/54 [==============================] - 1s 26ms/step - loss: 0.0209 - accuracy: 0.9947 - val_loss: 0.5442 - val_accuracy: 0.8645
10. Evaluate the model
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
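
Since ModelCheckpoint kept only the best weights, it can also be worth reloading them and confirming the validation accuracy. A minimal sketch (assuming numpy is imported as np):

import numpy as np

best_epoch = int(np.argmax(history.history['val_accuracy'])) + 1
print("Best epoch by val_accuracy:", best_epoch)

model.load_weights('./models/Monkeypox_best_model.h5')  # restore the best weights into the same architecture
val_loss, val_acc = model.evaluate(val_ds, verbose=0)
print("Validation accuracy with the best weights:", val_acc)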
11. Predict on a specific image

Let's try this in a new Python file:

#!/usr/bin/python
# @Project  :Tensorflow
# @Name     : T4-verify.py
# @Time     :2024/10/7 13:39
# @Email    : changdefeng06@gmail.com
# @Author   : idefeng
import tensorflow as tf
from PIL import Image
import numpy as np

# Pick an image
img = Image.open('./datasets/mp/Monkeypox/M01_01_04.jpg')
img.show()
image = tf.keras.utils.load_img('./datasets/mp/Monkeypox/M01_01_04.jpg', target_size=(224, 224))
image_array = tf.keras.utils.img_to_array(image)  # convert the PIL object to a numpy array

image_array = tf.expand_dims(image_array, 0)

class_name = ['Monkeypox', 'Others']
models = tf.keras.models.load_model('./models/Monkeypox_best_model.h5')
predictions = models.predict(image_array)
print("Prediction:", class_name[np.argmax(predictions)])

Running it throws an error:

ValueError: No model config found in the file at <tensorflow.python.platform.gfile.GFile object at 0x000001E8FB5B6D30>.

Could it be that the "Monkeypox_best_model.h5" we saved above is not a model file? Recall that one of the ModelCheckpoint arguments was save_weights_only=True, meaning only the weights were saved. What does that imply? As described in "Save and load models | TensorFlow Core":

A complete model consists of three parts: the architecture, the weights, and the training configuration. Above we saved only the weights, so this file is clearly not the model itself, which is where the error comes from. Two ways to fix it:

  1. Rebuild the CNN in the new file, with exactly the same structure as the model whose weights were saved, and then call load_weights on the rebuilt model to restore the trained weights.

  2. Save a complete model in the training script above (a sketch of this approach follows after the code below).
    Let's try the first approach:

    #!/usr/bin/python
    # @Project  :Tensorflow
    # @Name     : T4-verify.py
    # @Time     :2024/10/7 13:39
    # @Email    : changdefeng06@gmail.com
    # @Author   : idefeng
    import tensorflow as tf
    from tensorflow.keras import models, layers
    from PIL import Image
    import numpy as np

    # Pick an image
    img = Image.open('./datasets/mp/Monkeypox/M01_01_04.jpg')
    img.show()
    image = tf.keras.utils.load_img('./datasets/mp/Monkeypox/M01_01_04.jpg', target_size=(224, 224))
    image_array = tf.keras.utils.img_to_array(image)  # convert the PIL object to a numpy array

    image_array = tf.expand_dims(image_array, 0)

    class_names = ['Monkeypox', 'Others']

    # Rebuild the same architecture as the trained model
    model = models.Sequential([
        layers.experimental.preprocessing.Rescaling(1. / 255, input_shape=(224, 224, 3)),

        layers.Conv2D(16, (3, 3), activation='relu'),  # convolution layer 1, 3x3 kernels
        layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
        layers.Conv2D(32, (3, 3), activation='relu'),  # convolution layer 2, 3x3 kernels
        layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
        layers.Dropout(0.3),
        layers.Conv2D(64, (3, 3), activation='relu'),  # convolution layer 3, 3x3 kernels
        layers.Dropout(0.3),

        layers.Flatten(),                       # flatten, bridging the conv layers and the dense layers
        layers.Dense(128, activation='relu'),   # fully connected layer for further feature extraction
        layers.Dense(len(class_names))          # output layer (one raw logit per class)
    ])
    model.load_weights('./models/Monkeypox_best_model.h5')  # load the trained weights

    predictions = model.predict(image_array)
    print("Prediction:", class_names[np.argmax(predictions)])

Try predicting with a different image:

(3) Summary

  1. Datasets can come in many formats: NumPy arrays, text, CSV data, image files on disk, and so on (a small sketch follows below).
  2. Configuring the dataset pipeline for speed, i.e. making better use of CPU time.
  3. The elements of a saved model: architecture, weights, and configuration. Saving only the weights is not the model itself.
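
For item 1, a dataset can for example be built directly from in-memory NumPy arrays rather than from image files. A minimal sketch (the array contents here are made up purely for illustration):

import numpy as np
import tensorflow as tf

# build a tf.data.Dataset from in-memory NumPy arrays instead of files on disk
features = np.random.rand(8, 4).astype("float32")
labels = np.random.randint(0, 2, size=(8,))
ds = tf.data.Dataset.from_tensor_slices((features, labels)).batch(4)
for x, y in ds:
    print(x.shape, y.shape)   # (4, 4) (4,)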