T4 TensorFlow Hands-On: Monkeypox Image Recognition

I. Preliminary Preparation

1. Import the Data

python
import pathlib
import numpy as np
import PIL
from PIL import Image
import matplotlib.pyplot as plt
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers, models
from tensorflow.keras.callbacks import ModelCheckpoint


# load the data
data_dir = './data/4_data/'
data_dir = pathlib.Path(data_dir)

data_paths = list(data_dir.glob('*'))
classeNames = [path.name for path in data_paths]  # use each class folder's name as the label (platform-independent)
classeNames

2. Inspect the Data

python
image_count = len(list(data_dir.glob('*/*.jpg')))

print("图片总数为:",image_count)
python
Monkeypox = list(data_dir.glob('Monkeypox/*.jpg'))
PIL.Image.open(str(Monkeypox[0]))

II. Data Preprocessing

1. Load the Data

How the validation set relates to training:

1. The validation set does not take part in the gradient-descent steps of training; strictly speaking, it never contributes to updating the model's parameters.

2. In a broader sense, however, it does feed into a "manual tuning" loop: after each epoch we look at the model's performance on the validation data to decide whether to stop training early, and we adjust hyperparameters such as the learning rate and batch_size based on how those metrics evolve (see the sketch after this list).

3. In that sense the validation set does influence training, yet it does not cause the model to directly overfit the validation data.
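Point 2 above mentions stopping the run based on validation performance. As a minimal, hypothetical sketch (the training run later in this post keeps all 50 epochs and only uses ModelCheckpoint), early stopping could be wired in with Keras' built-in callback:

python
import tensorflow as tf

# Stop once val_accuracy has not improved for 5 consecutive epochs,
# and roll the weights back to the best epoch seen so far.
early_stop = tf.keras.callbacks.EarlyStopping(
    monitor='val_accuracy',
    patience=5,
    restore_best_weights=True)

# Hypothetical usage alongside the ModelCheckpoint defined later in this post:
# history = model.fit(train_ds, validation_data=val_ds,
#                     epochs=epochs, callbacks=[checkpointer, early_stop])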

python
# Data loading and preprocessing
batch_size = 32
img_height = 224
img_width = 224

"""
For a detailed introduction to image_dataset_from_directory(), see: https://mtyjkh.blog.csdn.net/article/details/117018789
"""
train_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="training",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
python
val_ds = tf.keras.preprocessing.image_dataset_from_directory(
    data_dir,
    validation_split=0.2,
    subset="validation",
    seed=123,
    image_size=(img_height, img_width),
    batch_size=batch_size)
python
class_names = train_ds.class_names
print(class_names)

2. Visualize the Data

python
# Visualize the data
plt.figure(figsize=(20, 10))

for images, labels in train_ds.take(1): # take(1) pulls a single batch from the dataset
    for i in range(20):
        ax = plt.subplot(5, 10, i + 1)

        plt.imshow(images[i].numpy().astype("uint8"))
        plt.title(class_names[labels[i]])
        
        plt.axis("off")
python
# Check the shape of the data
for image_batch, labels_batch in train_ds:
    print(image_batch.shape)
    print(labels_batch.shape)
    break

3. Configure the Dataset

python
AUTOTUNE = tf.data.AUTOTUNE

train_ds = train_ds.cache().shuffle(1000).prefetch(buffer_size=AUTOTUNE)  # cache decoded images, shuffle with a 1000-sample buffer, prefetch batches in the background
val_ds = val_ds.cache().prefetch(buffer_size=AUTOTUNE)                    # validation data is cached and prefetched, but not shuffled

III. Train the Model

1. Build the CNN Model

A convolutional neural network (CNN) takes its input as a tensor of shape (image_height, image_width, color_channels), which carries the image's height, width, and color information; the batch size does not need to be included. color_channels is (R, G, B), corresponding to the three RGB color channels. In this example, the CNN input has shape (224, 224, 3), i.e. a color image, and this shape is passed to the input_shape argument when declaring the first layer.

python
# Define the model architecture
num_classes = 2

"""
If the convolution-kernel arithmetic is unclear, see: https://blog.csdn.net/qq_38251616/article/details/114278995

The layers.Dropout(0.3) layers help prevent overfitting and improve the model's ability to generalize.
In the previous post on flower recognition, the large gap between training accuracy and validation accuracy was caused by exactly this kind of overfitting.

For more on the Dropout layer, see: https://mtyjkh.blog.csdn.net/article/details/115826689
"""

model = models.Sequential([
    layers.experimental.preprocessing.Rescaling(1./255, input_shape=(img_height, img_width, 3)),

    layers.Conv2D(16, (3, 3), activation='relu'),  # conv layer 1, 3x3 kernels (input_shape is already set on the Rescaling layer)
    layers.AveragePooling2D((2, 2)),               # pooling layer 1, 2x2 downsampling
    layers.Conv2D(32, (3, 3), activation='relu'),  # conv layer 2, 3x3 kernels
    layers.AveragePooling2D((2, 2)),               # pooling layer 2, 2x2 downsampling
    layers.Dropout(0.3),
    layers.Conv2D(64, (3, 3), activation='relu'),  # conv layer 3, 3x3 kernels
    layers.Dropout(0.3),

    layers.Flatten(),                       # Flatten layer, bridges the conv layers and the dense layers
    layers.Dense(128, activation='relu'),   # fully connected layer, further feature extraction
    layers.Dense(num_classes)               # output layer, raw logits (one per class)
])

model.summary()  # print the network architecture
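model.summary() reports the output shape of every layer. As a quick cross-check against the input-shape discussion above, the standard output-size formula for a 'valid' convolution or pooling window, O = floor((I - K) / S) + 1, can be traced through the layers; a small sketch (kernel and pool sizes match the model just defined):

python
def conv_out(i, k, s=1):
    # output size of a 'valid' convolution/pooling window: floor((i - k) / s) + 1
    return (i - k) // s + 1

size = 224                      # input height/width
size = conv_out(size, 3)        # Conv2D 3x3           -> 222
size = conv_out(size, 2, s=2)   # AveragePooling2D 2x2 -> 111
size = conv_out(size, 3)        # Conv2D 3x3           -> 109
size = conv_out(size, 2, s=2)   # AveragePooling2D 2x2 -> 54
size = conv_out(size, 3)        # Conv2D 3x3           -> 52
print(size, size * size * 64)   # 52, and 52*52*64 = 173056 units after Flatten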

2. Compile the Model

python
"""
这里设置优化器、损失函数以及metrics
"""
# model.compile()方法用于在配置训练方法时,告知训练时用的优化器、损失函数和准确率评测标准
opt = tf.keras.optimizers.Adam(learning_rate=1e-4)

model.compile(optimizer=opt,
              loss=tf.keras.losses.SparseCategoricalCrossentropy(from_logits=True),
              metrics=['accuracy'])

3. Train the Model

python
epochs = 50

# save the weights to best_model.h5 whenever val_accuracy improves
checkpointer = ModelCheckpoint('best_model.h5',
                               monitor='val_accuracy',
                               verbose=1,
                               save_best_only=True,
                               save_weights_only=True)

history = model.fit(train_ds,
                    validation_data=val_ds,
                    epochs=epochs,
                    callbacks=[checkpointer])
bash
Epoch 1/50
54/54 [==============================] - ETA: 0s - loss: 0.7423 - accuracy: 0.5420
Epoch 1: val_accuracy improved from -inf to 0.54206, saving model to best_model.h5
54/54 [==============================] - 58s 1s/step - loss: 0.7423 - accuracy: 0.5420 - val_loss: 0.6748 - val_accuracy: 0.5421
Epoch 2/50
54/54 [==============================] - ETA: 0s - loss: 0.6608 - accuracy: 0.6039
Epoch 2: val_accuracy improved from 0.54206 to 0.55374, saving model to best_model.h5
54/54 [==============================] - 77s 1s/step - loss: 0.6608 - accuracy: 0.6039 - val_loss: 0.7327 - val_accuracy: 0.5537
Epoch 3/50
54/54 [==============================] - ETA: 0s - loss: 0.6398 - accuracy: 0.6313
Epoch 3: val_accuracy improved from 0.55374 to 0.58645, saving model to best_model.h5
54/54 [==============================] - 114s 2s/step - loss: 0.6398 - accuracy: 0.6313 - val_loss: 0.6823 - val_accuracy: 0.5864
Epoch 4/50
54/54 [==============================] - ETA: 0s - loss: 0.6048 - accuracy: 0.6727
Epoch 4: val_accuracy improved from 0.58645 to 0.62850, saving model to best_model.h5
54/54 [==============================] - 100s 2s/step - loss: 0.6048 - accuracy: 0.6727 - val_loss: 0.6461 - val_accuracy: 0.6285
Epoch 5/50
54/54 [==============================] - ETA: 0s - loss: 0.5744 - accuracy: 0.7118
Epoch 5: val_accuracy improved from 0.62850 to 0.68458, saving model to best_model.h5
54/54 [==============================] - 104s 2s/step - loss: 0.5744 - accuracy: 0.7118 - val_loss: 0.6060 - val_accuracy: 0.6846
Epoch 6/50
54/54 [==============================] - ETA: 0s - loss: 0.5359 - accuracy: 0.7392
Epoch 6: val_accuracy improved from 0.68458 to 0.74533, saving model to best_model.h5
54/54 [==============================] - 91s 2s/step - loss: 0.5359 - accuracy: 0.7392 - val_loss: 0.5220 - val_accuracy: 0.7453
Epoch 7/50
54/54 [==============================] - ETA: 0s - loss: 0.4847 - accuracy: 0.7690
Epoch 7: val_accuracy improved from 0.74533 to 0.78505, saving model to best_model.h5
54/54 [==============================] - 93s 2s/step - loss: 0.4847 - accuracy: 0.7690 - val_loss: 0.4718 - val_accuracy: 0.7850
Epoch 8/50
54/54 [==============================] - ETA: 0s - loss: 0.4519 - accuracy: 0.7888
Epoch 8: val_accuracy did not improve from 0.78505
54/54 [==============================] - 105s 2s/step - loss: 0.4519 - accuracy: 0.7888 - val_loss: 0.4581 - val_accuracy: 0.7804
Epoch 9/50
54/54 [==============================] - ETA: 0s - loss: 0.4004 - accuracy: 0.8267
Epoch 9: val_accuracy did not improve from 0.78505
54/54 [==============================] - 80s 1s/step - loss: 0.4004 - accuracy: 0.8267 - val_loss: 0.4439 - val_accuracy: 0.7710
Epoch 10/50
54/54 [==============================] - ETA: 0s - loss: 0.3667 - accuracy: 0.8431
Epoch 10: val_accuracy improved from 0.78505 to 0.82944, saving model to best_model.h5
54/54 [==============================] - 96s 2s/step - loss: 0.3667 - accuracy: 0.8431 - val_loss: 0.3989 - val_accuracy: 0.8294
Epoch 11/50
54/54 [==============================] - ETA: 0s - loss: 0.3331 - accuracy: 0.8623
Epoch 11: val_accuracy did not improve from 0.82944
54/54 [==============================] - 74s 1s/step - loss: 0.3331 - accuracy: 0.8623 - val_loss: 0.4115 - val_accuracy: 0.8294
Epoch 12/50
54/54 [==============================] - ETA: 0s - loss: 0.3408 - accuracy: 0.8518
Epoch 12: val_accuracy did not improve from 0.82944
54/54 [==============================] - 88s 2s/step - loss: 0.3408 - accuracy: 0.8518 - val_loss: 0.5275 - val_accuracy: 0.7547
Epoch 13/50
54/54 [==============================] - ETA: 0s - loss: 0.3170 - accuracy: 0.8821
Epoch 13: val_accuracy improved from 0.82944 to 0.83411, saving model to best_model.h5
54/54 [==============================] - 70s 1s/step - loss: 0.3170 - accuracy: 0.8821 - val_loss: 0.3725 - val_accuracy: 0.8341
Epoch 14/50
54/54 [==============================] - ETA: 0s - loss: 0.2625 - accuracy: 0.9026
Epoch 14: val_accuracy improved from 0.83411 to 0.84813, saving model to best_model.h5
54/54 [==============================] - 96s 2s/step - loss: 0.2625 - accuracy: 0.9026 - val_loss: 0.3590 - val_accuracy: 0.8481
Epoch 15/50
54/54 [==============================] - ETA: 0s - loss: 0.2414 - accuracy: 0.9061
Epoch 15: val_accuracy did not improve from 0.84813
54/54 [==============================] - 81s 2s/step - loss: 0.2414 - accuracy: 0.9061 - val_loss: 0.3652 - val_accuracy: 0.8411
Epoch 16/50
54/54 [==============================] - ETA: 0s - loss: 0.2450 - accuracy: 0.9008
Epoch 16: val_accuracy did not improve from 0.84813
54/54 [==============================] - 79s 1s/step - loss: 0.2450 - accuracy: 0.9008 - val_loss: 0.3939 - val_accuracy: 0.8294
Epoch 17/50
54/54 [==============================] - ETA: 0s - loss: 0.2208 - accuracy: 0.9131
Epoch 17: val_accuracy improved from 0.84813 to 0.86449, saving model to best_model.h5
54/54 [==============================] - 82s 2s/step - loss: 0.2208 - accuracy: 0.9131 - val_loss: 0.3642 - val_accuracy: 0.8645
Epoch 18/50
54/54 [==============================] - ETA: 0s - loss: 0.2206 - accuracy: 0.9154
Epoch 18: val_accuracy did not improve from 0.86449
54/54 [==============================] - 73s 1s/step - loss: 0.2206 - accuracy: 0.9154 - val_loss: 0.3686 - val_accuracy: 0.8458
Epoch 19/50
54/54 [==============================] - ETA: 0s - loss: 0.1874 - accuracy: 0.9352
Epoch 19: val_accuracy improved from 0.86449 to 0.86916, saving model to best_model.h5
54/54 [==============================] - 82s 2s/step - loss: 0.1874 - accuracy: 0.9352 - val_loss: 0.3477 - val_accuracy: 0.8692
Epoch 20/50
54/54 [==============================] - ETA: 0s - loss: 0.1739 - accuracy: 0.9364
Epoch 20: val_accuracy did not improve from 0.86916
54/54 [==============================] - 66s 1s/step - loss: 0.1739 - accuracy: 0.9364 - val_loss: 0.4012 - val_accuracy: 0.8551
Epoch 21/50
54/54 [==============================] - ETA: 0s - loss: 0.1810 - accuracy: 0.9306
Epoch 21: val_accuracy did not improve from 0.86916
54/54 [==============================] - 82s 2s/step - loss: 0.1810 - accuracy: 0.9306 - val_loss: 0.3377 - val_accuracy: 0.8645
Epoch 22/50
54/54 [==============================] - ETA: 0s - loss: 0.1492 - accuracy: 0.9469
Epoch 22: val_accuracy improved from 0.86916 to 0.87150, saving model to best_model.h5
54/54 [==============================] - 75s 1s/step - loss: 0.1492 - accuracy: 0.9469 - val_loss: 0.3442 - val_accuracy: 0.8715
Epoch 23/50
54/54 [==============================] - ETA: 0s - loss: 0.1589 - accuracy: 0.9434
Epoch 23: val_accuracy did not improve from 0.87150
54/54 [==============================] - 71s 1s/step - loss: 0.1589 - accuracy: 0.9434 - val_loss: 0.3921 - val_accuracy: 0.8668
Epoch 24/50
54/54 [==============================] - ETA: 0s - loss: 0.1557 - accuracy: 0.9411
Epoch 24: val_accuracy did not improve from 0.87150
54/54 [==============================] - 84s 2s/step - loss: 0.1557 - accuracy: 0.9411 - val_loss: 0.3594 - val_accuracy: 0.8692
Epoch 25/50
54/54 [==============================] - ETA: 0s - loss: 0.1297 - accuracy: 0.9603
Epoch 25: val_accuracy improved from 0.87150 to 0.88551, saving model to best_model.h5
54/54 [==============================] - 67s 1s/step - loss: 0.1297 - accuracy: 0.9603 - val_loss: 0.3609 - val_accuracy: 0.8855
Epoch 26/50
54/54 [==============================] - ETA: 0s - loss: 0.1236 - accuracy: 0.9551
Epoch 26: val_accuracy did not improve from 0.88551
54/54 [==============================] - 85s 2s/step - loss: 0.1236 - accuracy: 0.9551 - val_loss: 0.3500 - val_accuracy: 0.8785
Epoch 27/50
54/54 [==============================] - ETA: 0s - loss: 0.1277 - accuracy: 0.9557
Epoch 27: val_accuracy did not improve from 0.88551
54/54 [==============================] - 83s 2s/step - loss: 0.1277 - accuracy: 0.9557 - val_loss: 0.3584 - val_accuracy: 0.8808
Epoch 28/50
54/54 [==============================] - ETA: 0s - loss: 0.1212 - accuracy: 0.9609
Epoch 28: val_accuracy did not improve from 0.88551
54/54 [==============================] - 70s 1s/step - loss: 0.1212 - accuracy: 0.9609 - val_loss: 0.3570 - val_accuracy: 0.8738
Epoch 29/50
54/54 [==============================] - ETA: 0s - loss: 0.1011 - accuracy: 0.9662
Epoch 29: val_accuracy did not improve from 0.88551
54/54 [==============================] - 86s 2s/step - loss: 0.1011 - accuracy: 0.9662 - val_loss: 0.3814 - val_accuracy: 0.8738
Epoch 30/50
54/54 [==============================] - ETA: 0s - loss: 0.0960 - accuracy: 0.9691
Epoch 30: val_accuracy improved from 0.88551 to 0.88785, saving model to best_model.h5
54/54 [==============================] - 64s 1s/step - loss: 0.0960 - accuracy: 0.9691 - val_loss: 0.3985 - val_accuracy: 0.8879
Epoch 31/50
54/54 [==============================] - ETA: 0s - loss: 0.1012 - accuracy: 0.9644
Epoch 31: val_accuracy did not improve from 0.88785
54/54 [==============================] - 82s 2s/step - loss: 0.1012 - accuracy: 0.9644 - val_loss: 0.4143 - val_accuracy: 0.8668
Epoch 32/50
54/54 [==============================] - ETA: 0s - loss: 0.0774 - accuracy: 0.9778
Epoch 32: val_accuracy did not improve from 0.88785
54/54 [==============================] - 66s 1s/step - loss: 0.0774 - accuracy: 0.9778 - val_loss: 0.4387 - val_accuracy: 0.8598
Epoch 33/50
54/54 [==============================] - ETA: 0s - loss: 0.0829 - accuracy: 0.9720
Epoch 33: val_accuracy did not improve from 0.88785
54/54 [==============================] - 82s 2s/step - loss: 0.0829 - accuracy: 0.9720 - val_loss: 0.4072 - val_accuracy: 0.8785
Epoch 34/50
54/54 [==============================] - ETA: 0s - loss: 0.0812 - accuracy: 0.9691
Epoch 34: val_accuracy did not improve from 0.88785
54/54 [==============================] - 73s 1s/step - loss: 0.0812 - accuracy: 0.9691 - val_loss: 0.4006 - val_accuracy: 0.8832
Epoch 35/50
54/54 [==============================] - ETA: 0s - loss: 0.0609 - accuracy: 0.9860
Epoch 35: val_accuracy did not improve from 0.88785
54/54 [==============================] - 81s 1s/step - loss: 0.0609 - accuracy: 0.9860 - val_loss: 0.4124 - val_accuracy: 0.8808
Epoch 36/50
54/54 [==============================] - ETA: 0s - loss: 0.0614 - accuracy: 0.9831
Epoch 36: val_accuracy did not improve from 0.88785
54/54 [==============================] - 74s 1s/step - loss: 0.0614 - accuracy: 0.9831 - val_loss: 0.4004 - val_accuracy: 0.8879
Epoch 37/50
54/54 [==============================] - ETA: 0s - loss: 0.0579 - accuracy: 0.9860
Epoch 37: val_accuracy did not improve from 0.88785
54/54 [==============================] - 80s 1s/step - loss: 0.0579 - accuracy: 0.9860 - val_loss: 0.4944 - val_accuracy: 0.8738
Epoch 38/50
54/54 [==============================] - ETA: 0s - loss: 0.0773 - accuracy: 0.9737
Epoch 38: val_accuracy did not improve from 0.88785
54/54 [==============================] - 67s 1s/step - loss: 0.0773 - accuracy: 0.9737 - val_loss: 0.3970 - val_accuracy: 0.8832
Epoch 39/50
54/54 [==============================] - ETA: 0s - loss: 0.0589 - accuracy: 0.9831
Epoch 39: val_accuracy did not improve from 0.88785
54/54 [==============================] - 78s 1s/step - loss: 0.0589 - accuracy: 0.9831 - val_loss: 0.4350 - val_accuracy: 0.8855
Epoch 40/50
54/54 [==============================] - ETA: 0s - loss: 0.0552 - accuracy: 0.9842
Epoch 40: val_accuracy did not improve from 0.88785
54/54 [==============================] - 87s 2s/step - loss: 0.0552 - accuracy: 0.9842 - val_loss: 0.4309 - val_accuracy: 0.8855
Epoch 41/50
54/54 [==============================] - ETA: 0s - loss: 0.0466 - accuracy: 0.9860
Epoch 41: val_accuracy did not improve from 0.88785
54/54 [==============================] - 76s 1s/step - loss: 0.0466 - accuracy: 0.9860 - val_loss: 0.4608 - val_accuracy: 0.8762
Epoch 42/50
54/54 [==============================] - ETA: 0s - loss: 0.0700 - accuracy: 0.9732
Epoch 42: val_accuracy improved from 0.88785 to 0.89019, saving model to best_model.h5
54/54 [==============================] - 76s 1s/step - loss: 0.0700 - accuracy: 0.9732 - val_loss: 0.4174 - val_accuracy: 0.8902
Epoch 43/50
54/54 [==============================] - ETA: 0s - loss: 0.0418 - accuracy: 0.9889
Epoch 43: val_accuracy did not improve from 0.89019
54/54 [==============================] - 70s 1s/step - loss: 0.0418 - accuracy: 0.9889 - val_loss: 0.4557 - val_accuracy: 0.8808
Epoch 44/50
54/54 [==============================] - ETA: 0s - loss: 0.0358 - accuracy: 0.9918
Epoch 44: val_accuracy improved from 0.89019 to 0.89252, saving model to best_model.h5
54/54 [==============================] - 86s 2s/step - loss: 0.0358 - accuracy: 0.9918 - val_loss: 0.4231 - val_accuracy: 0.8925
Epoch 45/50
54/54 [==============================] - ETA: 0s - loss: 0.0448 - accuracy: 0.9854
Epoch 45: val_accuracy did not improve from 0.89252
54/54 [==============================] - 73s 1s/step - loss: 0.0448 - accuracy: 0.9854 - val_loss: 0.4533 - val_accuracy: 0.8832
Epoch 46/50
54/54 [==============================] - ETA: 0s - loss: 0.0373 - accuracy: 0.9907
Epoch 46: val_accuracy improved from 0.89252 to 0.89720, saving model to best_model.h5
54/54 [==============================] - 83s 2s/step - loss: 0.0373 - accuracy: 0.9907 - val_loss: 0.4303 - val_accuracy: 0.8972
Epoch 47/50
54/54 [==============================] - ETA: 0s - loss: 0.0318 - accuracy: 0.9930
Epoch 47: val_accuracy did not improve from 0.89720
54/54 [==============================] - 76s 1s/step - loss: 0.0318 - accuracy: 0.9930 - val_loss: 0.4530 - val_accuracy: 0.8879
Epoch 48/50
54/54 [==============================] - ETA: 0s - loss: 0.0355 - accuracy: 0.9918
Epoch 48: val_accuracy did not improve from 0.89720
54/54 [==============================] - 77s 1s/step - loss: 0.0355 - accuracy: 0.9918 - val_loss: 0.4625 - val_accuracy: 0.8785
Epoch 49/50
54/54 [==============================] - ETA: 0s - loss: 0.0320 - accuracy: 0.9907
Epoch 49: val_accuracy did not improve from 0.89720
54/54 [==============================] - 81s 1s/step - loss: 0.0320 - accuracy: 0.9907 - val_loss: 0.4843 - val_accuracy: 0.8902
Epoch 50/50
54/54 [==============================] - ETA: 0s - loss: 0.0347 - accuracy: 0.9895
Epoch 50: val_accuracy did not improve from 0.89720
54/54 [==============================] - 76s 1s/step - loss: 0.0347 - accuracy: 0.9895 - val_loss: 0.4925 - val_accuracy: 0.8925

IV. Model Evaluation

1. Loss and Accuracy Curves

python
acc = history.history['accuracy']
val_acc = history.history['val_accuracy']

loss = history.history['loss']
val_loss = history.history['val_loss']

epochs_range = range(epochs)

plt.figure(figsize=(12, 4))
plt.subplot(1, 2, 1)
plt.plot(epochs_range, acc, label='Training Accuracy')
plt.plot(epochs_range, val_acc, label='Validation Accuracy')
plt.legend(loc='lower right')
plt.title('Training and Validation Accuracy')

plt.subplot(1, 2, 2)
plt.plot(epochs_range, loss, label='Training Loss')
plt.plot(epochs_range, val_loss, label='Validation Loss')
plt.legend(loc='upper right')
plt.title('Training and Validation Loss')
plt.show()
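Beyond the curves, the best validation accuracy reached during the run can be read directly from the history object; a small sketch reusing the val_acc list extracted above:

python
import numpy as np

best_epoch = int(np.argmax(val_acc)) + 1  # epochs are numbered from 1 in the training log
print(f"Best val_accuracy: {max(val_acc):.4f} (epoch {best_epoch})")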

2. Predict on a Specified Image

python
# load the best-performing model weights
model.load_weights('best_model.h5')

img = Image.open("./data/4_data/Others/NM15_02_11.jpg")  # pick the image you want to predict on
image = tf.image.resize(np.array(img), [img_height, img_width])  # convert the PIL image to an array, then resize to the model's input size

img_array = tf.expand_dims(image, 0)  # add a batch dimension

predictions = model.predict(img_array)  # run the trained model
print("Predicted class:", class_names[np.argmax(predictions)])