Homework Log

Introduction

The project comes from the capstone assignment of Baidu PaddlePaddle's 飞桨领航团AI达人养成营 (AI Talent Camp), so treat this as a beginner's write-up. It was my first contact with Baidu's Paddle framework. Although it is a homegrown Chinese deep learning framework, the Chinese forums are not active enough and the official documentation is often unclear. The API design also feels questionable: design principles do not seem to have been a priority, so some APIs are overly simple yet still hard to customize.

This exercise uses a food-classification case study to show how to build a convolutional neural network with PaddlePaddle 2.0.
Note: all datasets used here come from the internet; do not use them for commercial purposes.

Unzip the files, train with train.csv, and test with val.csv. The accuracy on val is the final score.

0x01 Data Preprocessing

Think about and experiment with tuning. The metric is accuracy on the validation set: the higher the accuracy, the higher the score. You may swap in any model and use any tuning tricks, but you have to get the code running yourself.

!unzip -oq /home/aistudio/data/data120156/lemon_homework.zip
!unzip -oq /home/aistudio/lemon_homework/lemon_lesson.zip
!unzip -oq /home/aistudio/lemon_lesson/test_images.zip
!unzip -oq /home/aistudio/lemon_lesson/train_images.zip
# Import the required libraries
import os
import pandas as pd
import numpy as np
from PIL import Image

import paddle
import paddle.nn as nn
from paddle.io import Dataset
import paddle.vision.transforms as T
import paddle.nn.functional as F
from paddle.metric import Accuracy

import warnings
warnings.filterwarnings("ignore")
df = pd.read_csv('lemon_lesson/train_images.csv')
d = df['class_num'].hist().get_figure()
# Common difficulties in image-classification competitions:
# - class imbalance
# - one-shot / few-shot classification
# - fine-grained classification

(Figure: histogram of the class_num label distribution)
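Since class imbalance is the first difficulty listed, it is worth counting the labels before training. A minimal sketch with `collections.Counter` on a hypothetical label list (in the notebook this would be `df['class_num'].tolist()`; the values below are made up for illustration):

```python
from collections import Counter

# Hypothetical class_num values, NOT the real dataset
labels = [0, 0, 2, 1, 0, 3, 2, 0, 1, 0]

counts = Counter(labels)
total = len(labels)
for cls in sorted(counts):
    print(f"class {cls}: {counts[cls]} ({counts[cls] / total:.0%})")
```

If one class dominates, accuracy alone becomes a misleading metric and resampling or class weighting is worth considering.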

Image standardization and normalization are the two most common preprocessing schemes. The first, standardization, rescales the data into a particular range and centers it by subtracting the mean. The second, normalization, uniformly maps the data into the [0, 1] interval.
Why it helps:

  1. It makes weight initialization behave well.
  2. It avoids numerical problems in gradient updates.
  3. It makes the learning rate easier to tune.
  4. It speeds up convergence toward the optimum.
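The two schemes above can be written out directly. A minimal pure-Python sketch on toy pixel values (not the real dataset):

```python
from statistics import fmean, pstdev

pixels = [0.0, 51.0, 102.0, 204.0, 255.0]  # toy pixel values

# Scheme 1: standardization, i.e. subtract the mean and divide by the
# standard deviation, so the data is centered at 0 with unit variance.
mu, sigma = fmean(pixels), pstdev(pixels)
standardized = [(p - mu) / sigma for p in pixels]

# Scheme 2: min-max normalization, i.e. map the data into [0, 1].
lo, hi = min(pixels), max(pixels)
normalized = [(p - lo) / (hi - lo) for p in pixels]
```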
# Define the data preprocessing pipeline

data_transforms = T.Compose([
    T.Resize(size=(224, 224)),
    T.RandomHorizontalFlip(1),  # probability 1 flips every image; 0.5 is the usual choice
    T.RandomVerticalFlip(1),
    T.Transpose(),  # HWC -> CHW
    T.Normalize(
        mean=[0, 0, 0],       # with std=255 this maps pixels into [0, 1]
        std=[255, 255, 255],
        to_rgb=True)
])
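With `mean=[0, 0, 0]` and `std=[255, 255, 255]`, `Normalize` computes `(x - mean) / std`, which collapses to plain division by 255, i.e. min-max scaling of uint8 pixels into [0, 1]. A quick sanity check of that arithmetic (`normalize` here is a throwaway helper, not a Paddle API):

```python
# Normalize applies (x - mean) / std per channel; with mean=0 and
# std=255 that is just x / 255.
def normalize(pixel, mean=0.0, std=255.0):
    return (pixel - mean) / std

print(normalize(28))  # 0.10980392..., matching the first value in the batch tensor printed later
```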

0x02 Dataset Split and DataLoader Setup

## Dataset split

train_images = pd.read_csv('lemon_lesson/train_images.csv', usecols=['id', 'class_num'])

# Split into training and validation sets (sequential 80/20 split)
all_size = len(train_images)
print(all_size)
train_size = int(all_size * 0.8)
train_image_path_list = train_images[:train_size]
val_image_path_list = train_images[train_size:]

print(len(train_image_path_list))
print(len(val_image_path_list))
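Note that this split is sequential: the first 80% of rows become the training set, so if the CSV happens to be ordered by class, both sets would be skewed. A hedged sketch of a seeded shuffled split in plain Python (the helper name and seed are my own, not from the original notebook):

```python
import random

def shuffled_split(rows, train_frac=0.8, seed=42):
    """Shuffle row indices with a fixed seed, then cut train/val."""
    idx = list(range(len(rows)))
    random.Random(seed).shuffle(idx)
    cut = int(len(rows) * train_frac)
    return [rows[i] for i in idx[:cut]], [rows[i] for i in idx[cut:]]

# 1102 rows is the total inferred from the training log further down
train, val = shuffled_split(list(range(1102)))
print(len(train), len(val))  # 881 221
```

With a DataFrame, `df.sample(frac=1, random_state=42)` before slicing achieves the same effect.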

# Build the Dataset
class MyDataset(paddle.io.Dataset):
    """
    Step 1: subclass paddle.io.Dataset
    """
    def __init__(self, train_list, val_list, mode='train'):
        """
        Step 2: implement the constructor and define how data is read
        """
        super(MyDataset, self).__init__()
        self.data = []
        # The lists were read from the csv file with pandas
        self.train_images = train_list
        self.test_images = val_list
        if mode == 'train':
            # rows from the training split of train_images.csv
            for row in self.train_images.itertuples():
                self.data.append(['train_images/' + getattr(row, 'id'), getattr(row, 'class_num')])
        else:
            # rows from the validation split of train_images.csv
            for row in self.test_images.itertuples():
                self.data.append(['train_images/' + getattr(row, 'id'), getattr(row, 'class_num')])

    def load_img(self, image_path):
        # Read the image with Pillow and force three RGB channels
        image = Image.open(image_path).convert('RGB')
        return image

    def __getitem__(self, index):
        """
        Step 3: implement __getitem__, returning one (transformed image, label) pair for a given index
        """
        image = self.load_img(self.data[index][0])
        label = self.data[index][1]
        return data_transforms(image), np.array(label, dtype='int64')

    def __len__(self):
        """
        Step 4: implement __len__, returning the total number of samples
        """
        return len(self.data)

# Define the data loaders
# train_loader
train_dataset = MyDataset(train_list=train_image_path_list, val_list=val_image_path_list, mode='train')
train_loader = paddle.io.DataLoader(train_dataset, places=paddle.CPUPlace(), batch_size=128, shuffle=True, num_workers=0)

# val_loader (shuffling validation data is unnecessary, though harmless here)
val_dataset = MyDataset(train_list=train_image_path_list, val_list=val_image_path_list, mode='test')
val_loader = paddle.io.DataLoader(val_dataset, places=paddle.CPUPlace(), batch_size=128, shuffle=True, num_workers=0)
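From the training log further down, the split yields 881 training and 221 validation samples; with `batch_size=128` that predicts the step counts and last-batch sizes seen there (7 train steps, the last with 113 samples; 2 eval steps of 128 and 93). A quick check of the arithmetic:

```python
import math

batch_size = 128
train_size, val_size = 881, 221  # inferred from the logged 80/20 split

train_steps = math.ceil(train_size / batch_size)
last_train_batch = train_size - (train_steps - 1) * batch_size
val_steps = math.ceil(val_size / batch_size)
last_val_batch = val_size - (val_steps - 1) * batch_size

print(train_steps, last_train_batch)  # 7 113
print(val_steps, last_val_batch)      # 2 93
```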
print('=============train dataset=============')
for image, label in train_dataset:
    print('image shape: {}, label: {}'.format(image.shape, label))
    break

for batch_id, data in enumerate(train_loader()):
    x_data = data[0]
    y_data = data[1]
    print(x_data)
    print(y_data)
    break
=============train dataset=============
image shape: (3, 224, 224), label: 0
Tensor(shape=[128, 3, 224, 224], dtype=float32, place=CPUPlace, stop_gradient=True,
       [[[[0.10980392, 0.10196079, 0.10588235, ..., 0.10588235, 0.11372549, 0.14117648],
          [0.11372549, 0.10980392, 0.10196079, ..., 0.11372549, 0.14509805, 0.16470589],
          [0.14901961, 0.11764706, 0.10196079, ..., 0.15686275, 0.23137255, 0.25098041],
          ...,


        ...,

          [0.50980395, 0.50980395, 0.50980395, ..., 0.73333335, 0.74117649, 0.73725492],
          [0.50588238, 0.50588238, 0.50588238, ..., 0.72941178, 0.73725492, 0.73725492],
          [0.50196081, 0.50196081, 0.50196081, ..., 0.72156864, 0.73333335, 0.73333335]]]])
Tensor(shape=[128], dtype=int64, place=CPUPlace, stop_gradient=True,
       [0, 0, 2, 2, 1, 2, 2, 1, 0, 0, 0, 1, 0, 0, 3, 2, 0, 0, 0, 0, 1, 1, 0, 2, 2, 2, 3, 2, 0, 1, 2, 0, 0, 0, 1, 2, 1, 0, 0, 1, 0, 1, 2, 0, 1, 1, 0, 0, 3, 0, 2, 0, 3, 3, 2, 2, 1, 3, 3, 2, 1, 0, 0, 1, 0, 1, 0, 2, 0, 0, 0, 1, 0, 0, 0, 1, 3, 0, 0, 3, 0, 0, 2, 0, 0, 3, 0, 3, 3, 1, 2, 3, 0, 0, 0, 1, 0, 2, 0, 1, 2, 3, 3, 3, 2, 0, 0, 0, 1, 2, 0, 2, 3, 0, 0, 0, 0, 2, 0, 1, 0, 1, 3, 0, 1, 1, 0, 0])

0x03 Model Selection (the one real takeaway: feature-map size arithmetic)

The first step is, of course, picking a baseline. This one looks like a VGG-style stack, which feels rather plain.

Ideally, a larger model has more fitting capacity, and a larger image preserves more information. In practice, a more complex model takes longer to train, and bigger images also stretch the training time.

At the start of a competition, prefer the simplest ResNet and run the full training-and-prediction pipeline quickly. Choose the classification model to match the task's complexity; the highest-accuracy model is not necessarily the most suitable one for a competition.

In a real competition you can increase the input size gradually: first let the model converge at 64×64, then move it to 128×128 for further training.

A baseline should follow a few principles:

  1. Low complexity and a simple code structure.
  2. The loss converges correctly and the metric improves.
  3. Fast iteration, with no fancy architectures, loss functions, or image-preprocessing tricks.
  4. A correct, simple test script that yields a valid score once the submission is made.

Inside the network, input and output sizes just follow the convolution arithmetic; once real tuning starts you get into near-mystical territory, like the receptive-field lore I was once badly misled by.
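That convolution arithmetic is `out = floor((in + 2*padding - kernel) / stride) + 1`, for both Conv2D and MaxPool2D. A small helper traces the shapes of this exact network and recovers the 25088 input features of `linear1` (compare with the `model.summary` output below):

```python
def conv_out(size, kernel, stride, padding):
    """Output spatial size of a Conv2D/MaxPool2D layer:
    floor((size + 2*padding - kernel) / stride) + 1."""
    return (size + 2 * padding - kernel) // stride + 1

s = conv_out(224, kernel=7, stride=2, padding=3)  # conv1 -> 112
s = conv_out(s, kernel=2, stride=2, padding=0)    # pool1 -> 56
s = conv_out(s, kernel=3, stride=1, padding=1)    # conv2 -> 56
s = conv_out(s, kernel=3, stride=2, padding=1)    # conv3 -> 28
s = conv_out(s, kernel=3, stride=2, padding=1)    # conv4 -> 14
s = conv_out(s, kernel=3, stride=2, padding=1)    # conv5 -> 7
print(s, 512 * s * s)  # 7 25088, the in_features of linear1
```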

# Build the network by subclassing paddle.nn.Layer
class MyNet(paddle.nn.Layer):
    def __init__(self, num_classes=4):
        super(MyNet, self).__init__()
        self.conv1 = paddle.nn.Conv2D(in_channels=3, out_channels=64, kernel_size=(7, 7), stride=2, padding=3)
        self.pool1 = paddle.nn.MaxPool2D(kernel_size=2, stride=2)

        self.conv2 = paddle.nn.Conv2D(in_channels=64, out_channels=64, kernel_size=(3, 3), stride=1, padding=1)
        self.conv3 = paddle.nn.Conv2D(in_channels=64, out_channels=128, kernel_size=(3, 3), stride=2, padding=1)
        self.conv4 = paddle.nn.Conv2D(in_channels=128, out_channels=256, kernel_size=(3, 3), stride=2, padding=1)
        self.conv5 = paddle.nn.Conv2D(in_channels=256, out_channels=512, kernel_size=(3, 3), stride=2, padding=1)

        self.flatten = paddle.nn.Flatten()
        self.linear1 = paddle.nn.Linear(in_features=25088, out_features=64)  # 512 * 7 * 7 = 25088
        self.linear2 = paddle.nn.Linear(in_features=64, out_features=num_classes)

    def forward(self, x):
        x = self.conv1(x)
        x = F.relu(x)
        x = self.pool1(x)
        print(x.shape)  # debug: feature-map shape after pool1
        x = self.conv2(x)
        x = F.relu(x)
        x = self.conv3(x)
        x = F.relu(x)
        x = self.conv4(x)
        x = F.relu(x)
        x = self.conv5(x)
        x = F.relu(x)
        x = self.flatten(x)
        x = self.linear1(x)
        x = F.relu(x)
        x = self.linear2(x)
        return x

model = paddle.Model(MyNet())
model.summary((1, 3, 224, 224))
[1, 64, 56, 56]
---------------------------------------------------------------------------
 Layer (type)       Input Shape          Output Shape         Param #    
===========================================================================
   Conv2D-21     [[1, 3, 224, 224]]   [1, 64, 112, 112]        9,472     
  MaxPool2D-5   [[1, 64, 112, 112]]    [1, 64, 56, 56]           0       
   Conv2D-22     [[1, 64, 56, 56]]     [1, 64, 56, 56]        36,928     
   Conv2D-23     [[1, 64, 56, 56]]     [1, 128, 28, 28]       73,856     
   Conv2D-24     [[1, 128, 28, 28]]    [1, 256, 14, 14]       295,168    
   Conv2D-25     [[1, 256, 14, 14]]     [1, 512, 7, 7]       1,180,160   
  Flatten-59      [[1, 512, 7, 7]]        [1, 25088]             0       
   Linear-7         [[1, 25088]]           [1, 64]           1,605,696   
   Linear-8          [[1, 64]]              [1, 4]              260      
===========================================================================
Total params: 3,201,540
Trainable params: 3,201,540
Non-trainable params: 0
---------------------------------------------------------------------------
Input size (MB): 0.57
Forward/backward pass size (MB): 10.72
Params size (MB): 12.21
Estimated Total Size (MB): 23.51
---------------------------------------------------------------------------






{'total_params': 3201540, 'trainable_params': 3201540}
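The parameter counts in the summary can be reproduced by hand: a Conv2D has `(k*k*c_in + 1)*c_out` parameters (weights plus one bias per output channel) and a Linear has `(in + 1)*out`. A quick verification:

```python
def conv_params(cin, cout, k):
    return (k * k * cin + 1) * cout  # weights + one bias per output channel

def linear_params(fin, fout):
    return (fin + 1) * fout

total = (conv_params(3, 64, 7)      # conv1:   9,472
       + conv_params(64, 64, 3)     # conv2:   36,928
       + conv_params(64, 128, 3)    # conv3:   73,856
       + conv_params(128, 256, 3)   # conv4:   295,168
       + conv_params(256, 512, 3)   # conv5:   1,180,160
       + linear_params(25088, 64)   # linear1: 1,605,696
       + linear_params(64, 4))      # linear2: 260
print(total)  # 3201540, matching model.summary()
```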
# Mind the feature-map size arithmetic along the way
# Define the optimizer
optim = paddle.optimizer.Adam(learning_rate=0.001, parameters=model.parameters())
model.prepare(
    optim,
    paddle.nn.CrossEntropyLoss(),
    Accuracy()
)

from visualdl import LogReader, LogWriter

args = {
    'logdir': './vdl',
    'file_name': 'vdlrecords.model.log',
    'iters': 0,
}

# Configure VisualDL
write = LogWriter(logdir=args['logdir'], file_name=args['file_name'])
# initialize iters to 0
iters = args['iters']

# Custom callback
class Callbk(paddle.callbacks.Callback):
    def __init__(self, write, iters=0):
        self.write = write
        self.iters = iters

    def on_train_batch_end(self, step, logs):
        self.iters += 1
        # record the loss
        self.write.add_scalar(tag="loss", step=self.iters, value=logs['loss'][0])
        # record the accuracy
        self.write.add_scalar(tag="acc", step=self.iters, value=logs['acc'])
`./vdl/vdlrecords.model.log` is exists, VisualDL will add logs to it.
# Model training and evaluation
model.fit(train_loader,
          val_loader,
          log_freq=1,
          epochs=15,
          callbacks=Callbk(write=write, iters=iters),
          verbose=1,
          )

The loss value printed in the log is the current step, and the metric is the average value of previous step.
Epoch 1/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 1.4849 - acc: 0.2266 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 6.7525 - acc: 0.2930 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 1.8031 - acc: 0.2708 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 1.3808 - acc: 0.2480 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 1.2969 - acc: 0.2750 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 1.2021 - acc: 0.3164 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 1.1513 - acc: 0.3360 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 1.0067 - acc: 0.6797 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 1.0514 - acc: 0.6606 - 1s/step          
Eval samples: 221
Epoch 2/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.9772 - acc: 0.7734 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.8334 - acc: 0.7734 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.6969 - acc: 0.7604 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 2.3449 - acc: 0.6484 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 1.3612 - acc: 0.6516 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 1.5054 - acc: 0.6380 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 1.1878 - acc: 0.6436 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.9094 - acc: 0.7266 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.8898 - acc: 0.7195 - 1s/step          
Eval samples: 221
Epoch 3/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.9101 - acc: 0.6641 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.8710 - acc: 0.6836 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.8483 - acc: 0.6875 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.8448 - acc: 0.7031 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.8475 - acc: 0.7063 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.7537 - acc: 0.7122 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.6307 - acc: 0.7185 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.6840 - acc: 0.7891 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.6577 - acc: 0.7783 - 1s/step          
Eval samples: 221
Epoch 4/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.7030 - acc: 0.7344 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.4887 - acc: 0.8203 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.5533 - acc: 0.8333 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.4893 - acc: 0.8438 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.5349 - acc: 0.8375 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.3818 - acc: 0.8411 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.2702 - acc: 0.8490 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.5461 - acc: 0.8125 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.3558 - acc: 0.8416 - 1s/step          
Eval samples: 221
Epoch 5/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.4032 - acc: 0.8281 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.2782 - acc: 0.8828 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.2537 - acc: 0.8932 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.3106 - acc: 0.9004 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.3652 - acc: 0.8953 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.2224 - acc: 0.8984 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.3256 - acc: 0.8990 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.3408 - acc: 0.8438 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.3114 - acc: 0.8326 - 1s/step          
Eval samples: 221
Epoch 6/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.2124 - acc: 0.9062 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.1708 - acc: 0.9258 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.2870 - acc: 0.9167 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.1881 - acc: 0.9180 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.3036 - acc: 0.9156 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.3152 - acc: 0.9089 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.1624 - acc: 0.9092 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.3151 - acc: 0.9141 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.3244 - acc: 0.9050 - 1s/step          
Eval samples: 221
Epoch 7/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.4050 - acc: 0.8750 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.1410 - acc: 0.9023 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.2465 - acc: 0.9089 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.2190 - acc: 0.9121 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.1823 - acc: 0.9109 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.2198 - acc: 0.9102 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.2194 - acc: 0.9115 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.2905 - acc: 0.8672 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2588 - acc: 0.8733 - 1s/step          
Eval samples: 221
Epoch 8/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.1634 - acc: 0.9062 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.1506 - acc: 0.9219 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.1242 - acc: 0.9297 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.3063 - acc: 0.9199 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.1786 - acc: 0.9219 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.1867 - acc: 0.9232 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.1735 - acc: 0.9262 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.2529 - acc: 0.8750 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2749 - acc: 0.8688 - 1s/step          
Eval samples: 221
Epoch 9/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.1865 - acc: 0.9141 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.2470 - acc: 0.9023 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.0620 - acc: 0.9297 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.1055 - acc: 0.9375 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.2145 - acc: 0.9344 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.0945 - acc: 0.9388 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.1622 - acc: 0.9387 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.2351 - acc: 0.8906 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2440 - acc: 0.9005 - 1s/step          
Eval samples: 221
Epoch 10/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.1562 - acc: 0.9375 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.1392 - acc: 0.9531 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.1352 - acc: 0.9479 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.0883 - acc: 0.9551 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.1463 - acc: 0.9531 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.1493 - acc: 0.9505 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.1267 - acc: 0.9523 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.2258 - acc: 0.9062 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.1954 - acc: 0.9186 - 1s/step          
Eval samples: 221
Epoch 11/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.1226 - acc: 0.9531 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.1305 - acc: 0.9453 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.1340 - acc: 0.9453 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.0711 - acc: 0.9551 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.0544 - acc: 0.9609 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.1100 - acc: 0.9596 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.1838 - acc: 0.9569 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.1604 - acc: 0.9531 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2993 - acc: 0.9367 - 1s/step          
Eval samples: 221
Epoch 12/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.1236 - acc: 0.9453 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.0547 - acc: 0.9688 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.2062 - acc: 0.9505 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.1316 - acc: 0.9492 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.0759 - acc: 0.9563 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.0801 - acc: 0.9609 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.2175 - acc: 0.9580 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.2175 - acc: 0.9219 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.1687 - acc: 0.9367 - 1s/step          
Eval samples: 221
Epoch 13/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.1188 - acc: 0.9531 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.0690 - acc: 0.9609 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.1382 - acc: 0.9557 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.1006 - acc: 0.9551 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.0434 - acc: 0.9625 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.0742 - acc: 0.9635 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.1249 - acc: 0.9637 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.1632 - acc: 0.9453 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2240 - acc: 0.9412 - 1s/step          
Eval samples: 221
Epoch 14/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.0547 - acc: 0.9766 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.0427 - acc: 0.9844 - ETA: 6s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.0513 - acc: 0.9818 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.0545 - acc: 0.9824 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.0608 - acc: 0.9812 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.0513 - acc: 0.9831 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.0402 - acc: 0.9852 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.1093 - acc: 0.9531 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2143 - acc: 0.9548 - 1s/step          
Eval samples: 221
Epoch 15/15
[128, 64, 56, 56]
step 1/7 [===>..........................] - loss: 0.0608 - acc: 0.9766 - ETA: 7s - 1s/step[128, 64, 56, 56]
step 2/7 [=======>......................] - loss: 0.0476 - acc: 0.9805 - ETA: 5s - 1s/step[128, 64, 56, 56]
step 3/7 [===========>..................] - loss: 0.0400 - acc: 0.9844 - ETA: 4s - 1s/step[128, 64, 56, 56]
step 4/7 [================>.............] - loss: 0.0271 - acc: 0.9863 - ETA: 3s - 1s/step[128, 64, 56, 56]
step 5/7 [====================>.........] - loss: 0.0198 - acc: 0.9891 - ETA: 2s - 1s/step[128, 64, 56, 56]
step 6/7 [========================>.....] - loss: 0.0367 - acc: 0.9896 - ETA: 1s - 1s/step[113, 64, 56, 56]
step 7/7 [==============================] - loss: 0.0152 - acc: 0.9909 - 1s/step          
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.1904 - acc: 0.9609 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.2047 - acc: 0.9502 - 1s/step          
Eval samples: 221
result = model.evaluate(val_loader, batch_size=32, log_freq=1, verbose=1, num_workers=0, callbacks=None)
print(result)
Eval begin...
The loss value printed in the log is the current batch, and the metric is the average value of previous step.
[128, 64, 56, 56]
step 1/2 [==============>...............] - loss: 0.0984 - acc: 0.9688 - ETA: 1s - 1s/step[93, 64, 56, 56]
step 2/2 [==============================] - loss: 0.3313 - acc: 0.9502 - 1s/step          
Eval samples: 221
{'loss': [0.33130556], 'acc': 0.9502262443438914}

0x04 one more thing

The GitHub Student Developer Pack contains plenty of goodies: DigitalOcean includes a $100 credit, usable once you link a PayPal account. A server outside China still comes in handy now and then.


Homework Log
https://blog.tjdata.site/posts/eac26b.html
Author: chenxia
Published: August 20, 2022