TensorFlow 2 (Preliminary Course) --- 8.1 CIFAR-100 Classification with the Layer API



I. Summary

One-sentence summary:

A fully connected neural network is not up to CIFAR-100 classification: a quick test reaches only about 20% accuracy, so a different kind of network is needed.

 

 

II. CIFAR-100 Classification with the Layer API

Video location in the corresponding course:

 

Steps

1. Read the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Requirement

CIFAR-100 (object classification)


The CIFAR-100 dataset is just like CIFAR-10, except that it has 100 classes containing 600 images each: 500 training images and 100 test images per class. The 100 classes are grouped into 20 superclasses. Each image carries a "fine" label (the class it belongs to) and a "coarse" label (the superclass it belongs to). Here is the list of classes in CIFAR-100, grouped by superclass:

Superclass: Classes
aquatic mammals: beaver, dolphin, otter, seal, whale
fish: aquarium fish, flatfish, ray, shark, trout
flowers: orchids, poppies, roses, sunflowers, tulips
food containers: bottles, bowls, cans, cups, plates
fruit and vegetables: apples, mushrooms, oranges, pears, sweet peppers
household electrical devices: clock, computer keyboard, lamp, telephone, television
household furniture: bed, chair, couch, table, wardrobe
insects: bee, beetle, butterfly, caterpillar, cockroach
large carnivores: bear, leopard, lion, tiger, wolf
large man-made outdoor things: bridge, castle, house, road, skyscraper
large natural outdoor scenes: cloud, forest, mountain, plain, sea
large omnivores and herbivores: camel, cattle, chimpanzee, elephant, kangaroo
medium-sized mammals: fox, porcupine, possum, raccoon, skunk
non-insect invertebrates: crab, lobster, snail, spider, worm
people: baby, boy, girl, man, woman
reptiles: crocodile, dinosaur, lizard, snake, turtle
small mammals: hamster, mouse, rabbit, shrew, squirrel
trees: maple, oak, palm, pine, willow
vehicles 1: bicycle, bus, motorcycle, pickup truck, train
vehicles 2: lawn-mower, rocket, streetcar, tank, tractor
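
Both label granularities can be loaded directly from Keras; a small sketch (the label_mode argument of tf.keras.datasets.cifar100.load_data selects 'fine' or 'coarse' labels):

import tensorflow as tf

# Sketch: load the same images once with fine (100-class) labels and once
# with coarse (20-superclass) labels.
(_, y_fine), _ = tf.keras.datasets.cifar100.load_data(label_mode='fine')
(_, y_coarse), _ = tf.keras.datasets.cifar100.load_data(label_mode='coarse')
print(y_fine.shape, y_coarse.shape)   # both (50000, 1)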
In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Read the dataset

The dataset can be loaded directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar100.load_data()
print(train_x.shape, train_y.shape)
(50000, 32, 32, 3) (50000, 1)

These are 32x32 color images. How should the three RGB channels be handled?
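
One simple option, and the one used below, is to flatten all three channels into a single feature vector; a minimal sketch of the shape arithmetic:

# Sketch: an RGB image of shape (32, 32, 3) flattens into a 3072-dimensional
# vector (32 * 32 * 3), which is what the Dense layers in section 3 consume.
sample = train_x[0]
flat = sample.reshape(-1)
print(sample.shape, flat.shape)   # (32, 32, 3) (3072,)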

In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y) 
[[49]
 [33]
 [72]
 ...
 [51]
 [42]
 [70]]
In [6]:
# Pixel values (RGB)
np.max(train_x[0]) 
Out[6]:
255

2. Split the dataset (into training and test sets)

The previous step already did the splitting: load_data returns separate training and test sets.

In [7]:
# How should the image data be normalized?
# Simply divide by 255
train_x = train_x / 255
test_x = test_x / 255
In [8]:
# Pixel values (RGB)
np.max(train_x[0]) 
Out[8]:
1.0
In [9]:
train_y = train_y.flatten()
test_y = test_y.flatten()
train_y = tf.one_hot(train_y, depth=100)
test_y = tf.one_hot(test_y, depth=100)
print(test_y.shape)
(10000, 100)
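
As a side note (an alternative sketch, not what this post does): keeping the integer labels and compiling with sparse_categorical_crossentropy avoids the one-hot step entirely.

# Alternative sketch: sparse_categorical_crossentropy accepts integer class
# indices directly, so tf.one_hot is not needed. The model here is a toy one.
(x, y), _ = tf.keras.datasets.cifar100.load_data()
toy = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(100, activation='softmax'),
])
toy.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['acc'])
toy.fit(x / 255, y.flatten(), epochs=1)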

3. Build the model

What kind of model should we build?

The input is 32*32*3-dimensional and the output is a single label, so this is a 100-class classification problem.

Do we need one-hot encoding? If the labels are one-hot encoded, the output is 100-dimensional.

That is, 32*32*3 -> n -> 100; we can try 3072 -> 1024 -> 512 -> 256 -> 128 -> 100.

In [10]:
# Build the container
model = tf.keras.Sequential()
# Input layer
# Turn the multi-dimensional data (50000, 32, 32, 3) into one dimension,
# i.e. flatten each image into a vector
model.add(tf.keras.layers.Flatten(input_shape=(32,32,3)))
# Hidden layers
model.add(tf.keras.layers.Dense(1024,activation='relu'))
model.add(tf.keras.layers.Dense(512,activation='relu'))
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(100,activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 3072)              0         
_________________________________________________________________
dense (Dense)                (None, 1024)              3146752   
_________________________________________________________________
dense_1 (Dense)              (None, 512)               524800    
_________________________________________________________________
dense_2 (Dense)              (None, 256)               131328    
_________________________________________________________________
dense_3 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_4 (Dense)              (None, 100)               12900     
=================================================================
Total params: 3,848,676
Trainable params: 3,848,676
Non-trainable params: 0
_________________________________________________________________
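
As a sanity check on the summary above, a Dense layer has inputs * units + units parameters (weights plus biases); for the first hidden layer that is 3072 * 1024 + 1024 = 3,146,752, which matches the table.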

It feels rather like black magic: adding more layers (for example inserting a 32-unit layer between the 128 and 100 layers) does not improve accuracy.
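
For comparison, here is a minimal convolutional sketch (my addition, not part of the original experiment) that reuses the same preprocessing; convolutional layers are the usual "different network" for image data and generally do much better than a plain fully connected net on CIFAR-100:

# Minimal CNN sketch (assumes train_x/test_x already scaled to [0, 1] and
# train_y/test_y one-hot encoded to depth 100 as above); hyperparameters are
# illustrative, not tuned.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(100, activation='softmax'),
])
cnn.compile(optimizer='adam', loss='categorical_crossentropy', metrics=['acc'])
# cnn.fit(train_x, train_y, epochs=10, validation_data=(test_x, test_y))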

4. Train the model

In [11]:
# Configure the optimizer and the loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
1563/1563 [==============================] - 8s 5ms/step - loss: 4.1911 - acc: 0.0530 - val_loss: 3.9369 - val_acc: 0.0906
Epoch 2/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.8389 - acc: 0.1020 - val_loss: 3.7550 - val_acc: 0.1150
Epoch 3/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.6482 - acc: 0.1374 - val_loss: 3.5906 - val_acc: 0.1482
Epoch 4/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.5238 - acc: 0.1581 - val_loss: 3.5512 - val_acc: 0.1663
Epoch 5/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.4383 - acc: 0.1739 - val_loss: 3.4744 - val_acc: 0.1747
Epoch 6/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.3744 - acc: 0.1844 - val_loss: 3.4832 - val_acc: 0.1744
Epoch 7/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.3140 - acc: 0.1949 - val_loss: 3.4285 - val_acc: 0.1791
Epoch 8/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.2697 - acc: 0.2050 - val_loss: 3.4093 - val_acc: 0.1893
Epoch 9/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.2257 - acc: 0.2105 - val_loss: 3.4472 - val_acc: 0.1818
Epoch 10/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.1836 - acc: 0.2178 - val_loss: 3.4151 - val_acc: 0.1963
Epoch 11/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.1515 - acc: 0.2243 - val_loss: 3.3867 - val_acc: 0.1978
Epoch 12/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.1104 - acc: 0.2314 - val_loss: 3.4266 - val_acc: 0.1972
Epoch 13/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.0777 - acc: 0.2369 - val_loss: 3.4181 - val_acc: 0.2014
Epoch 14/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.0521 - acc: 0.2422 - val_loss: 3.4320 - val_acc: 0.2001
Epoch 15/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.0246 - acc: 0.2464 - val_loss: 3.5107 - val_acc: 0.1892
Epoch 16/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9955 - acc: 0.2531 - val_loss: 3.3983 - val_acc: 0.2133
Epoch 17/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9644 - acc: 0.2581 - val_loss: 3.4868 - val_acc: 0.1997
Epoch 18/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9367 - acc: 0.2622 - val_loss: 3.4433 - val_acc: 0.2090
Epoch 19/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9088 - acc: 0.2675 - val_loss: 3.4769 - val_acc: 0.2041
Epoch 20/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8881 - acc: 0.2698 - val_loss: 3.5843 - val_acc: 0.1935
Epoch 21/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8584 - acc: 0.2801 - val_loss: 3.4979 - val_acc: 0.2105
Epoch 22/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8405 - acc: 0.2812 - val_loss: 3.5163 - val_acc: 0.2085
Epoch 23/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8147 - acc: 0.2871 - val_loss: 3.6058 - val_acc: 0.2061
Epoch 24/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7974 - acc: 0.2913 - val_loss: 3.5679 - val_acc: 0.2060
Epoch 25/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7729 - acc: 0.2948 - val_loss: 3.5804 - val_acc: 0.2073
Epoch 26/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7558 - acc: 0.2960 - val_loss: 3.5837 - val_acc: 0.2091
Epoch 27/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7289 - acc: 0.3037 - val_loss: 3.7283 - val_acc: 0.1973
Epoch 28/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7123 - acc: 0.3052 - val_loss: 3.6379 - val_acc: 0.2017
Epoch 29/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6962 - acc: 0.3084 - val_loss: 3.7487 - val_acc: 0.1953
Epoch 30/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6823 - acc: 0.3162 - val_loss: 3.7594 - val_acc: 0.1972
Epoch 31/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6641 - acc: 0.3151 - val_loss: 3.7102 - val_acc: 0.2055
Epoch 32/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6385 - acc: 0.3220 - val_loss: 3.8158 - val_acc: 0.2010
Epoch 33/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6203 - acc: 0.3252 - val_loss: 3.8426 - val_acc: 0.2002
Epoch 34/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6031 - acc: 0.3311 - val_loss: 3.7780 - val_acc: 0.1999
Epoch 35/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5957 - acc: 0.3317 - val_loss: 3.9130 - val_acc: 0.1952
Epoch 36/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5716 - acc: 0.3388 - val_loss: 3.9938 - val_acc: 0.1987
Epoch 37/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5627 - acc: 0.3393 - val_loss: 3.9578 - val_acc: 0.1998
Epoch 38/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5383 - acc: 0.3411 - val_loss: 3.9641 - val_acc: 0.2031
Epoch 39/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5338 - acc: 0.3463 - val_loss: 3.9104 - val_acc: 0.2030
Epoch 40/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5247 - acc: 0.3471 - val_loss: 4.0854 - val_acc: 0.1999
Epoch 41/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5074 - acc: 0.3519 - val_loss: 4.1345 - val_acc: 0.1980
Epoch 42/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4867 - acc: 0.3541 - val_loss: 4.1529 - val_acc: 0.2006
Epoch 43/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4853 - acc: 0.3570 - val_loss: 4.1271 - val_acc: 0.1992
Epoch 44/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4717 - acc: 0.3585 - val_loss: 4.1661 - val_acc: 0.2003
Epoch 45/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4626 - acc: 0.3632 - val_loss: 4.2586 - val_acc: 0.1908
Epoch 46/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4456 - acc: 0.3648 - val_loss: 4.2223 - val_acc: 0.2022
Epoch 47/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4449 - acc: 0.3650 - val_loss: 4.1411 - val_acc: 0.1996
Epoch 48/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4227 - acc: 0.3688 - val_loss: 4.4417 - val_acc: 0.1952
Epoch 49/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4129 - acc: 0.3724 - val_loss: 4.2390 - val_acc: 0.1970
Epoch 50/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4045 - acc: 0.3728 - val_loss: 4.3706 - val_acc: 0.1906
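
The log above shows a clear overfitting pattern: training loss keeps falling while validation loss starts rising after roughly epoch 10, and validation accuracy stalls around 20%. One common mitigation (a sketch I am adding here, not used in the original run) is to stop training once validation loss stops improving:

# Sketch: stop once val_loss has not improved for a few epochs and restore the
# best weights seen so far; the patience value is illustrative.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss', patience=5,
                                              restore_best_weights=True)
# history = model.fit(train_x, train_y, epochs=50,
#                     validation_data=(test_x, test_y), callbacks=[early_stop])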
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()
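
A compact alternative (a sketch) is to overlay the training and validation curves on one plot, which makes the gap between them easier to see:

# Sketch: plot training and validation accuracy on the same axes.
plt.plot(history.epoch, history.history['acc'], label='train acc')
plt.plot(history.epoch, history.history['val_acc'], label='val acc')
plt.legend()
plt.title('train vs. validation accuracy')
plt.show()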

5. Evaluate the model

In [16]:
# Take a look at the model's predictive ability
pridict_y = model.predict(test_x)
print(pridict_y)
print(test_y)
[[5.87555907e-11 6.25903931e-06 3.10674729e-03 ... 1.29327574e-03
  1.23159494e-04 1.03848870e-03]
 [1.42925246e-05 1.02322013e-03 3.07974895e-03 ... 8.73711240e-03
  1.40226888e-03 4.68649762e-03]
 [4.72828424e-06 5.80412745e-07 3.15064029e-03 ... 1.74543326e-04
  2.82751564e-02 3.59415344e-06]
 ...
 [5.58595697e-04 1.69459265e-04 1.53394813e-18 ... 9.26301080e-09
  5.34058708e-09 1.92464329e-04]
 [8.94686746e-05 1.84288583e-04 3.36396275e-03 ... 2.85315141e-02
  3.48905521e-03 1.34982215e-02]
 [2.32549205e-01 1.74601786e-02 1.74095971e-04 ... 1.05859058e-07
  4.64854483e-03 5.48056385e-04]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 100), dtype=float32)
In [17]:
# Find the index of the maximum value in pridict_y along each row (axis 1)
pridict_y = tf.argmax(pridict_y, axis=1)
print(pridict_y)
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([71 78 42 ... 51 88  0], shape=(10000,), dtype=int64)
tf.Tensor([49 33 72 ... 51 42 70], shape=(10000,), dtype=int64)
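
From these two integer tensors the overall test accuracy can be computed directly (a small sketch; the value should come out near the roughly 20% mentioned in the summary):

# Sketch: fraction of test images whose predicted class matches the true class.
test_acc = np.mean(pridict_y.numpy() == test_y.numpy())
print(test_acc)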
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()