TensorFlow 2 (Prep Course) --- 8.1 CIFAR-100 Classification, Layer-Based Approach

1. Summary

One-sentence summary:

A fully connected network is not up to CIFAR-100 classification: a quick test reaches only about 20% accuracy, so a different kind of network is needed.

 

 

2. CIFAR-100 Classification, Layer-Based Approach

Video location in the corresponding course:

 

Steps

1. Load the dataset
2. Split the dataset (into training and test sets)
3. Build the model
4. Train the model
5. Evaluate the model

Task

CIFAR-100 (object classification)


CIFAR-100 is just like CIFAR-10, except that it has 100 classes, each containing 600 images: 500 training images and 100 test images per class. The 100 classes are grouped into 20 superclasses. Each image carries a "fine" label (the class it belongs to) and a "coarse" label (the superclass it belongs to). The list of classes in CIFAR-100 is shown below:

Superclass: classes
aquatic mammals: beaver, dolphin, otter, seal, whale
fish: aquarium fish, flatfish, ray, shark, trout
flowers: orchids, poppies, roses, sunflowers, tulips
food containers: bottles, bowls, cans, cups, plates
fruit and vegetables: apples, mushrooms, oranges, pears, sweet peppers
household electrical devices: clock, computer keyboard, lamp, telephone, television
household furniture: bed, chair, couch, table, wardrobe
insects: bee, beetle, butterfly, caterpillar, cockroach
large carnivores: bear, leopard, lion, tiger, wolf
large man-made outdoor things: bridge, castle, house, road, skyscraper
large natural outdoor scenes: cloud, forest, mountain, plain, sea
large omnivores and herbivores: camel, cattle, chimpanzee, elephant, kangaroo
medium-sized mammals: fox, porcupine, possum, raccoon, skunk
non-insect invertebrates: crab, lobster, snail, spider, worm
people: baby, boy, girl, man, woman
reptiles: crocodile, dinosaur, lizard, snake, turtle
small mammals: hamster, mouse, rabbit, shrew, squirrel
trees: maple, oak, palm, pine, willow
vehicles 1: bicycle, bus, motorcycle, pickup truck, train
vehicles 2: lawn-mower, rocket, streetcar, tank, tractor
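Note that load_data also exposes the superclass labels: passing label_mode="coarse" returns the 20 superclass indices instead of the 100 fine ones. A minimal sketch, not used in the rest of this notebook:

import tensorflow as tf

# "coarse" returns the superclass index (0-19); the default, "fine", returns the class index (0-99).
(cx_train, cy_train), (cx_test, cy_test) = tf.keras.datasets.cifar100.load_data(label_mode="coarse")
print(cy_train.shape)  # (50000, 1), values in 0..19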
In [1]:
import pandas as pd
import numpy as np
import tensorflow as tf
import matplotlib.pyplot as plt

1. Load the dataset

The dataset can be loaded directly from TensorFlow's built-in datasets.

In [2]:
(train_x, train_y), (test_x, test_y) = tf.keras.datasets.cifar100.load_data()
print(train_x.shape, train_y.shape)
(50000, 32, 32, 3) (50000, 1)

These are 32*32 color images; how should the three RGB channels be handled?
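For the fully connected model used later in this notebook, the answer is simply to flatten each image: the three channels are concatenated into a single long feature vector. A minimal sketch of what the Flatten layer will do to one image:

# One 32x32 RGB image holds 32*32*3 = 3072 values; a dense layer treats them
# as one flat feature vector and ignores the spatial structure entirely.
print(train_x[0].shape)              # (32, 32, 3)
print(train_x[0].reshape(-1).shape)  # (3072,)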

In [3]:
plt.imshow(train_x[0])
plt.show()
In [4]:
plt.figure()
plt.imshow(train_x[1])
plt.figure()
plt.imshow(train_x[2])
plt.show()
In [5]:
print(test_y) 
[[49]
 [33]
 [72]
 ...
 [51]
 [42]
 [70]]
In [6]:
# Pixel values (RGB)
np.max(train_x[0])
Out[6]:
255

2. Split the dataset (into training and test sets)

The previous step already did the splitting: load_data returns separate training and test sets.

In [7]:
# How do we normalize image data?
# Simply divide by 255
train_x = train_x/255
test_x = test_x/255
In [8]:
# Pixel values (RGB)
np.max(train_x[0])
Out[8]:
1.0
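Dividing by 255 scales every pixel to [0, 1], which is enough here. Another common choice, shown below as an untested alternative, is per-channel standardization (zero mean, unit variance):

# Untested alternative: standardize each RGB channel using training-set statistics.
mean = train_x.mean(axis=(0, 1, 2), keepdims=True)  # shape (1, 1, 1, 3)
std = train_x.std(axis=(0, 1, 2), keepdims=True)
# train_x = (train_x - mean) / std
# test_x = (test_x - mean) / std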
In [9]:
train_y=train_y.flatten()
test_y=test_y.flatten()
train_y = tf.one_hot(train_y, depth=100)
test_y = tf.one_hot(test_y, depth=100)
print(test_y.shape)
(10000, 100)
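Side note: the one-hot step is optional. Keras also accepts the raw integer labels if the model is compiled with loss='sparse_categorical_crossentropy'. A small self-contained sketch of that variant (the model and variable names here are illustrative, not the ones used below):

# Alternative not used in this notebook: keep the integer labels (0..99)
# and let the sparse loss handle them, avoiding a (50000, 100) one-hot matrix.
(ax_train, ay_train), _ = tf.keras.datasets.cifar100.load_data()
alt_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(100, activation='softmax'),
])
alt_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',  # takes integer class ids directly
                  metrics=['acc'])
# alt_model.fit(ax_train / 255.0, ay_train, epochs=1)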

3. Build the model

What kind of model should we build?

The input is 32*32*3-dimensional and the output is a label, so this is a 100-class classification problem.

Do we need one-hot encoding? With one-hot encoding, the output is 100-dimensional.

In other words 32*32*3 -> n -> 100; we can try 3072 -> 1024 -> 512 -> 256 -> 128 -> 100.

In [10]:
# Build the container
model = tf.keras.Sequential()
# Input layer
# Turn the multi-dimensional data (50000, 32, 32, 3) into one dimension,
# i.e. flatten each image into a vector
model.add(tf.keras.layers.Flatten(input_shape=(32,32,3)))
# Hidden layers
model.add(tf.keras.layers.Dense(1024,activation='relu'))
model.add(tf.keras.layers.Dense(512,activation='relu'))
model.add(tf.keras.layers.Dense(256,activation='relu'))
model.add(tf.keras.layers.Dense(128,activation='relu'))
# Output layer
model.add(tf.keras.layers.Dense(100,activation='softmax'))
# Model structure
model.summary()
Model: "sequential"
_________________________________________________________________
Layer (type)                 Output Shape              Param #   
=================================================================
flatten (Flatten)            (None, 3072)              0         
_________________________________________________________________
dense (Dense)                (None, 1024)              3146752   
_________________________________________________________________
dense_1 (Dense)              (None, 512)               524800    
_________________________________________________________________
dense_2 (Dense)              (None, 256)               131328    
_________________________________________________________________
dense_3 (Dense)              (None, 128)               32896     
_________________________________________________________________
dense_4 (Dense)              (None, 100)               12900     
=================================================================
Total params: 3,848,676
Trainable params: 3,848,676
Non-trainable params: 0
_________________________________________________________________

Frustratingly, adding more layers (for example, a 32-unit layer between the 128-unit layer and the 100-unit output) does not improve accuracy.
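Before switching architectures entirely, regularization is another knob worth turning, since the training run below overfits heavily. A sketch of the same stack with Dropout layers inserted, untested in this notebook:

# Untested variant: identical dense stack with Dropout between the wide layers,
# which sometimes narrows the gap between training and validation accuracy.
drop_model = tf.keras.Sequential([
    tf.keras.layers.Flatten(input_shape=(32, 32, 3)),
    tf.keras.layers.Dense(1024, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(512, activation='relu'),
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(128, activation='relu'),
    tf.keras.layers.Dense(100, activation='softmax'),
])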

4. Train the model

In [11]:
# Configure the optimizer and loss function
model.compile(optimizer='adam',loss='categorical_crossentropy',metrics=['acc'])
# Start training
history = model.fit(train_x,train_y,epochs=50,validation_data=(test_x,test_y))
Epoch 1/50
1563/1563 [==============================] - 8s 5ms/step - loss: 4.1911 - acc: 0.0530 - val_loss: 3.9369 - val_acc: 0.0906
Epoch 2/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.8389 - acc: 0.1020 - val_loss: 3.7550 - val_acc: 0.1150
Epoch 3/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.6482 - acc: 0.1374 - val_loss: 3.5906 - val_acc: 0.1482
Epoch 4/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.5238 - acc: 0.1581 - val_loss: 3.5512 - val_acc: 0.1663
Epoch 5/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.4383 - acc: 0.1739 - val_loss: 3.4744 - val_acc: 0.1747
Epoch 6/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.3744 - acc: 0.1844 - val_loss: 3.4832 - val_acc: 0.1744
Epoch 7/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.3140 - acc: 0.1949 - val_loss: 3.4285 - val_acc: 0.1791
Epoch 8/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.2697 - acc: 0.2050 - val_loss: 3.4093 - val_acc: 0.1893
Epoch 9/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.2257 - acc: 0.2105 - val_loss: 3.4472 - val_acc: 0.1818
Epoch 10/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.1836 - acc: 0.2178 - val_loss: 3.4151 - val_acc: 0.1963
Epoch 11/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.1515 - acc: 0.2243 - val_loss: 3.3867 - val_acc: 0.1978
Epoch 12/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.1104 - acc: 0.2314 - val_loss: 3.4266 - val_acc: 0.1972
Epoch 13/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.0777 - acc: 0.2369 - val_loss: 3.4181 - val_acc: 0.2014
Epoch 14/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.0521 - acc: 0.2422 - val_loss: 3.4320 - val_acc: 0.2001
Epoch 15/50
1563/1563 [==============================] - 8s 5ms/step - loss: 3.0246 - acc: 0.2464 - val_loss: 3.5107 - val_acc: 0.1892
Epoch 16/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9955 - acc: 0.2531 - val_loss: 3.3983 - val_acc: 0.2133
Epoch 17/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9644 - acc: 0.2581 - val_loss: 3.4868 - val_acc: 0.1997
Epoch 18/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9367 - acc: 0.2622 - val_loss: 3.4433 - val_acc: 0.2090
Epoch 19/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.9088 - acc: 0.2675 - val_loss: 3.4769 - val_acc: 0.2041
Epoch 20/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8881 - acc: 0.2698 - val_loss: 3.5843 - val_acc: 0.1935
Epoch 21/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8584 - acc: 0.2801 - val_loss: 3.4979 - val_acc: 0.2105
Epoch 22/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8405 - acc: 0.2812 - val_loss: 3.5163 - val_acc: 0.2085
Epoch 23/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.8147 - acc: 0.2871 - val_loss: 3.6058 - val_acc: 0.2061
Epoch 24/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7974 - acc: 0.2913 - val_loss: 3.5679 - val_acc: 0.2060
Epoch 25/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7729 - acc: 0.2948 - val_loss: 3.5804 - val_acc: 0.2073
Epoch 26/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7558 - acc: 0.2960 - val_loss: 3.5837 - val_acc: 0.2091
Epoch 27/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7289 - acc: 0.3037 - val_loss: 3.7283 - val_acc: 0.1973
Epoch 28/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.7123 - acc: 0.3052 - val_loss: 3.6379 - val_acc: 0.2017
Epoch 29/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6962 - acc: 0.3084 - val_loss: 3.7487 - val_acc: 0.1953
Epoch 30/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6823 - acc: 0.3162 - val_loss: 3.7594 - val_acc: 0.1972
Epoch 31/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6641 - acc: 0.3151 - val_loss: 3.7102 - val_acc: 0.2055
Epoch 32/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6385 - acc: 0.3220 - val_loss: 3.8158 - val_acc: 0.2010
Epoch 33/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6203 - acc: 0.3252 - val_loss: 3.8426 - val_acc: 0.2002
Epoch 34/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.6031 - acc: 0.3311 - val_loss: 3.7780 - val_acc: 0.1999
Epoch 35/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5957 - acc: 0.3317 - val_loss: 3.9130 - val_acc: 0.1952
Epoch 36/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5716 - acc: 0.3388 - val_loss: 3.9938 - val_acc: 0.1987
Epoch 37/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5627 - acc: 0.3393 - val_loss: 3.9578 - val_acc: 0.1998
Epoch 38/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5383 - acc: 0.3411 - val_loss: 3.9641 - val_acc: 0.2031
Epoch 39/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5338 - acc: 0.3463 - val_loss: 3.9104 - val_acc: 0.2030
Epoch 40/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5247 - acc: 0.3471 - val_loss: 4.0854 - val_acc: 0.1999
Epoch 41/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.5074 - acc: 0.3519 - val_loss: 4.1345 - val_acc: 0.1980
Epoch 42/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4867 - acc: 0.3541 - val_loss: 4.1529 - val_acc: 0.2006
Epoch 43/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4853 - acc: 0.3570 - val_loss: 4.1271 - val_acc: 0.1992
Epoch 44/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4717 - acc: 0.3585 - val_loss: 4.1661 - val_acc: 0.2003
Epoch 45/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4626 - acc: 0.3632 - val_loss: 4.2586 - val_acc: 0.1908
Epoch 46/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4456 - acc: 0.3648 - val_loss: 4.2223 - val_acc: 0.2022
Epoch 47/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4449 - acc: 0.3650 - val_loss: 4.1411 - val_acc: 0.1996
Epoch 48/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4227 - acc: 0.3688 - val_loss: 4.4417 - val_acc: 0.1952
Epoch 49/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4129 - acc: 0.3724 - val_loss: 4.2390 - val_acc: 0.1970
Epoch 50/50
1563/1563 [==============================] - 8s 5ms/step - loss: 2.4045 - acc: 0.3728 - val_loss: 4.3706 - val_acc: 0.1906
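The log above is a textbook overfitting pattern: training loss keeps falling while validation loss stops improving after roughly a dozen epochs and then climbs steadily, with validation accuracy stuck near 20%. A hedged sketch of how an EarlyStopping callback could cut such a run short (not applied in this training run):

# Stop once val_loss has not improved for 5 epochs and keep the best weights.
early_stop = tf.keras.callbacks.EarlyStopping(monitor='val_loss',
                                              patience=5,
                                              restore_best_weights=True)
# history = model.fit(train_x, train_y, epochs=50,
#                     validation_data=(test_x, test_y),
#                     callbacks=[early_stop])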
In [12]:
plt.plot(history.epoch,history.history.get('loss'))
plt.title("train data loss")
plt.show()
In [13]:
plt.plot(history.epoch,history.history.get('val_loss'))
plt.title("test data loss")
plt.show()
In [14]:
plt.plot(history.epoch,history.history.get('acc'))
plt.title("train data acc")
plt.show()
In [15]:
plt.plot(history.epoch,history.history.get('val_acc'))
plt.title("test data acc")
plt.show()

5. Evaluate the model

In [16]:
# Take a look at the model's predictive ability
pridict_y=model.predict(test_x)
print(pridict_y)
print(test_y)
[[5.87555907e-11 6.25903931e-06 3.10674729e-03 ... 1.29327574e-03
  1.23159494e-04 1.03848870e-03]
 [1.42925246e-05 1.02322013e-03 3.07974895e-03 ... 8.73711240e-03
  1.40226888e-03 4.68649762e-03]
 [4.72828424e-06 5.80412745e-07 3.15064029e-03 ... 1.74543326e-04
  2.82751564e-02 3.59415344e-06]
 ...
 [5.58595697e-04 1.69459265e-04 1.53394813e-18 ... 9.26301080e-09
  5.34058708e-09 1.92464329e-04]
 [8.94686746e-05 1.84288583e-04 3.36396275e-03 ... 2.85315141e-02
  3.48905521e-03 1.34982215e-02]
 [2.32549205e-01 1.74601786e-02 1.74095971e-04 ... 1.05859058e-07
  4.64854483e-03 5.48056385e-04]]
tf.Tensor(
[[0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 ...
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]
 [0. 0. 0. ... 0. 0. 0.]], shape=(10000, 100), dtype=float32)
In [17]:
# Find the index of the maximum value in pridict_y, row by row
pridict_y = tf.argmax(pridict_y, axis=1)
print(pridict_y)
test_y = tf.argmax(test_y, axis=1)
print(test_y)
tf.Tensor([71 78 42 ... 51 88  0], shape=(10000,), dtype=int64)
tf.Tensor([49 33 72 ... 51 42 70], shape=(10000,), dtype=int64)
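Comparing the two tensors above element by element gives the overall test accuracy, which matches the roughly 19-20% val_acc seen during training. A small sketch:

# Fraction of test images whose predicted class matches the true label.
acc = tf.reduce_mean(tf.cast(tf.equal(pridict_y, test_y), tf.float32))
print(acc.numpy())  # roughly 0.19 for this run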
In [18]:
plt.figure()
plt.imshow(test_x[0])
plt.figure()
plt.imshow(test_x[1])
plt.figure()
plt.imshow(test_x[2])
plt.figure()
plt.imshow(test_x[3])
plt.show()
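As the summary at the top says, a fully connected network tops out around 20% on CIFAR-100, so a different architecture is needed. For reference, a minimal convolutional sketch, untested in this notebook, that would be the natural next step:

# Untested sketch: a small CNN keeps the 32x32x3 spatial structure instead of
# flattening it away, which is why it typically does far better on CIFAR-100.
cnn = tf.keras.Sequential([
    tf.keras.layers.Conv2D(32, 3, activation='relu', input_shape=(32, 32, 3)),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Conv2D(64, 3, activation='relu'),
    tf.keras.layers.MaxPooling2D(),
    tf.keras.layers.Flatten(),
    tf.keras.layers.Dense(256, activation='relu'),
    tf.keras.layers.Dense(100, activation='softmax'),
])
cnn.compile(optimizer='adam',
            loss='categorical_crossentropy',
            metrics=['acc'])
# cnn.fit(train_x, train_y, epochs=20, validation_data=(test_x, test_y))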