Using Caffe boils down to three main steps:
【1】Use convert_imageset.exe to convert the image database into .lmdb or .leveldb format.
【2】Use compute_image_mean.exe to compute the mean image as a preprocessing step, producing a .binaryproto file.
【3】Use caffe.exe to train the CNN.
1) Data preparation
I downloaded a fairly small ImageNet image dataset with 120 classes and fewer than 200 images per class.
2) Generate the train.txt file
The format of train.txt is clearly described online, for example at http://blog.csdn.net/u012878523/article/details/41698209. Each line holds an image path relative to the data root, a space, and an integer class label.
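For illustration only (the class folder names and labels below are made up), the lines in the file look like this:

class001/img_0001.jpg 1
class002/img_0137.jpg 2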
I wrote a small MATLAB script that generates train.txt directly (a Python alternative follows the script):
% Generate train.txt for convert_imageset: each line is "<class>/<image>.jpg <label>"
clear all; clc;
foodDir = 'E:\000Deep Learning000\caffe-windows-3rdparty20151001\data\train_data_v2';
numClasses = 10;
classes = dir(foodDir);
classes = classes([classes.isdir]);             % keep directories only
classes = {classes(3:numClasses+2).name};       % skip '.' and '..'
fp = fopen('train.txt', 'a');
for ci = 1:length(classes)
    ims = dir(fullfile(foodDir, classes{ci}, '*.jpg'))';
    for ii = 1:length(ims)
        % relative path, a space, then the numeric class label
        fprintf(fp, '%s/%s %d\r\n', classes{ci}, ims(ii).name, ci);
    end
end
fclose(fp);
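If MATLAB is not available, here is an equivalent sketch in Python (my own addition, not part of the original workflow); it assumes the same directory layout as the MATLAB script and writes the same "subfolder/image.jpg label" lines:

import os

# Same data directory as in the MATLAB script above -- adjust to your own path
food_dir = r'E:\000Deep Learning000\caffe-windows-3rdparty20151001\data\train_data_v2'
num_classes = 10

# each class lives in its own sub-directory
classes = sorted(d for d in os.listdir(food_dir)
                 if os.path.isdir(os.path.join(food_dir, d)))[:num_classes]

with open('train.txt', 'w') as fp:
    for label, cls in enumerate(classes, start=1):   # labels 1..N, as in the MATLAB script
        for name in sorted(os.listdir(os.path.join(food_dir, cls))):
            if name.lower().endswith('.jpg'):
                # '\n' is translated to '\r\n' by Windows text mode
                fp.write('%s/%s %d\n' % (cls, name, label))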
Now let's start using Caffe.
【1】Use convert_imageset.exe to convert the image database into .lmdb or .leveldb format.
Most of what circulates online are Linux shell scripts, so I wrote a Windows batch command modeled on the imagenet shell script in Caffe's own examples; it can be used directly.
.\bin\convert_imageset.exe ^
    --resize_height=256 --resize_width=256 --shuffle --backend="leveldb" ^
    D:\000\caffe-windows-3rdparty20151001\data\train_data_v2\ ^
    D:\000\caffe-windows-3rdparty20151001\data\train.txt ^
    D:\000\caffe-windows-3rdparty20151001\examples\imagenet\ilsvrc12_train_new2_lmdb_lmdb_lmdb_lmdb
Note that the backend here is set to leveldb; the default is lmdb.
Whichever backend you generate here must also be used later when computing the mean image. I initially generated an lmdb database, and compute_image_mean then failed with the error:
set end of file error
After switching to leveldb, everything worked.
(Screenshots omitted: the failed run with lmdb, the successful run with leveldb, and the resulting output of the conversion.)
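As an aside, if you do stay with the default lmdb backend, you can sanity-check the generated database from Python. This is only a sketch and assumes the lmdb Python package and the Caffe Python bindings (pycaffe) are installed; the path is a placeholder for whatever output directory convert_imageset produced.

import lmdb
import caffe

# Placeholder: point this at the output directory created by convert_imageset
db_path = r'D:\000\caffe-windows-3rdparty20151001\examples\imagenet\ilsvrc12_train_lmdb'

env = lmdb.open(db_path, readonly=True, lock=False)
with env.begin() as txn:
    print('entries: %d' % env.stat()['entries'])    # number of stored images
    key, value = next(txn.cursor().iternext())      # first record
    datum = caffe.proto.caffe_pb2.Datum()
    datum.ParseFromString(value)
    img = caffe.io.datum_to_array(datum)            # (channels, height, width)
    print('%s %s label=%d' % (key, img.shape, datum.label))
env.close()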
【2】Use compute_image_mean.exe to compute the mean image, producing a .binaryproto file.
.\bin\compute_image_mean.exe --backend="leveldb" ^
    D:\000\caffe-windows-3rdparty20151001\examples\imagenet\ilsvrc12_train_lmdb ^
    D:\000\caffe-windows-3rdparty20151001\examples\imagenet\mean.binaryproto
pause
(Screenshot of the compute_image_mean output omitted.)
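To check that the mean file is sane, a minimal pycaffe sketch (assuming the Python bindings are available) can load the .binaryproto and report its shape and per-channel means:

import caffe

# Path to the mean file written by compute_image_mean above
mean_path = r'D:\000\caffe-windows-3rdparty20151001\examples\imagenet\mean.binaryproto'

blob = caffe.proto.caffe_pb2.BlobProto()
with open(mean_path, 'rb') as f:
    blob.ParseFromString(f.read())

mean = caffe.io.blobproto_to_array(blob)[0]   # shape: (channels, height, width)
print(mean.shape)
print(mean.mean(axis=(1, 2)))                 # average value of each channel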
【3】Use caffe.exe to train the CNN
First, look at the help output of caffe.exe:
C:\Users\connor>D:\000\caffe-windows-3rdparty20151001\bin\caffe.exe -help
D:\000\caffe-windows-3rdparty20151001\bin\caffe.exe: command line brew
usage: caffe <command> <args>

commands:
  train           train or finetune a model
  test            score a model
  device_query    show GPU diagnostic information
  time            benchmark model execution time

  Flags from ..\..\src\gflags.cc:
    --flagfile (load flags from file) type: string default: ""
    --fromenv (set flags from the environment [use 'export FLAGS_flag1=value']) type: string default: ""
    --tryfromenv (set flags from the environment if present) type: string default: ""
    --undefok (comma-separated list of flag names that it is okay to specify on the command line even if the program does not define a flag with that name. IMPORTANT: flags in this list that have arguments MUST use the flag=value format) type: string default: ""

  Flags from ..\..\src\gflags_completions.cc:
    --tab_completion_columns (Number of columns to use in output for tab completion) type: int32 default: 80
    --tab_completion_word (If non-empty, HandleCommandLineCompletions() will hijack the process and attempt to do bash-style command line flag completion on this value.) type: string default: ""

  Flags from ..\..\src\gflags_reporting.cc:
    --help (show help on all flags [tip: all flags can have two dashes]) type: bool default: false currently: true
    --helpfull (show help on all flags -- same as -help) type: bool default: false
    --helpmatch (show help on modules whose name contains the specified substr) type: string default: ""
    --helpon (show help on the modules named by this flag value) type: string default: ""
    --helppackage (show help on all modules in the main package) type: bool default: false
    --helpshort (show help on only the main module for this program) type: bool default: false
    --helpxml (produce an xml version of help) type: bool default: false
    --version (show version and build info and exit) type: bool default: false

  Flags from ..\..\tools\caffe.cpp:
    --gpu (Optional; run in GPU mode on given device IDs separated by ','.Use '-gpu all' to run on all available GPUs. The effective training batch size is multiplied by the number of devices.) type: string default: ""
    --iterations (The number of iterations to run.) type: int32 default: 50
    --model (The model definition protocol buffer text file..) type: string default: ""
    --sighup_effect (Optional; action to take when a SIGHUP signal is received: snapshot, stop or none.) type: string default: "snapshot"
    --sigint_effect (Optional; action to take when a SIGINT signal is received: snapshot, stop or none.) type: string default: "stop"
    --snapshot (Optional; the snapshot solver state to resume training.) type: string default: ""
    --solver (The solver definition protocol buffer text file.) type: string default: ""
    --weights (Optional; the pretrained weights to initialize finetuning, separated by ','. Cannot be set simultaneously with snapshot.) type: string default: ""

C:\Users\connor>
There are two main flags here: --solver and --snapshot.
--solver points to the solver.prototxt configuration file.
--snapshot is optional: as the help text above says, it points to a saved solver state used to resume an interrupted training run, i.e. a *.solverstate file written according to snapshot_prefix (for example caffe_alexnet_train_iter_10000.solverstate).
Next, look at the contents of solver.prototxt, which lives in E:\000Deep Learning000\caffe-windows-3rdparty20151001\models\bvlc_alexnet:
net: "models/bvlc_alexnet/train_val.prototxt" test_iter: 1000 test_interval: 1000 base_lr: 0.01 lr_policy: "step" gamma: 0.1 stepsize: 100000 display: 20 max_iter: 450000 momentum: 0.9 weight_decay: 0.0005 snapshot: 10000 snapshot_prefix: "models/bvlc_alexnet/caffe_alexnet_train" solver_mode: GPU
The next part is adapted from the article "caffe下自己的數據訓練和測試" (training and testing your own data with Caffe).
We also need a solving protocol, solver.prototxt. Copy it over and change the path on the first line to our own: net: "examples/myself/train_val.prototxt". From this file we can see that training uses batches of 256 for 450,000 iterations (90 epochs); every 1,000 iterations the network is tested on the validation data; the initial learning rate is 0.01 and is reduced every 100,000 iterations (20 epochs); progress is displayed every 20 iterations; the training weight_decay is 0.0005; and every 10,000 iterations a snapshot of the current state is saved.
That is what the tutorial does. In practice it takes a very long time, so we tweak the settings a little, as summarized below (the full modified file is assembled after the list):
test_iter: 1000 is the number of test batches; since we only have 10 test images, setting it to 10 is enough.
test_interval: 1000 means the network is tested every 1,000 iterations; we change this to every 500 iterations.
base_lr: 0.01 is the base learning rate; because the dataset is small, 0.01 is too aggressive, so we change it to 0.001.
lr_policy: "step" is the learning-rate policy.
gamma: 0.1 is the factor by which the learning rate is multiplied when it is reduced.
stepsize: 100000 reduces the learning rate every 100,000 iterations.
display: 20 prints training information every 20 iterations.
max_iter: 450000 is the maximum number of iterations.
momentum: 0.9 is a training parameter; no need to change it.
weight_decay: 0.0005 is a training parameter; no need to change it.
snapshot: 10000 saves a snapshot every 10,000 iterations; we change this to 2000.
solver_mode: GPU is added as the last line, meaning training runs on the GPU.
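Putting these changes together, the solver.prototxt ends up looking roughly like this (a sketch: the net and snapshot_prefix paths are kept from the original file, and the other values are the ones discussed above):

net: "models/bvlc_alexnet/train_val.prototxt"
test_iter: 10          # only 10 test images
test_interval: 500     # test every 500 iterations
base_lr: 0.001         # small dataset, smaller learning rate
lr_policy: "step"
gamma: 0.1
stepsize: 100000
display: 20
max_iter: 450000
momentum: 0.9
weight_decay: 0.0005
snapshot: 2000         # save a snapshot every 2000 iterations
snapshot_prefix: "models/bvlc_alexnet/caffe_alexnet_train"
solver_mode: GPU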
Open models/bvlc_alexnet/train_val.prototxt and take a look.
For now, look only at the data layers:
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TRAIN
  }
  transform_param {
    mirror: true
    crop_size: 227
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_train_lmdb"
    batch_size: 256
    backend: LEVELDB
  }
}
layer {
  name: "data"
  type: "Data"
  top: "data"
  top: "label"
  include {
    phase: TEST
  }
  transform_param {
    mirror: false
    crop_size: 227
    mean_file: "data/ilsvrc12/imagenet_mean.binaryproto"
  }
  data_param {
    source: "examples/imagenet/ilsvrc12_val_lmdb"
    batch_size: 50
    backend: LEVELDB
  }
}
Here backend: LMDB has to be changed to backend: LEVELDB; note that it must be all uppercase, otherwise it reports an error.
Now caffe.exe can be run directly to train the CNN. The cmd command is:
D:\000\caffe-windows-3rdparty20151001\bin\caffe.exe train --solver=models\bvlc_alexnet\solver.prototxt
Many thanks to my senior labmate 孫滿利 for his guidance during the experiments in this post.
School of Electronic Information Engineering, Tianjin University
Visual Pattern Analysis Laboratory
修宇璇
All rights reserved. Please credit the source when reposting.