Caffe (Part 1): The Convolution Layer


In Caffe, a network's structure is specified in a prototxt file as a series of Layers. Commonly used layer types include: data loading layers, convolution layers, pooling layers, nonlinearity layers, inner-product (fully connected) layers, normalization layers, loss layers, and so on. This post covers the convolution layer.


1. Convolution Layer Overview

First, here is a small example of a convolution layer's configuration (defined in a .prototxt file):

layer {
  name: "conv1"         // name of this layer
  type: "Convolution"   // the layer type (here Convolution)
  bottom: "data"        // name of the layer's input Blob
  top: "conv1"          // name of the layer's output Blob

  // learning-rate multipliers for this layer's weights and bias
  param {
    lr_mult: 1   // learning-rate multiplier for the weights
  }
  param {
    lr_mult: 2   // learning-rate multiplier for the bias
  }

  // parameters of the convolution operation itself
  convolution_param {
    num_output: 20
    kernel_size: 5
    stride: 1
    weight_filler {
      type: "xavier"   // weight initialization method
    }
    bias_filler {
      type: "constant" // bias initialization method
    }
  }
}
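To make the effect of these settings concrete, the spatial output size of a Caffe convolution layer follows the formula (in + 2*pad - kernel) / stride + 1. A minimal Python sketch; the 28x28 input size below is an assumption (e.g. an MNIST image), not part of the prototxt above:

```python
def conv_output_size(in_size, kernel, stride=1, pad=0):
    """Spatial output size of a convolution layer:
    (in + 2*pad - kernel) // stride + 1."""
    return (in_size + 2 * pad - kernel) // stride + 1

# With the conv1 settings above (kernel_size=5, stride=1, no pad)
# and an assumed 28x28 input, each of the 20 filters produces
# a 24x24 feature map, so the output Blob is 20x24x24.
out = conv_output_size(28, kernel=5, stride=1, pad=0)
print(out)  # 24
```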

 

Note: in Caffe's original proto file, the convolution layer's parameters are defined in the ConvolutionParameter message as follows:

message ConvolutionParameter {
  optional uint32 num_output = 1; // The number of outputs for the layer
  optional bool bias_term = 2 [default = true]; // whether to have bias terms

  // Pad, kernel size, and stride are all given as a single value for equal dimensions in all spatial dimensions, or once per spatial dimension.
  repeated uint32 pad = 3; // The padding size; defaults to 0
  repeated uint32 kernel_size = 4; // The kernel size
  repeated uint32 stride = 6; // The stride; defaults to 1
  // Factor used to dilate the kernel, (implicitly) zero-filling the resulting holes. (Kernel dilation is sometimes referred to by its use in the algorithme à trous from Holschneider et al. 1987.)
  repeated uint32 dilation = 18; // The dilation; defaults to 1

  // For 2D convolution only, the *_h and *_w versions may also be used to specify both spatial dimensions.
  optional uint32 pad_h = 9 [default = 0]; // The padding height (2D only)
  optional uint32 pad_w = 10 [default = 0]; // The padding width (2D only)
  optional uint32 kernel_h = 11; // The kernel height (2D only)
  optional uint32 kernel_w = 12; // The kernel width (2D only)
  optional uint32 stride_h = 13; // The stride height (2D only)
  optional uint32 stride_w = 14; // The stride width (2D only)

  optional uint32 group = 5 [default = 1]; // The group size for group conv

  optional FillerParameter weight_filler = 7; // The filler for the weight
  optional FillerParameter bias_filler = 8; // The filler for the bias
  enum Engine {
    DEFAULT = 0;
    CAFFE = 1;
    CUDNN = 2;
  }
  optional Engine engine = 15 [default = DEFAULT];

  // The axis to interpret as "channels" when performing convolution.
  // Preceding dimensions are treated as independent inputs;
  // succeeding dimensions are treated as "spatial".
  // With (N, C, H, W) inputs, and axis == 1 (the default), we perform
  // N independent 2D convolutions, sliding C-channel (or (C/g)-channels, for
  // groups g>1) filters across the spatial axes (H, W) of the input.
  // With (N, C, D, H, W) inputs, and axis == 1, we perform
  // N independent 3D convolutions, sliding (C/g)-channels
  // filters across the spatial axes (D, H, W) of the input.
  optional int32 axis = 16 [default = 1];

  // Whether to force use of the general ND convolution, even if a specific
  // implementation for blobs of the appropriate number of spatial dimensions
  // is available. (Currently, there is only a 2D-specific convolution
  // implementation; for input blobs with num_axes != 2, this option is
  // ignored and the ND implementation will be used.)
  optional bool force_nd_im2col = 17 [default = false];
}
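As a quick sanity check on how num_output, group, and bias_term interact, the number of learnable parameters in a convolution layer can be counted directly: with grouping, each filter only sees in_channels/group input channels. A sketch (the input channel counts below are assumptions for illustration):

```python
def conv_param_count(in_channels, num_output, kernel_h, kernel_w,
                     group=1, bias_term=True):
    """Number of learnable parameters in a convolution layer.

    With group > 1, each output filter convolves over only
    in_channels // group input channels.
    """
    assert in_channels % group == 0 and num_output % group == 0
    weights = num_output * (in_channels // group) * kernel_h * kernel_w
    biases = num_output if bias_term else 0
    return weights + biases

# The conv1 example on an assumed 1-channel input:
# 20 * 1 * 5 * 5 weights + 20 biases = 520 parameters.
print(conv_param_count(1, 20, 5, 5))  # 520
```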

2. Convolution Layer Parameters

Next, the convolution layer's parameters are explained one by one.

(By the definition of a convolution layer, its learned parameters are the filter weights and the bias values; everything else is a hyper-parameter that must be given when defining the model.)

lr_mult: learning-rate multiplier

Placed inside param {}.

This multiplier controls the learning rate: during training, the layer's parameters are updated with a learning rate equal to this multiplier times the base_lr value from the solver.prototxt configuration file, i.e.

learning rate = lr_mult * base_lr

If the layer has two lr_mult entries in the network configuration file, the first is the multiplier for the filter weights and the second is the multiplier for the bias (by convention, the bias multiplier is usually twice the weight multiplier).
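A minimal sketch of this rule, using the two lr_mult values from the example above and a hypothetical base_lr of 0.01 (the actual value comes from solver.prototxt):

```python
base_lr = 0.01       # from solver.prototxt (hypothetical value)
weight_lr_mult = 1   # first param { lr_mult } entry: weights
bias_lr_mult = 2     # second param { lr_mult } entry: bias

# Effective per-parameter learning rates: lr_mult * base_lr.
weight_lr = weight_lr_mult * base_lr  # 0.01
bias_lr = bias_lr_mult * base_lr      # 0.02
print(weight_lr, bias_lr)
```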

convolution_param: the remaining convolution parameters

Placed inside convolution_param {}.

This block sets the remaining parameters of the convolution layer. Some are required; the rest are optional (their default values can be used).

  • Required parameters

  1. num_output: the number of filters in this convolution layer

  2. kernel_size: the filter size (using this parameter alone gives a square filter; in the 2D case the height and width may also differ, in which case they are set with the kernel_h and kernel_w parameters)
  • Optional parameters

  1. stride: the filter stride, default 1

  2. pad: the amount of padding applied to the input image, default 0 (no padding). Note that padding introduces artificial border values, which may be undesirable when the input image is small.
  3. weight_filler: the weight initialization method, used as follows
    weight_filler {
          type: "xavier"  // xavier is one initialization algorithm; "gaussian" is another option; the default is "constant", i.e. all zeros
    }
  4. bias_filler: the bias initialization method
    bias_filler {
          type: "xavier"  // xavier is one initialization algorithm; "gaussian" is another option; the default is "constant", i.e. all zeros
    }
  5. bias_term: whether to use a bias term, default true
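One common way to choose pad: for stride 1 and an odd kernel_size, setting pad = (kernel_size - 1) / 2 keeps the output the same spatial size as the input. A sketch of this rule of thumb (the 28x28 input size is an assumption):

```python
def same_pad(kernel_size):
    """Padding that preserves spatial size for a stride-1
    convolution with an odd kernel size."""
    assert kernel_size % 2 == 1
    return (kernel_size - 1) // 2

def conv_out(in_size, kernel, stride, pad):
    return (in_size + 2 * pad - kernel) // stride + 1

# With kernel_size=5, pad=2 keeps an assumed 28x28 input at 28x28.
p = same_pad(5)
print(p, conv_out(28, 5, 1, p))  # 2 28
```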

 

