ConvNets:
AlexNet finetune:
- Self-built network, loading a pretrained model to initialize it (a training-step sketch follows after the links below):

    import numpy as np
    import tensorflow as tf

    def load_with_skip(data_path, session, skip_layer):
        # The .npy file stores a dict of {layer_name: [weights, biases]};
        # allow_pickle=True is required on newer NumPy versions.
        data_dict = np.load(data_path, allow_pickle=True).item()
        for key in data_dict:
            if key not in skip_layer:
                with tf.variable_scope(key, reuse=True):
                    for subkey, data in zip(('weights', 'biases'), data_dict[key]):
                        session.run(tf.get_variable(subkey).assign(data))

    # weight_file is the path to the pretrained .npy file; sess is an active tf.Session.
    print('Load pre-trained model: {}'.format(weight_file))
    load_with_skip(weight_file, sess, ['fc8'])  # skip fc8 so the last layer can be re-trained
- https://github.com/joelthchao/tensorflow-finetune-flickr-style
- https://github.com/kratzert/finetune_alexnet_with_tensorflow
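- A minimal sketch of the training step that commonly follows load_with_skip (TensorFlow 1.x): only the freshly initialized fc8 layer is optimized, while the restored layers stay frozen. The placeholder names, sizes, class count, and learning rate below are illustrative assumptions, not code from the linked repos:

    import tensorflow as tf

    num_classes = 2                                  # e.g. a two-class target task
    fc7 = tf.placeholder(tf.float32, [None, 4096])   # features from the pretrained fc7
    labels = tf.placeholder(tf.float32, [None, num_classes])

    # Build a fresh fc8 under the same scope/variable names that load_with_skip skips.
    with tf.variable_scope('fc8'):
        weights = tf.get_variable('weights', [4096, num_classes],
                                  initializer=tf.truncated_normal_initializer(stddev=0.01))
        biases = tf.get_variable('biases', [num_classes],
                                 initializer=tf.zeros_initializer())
        logits = tf.matmul(fc7, weights) + biases

    loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits_v2(labels=labels, logits=logits))

    # Pass only the fc8 variables to the optimizer; the pretrained layers keep their weights.
    fc8_vars = tf.get_collection(tf.GraphKeys.TRAINABLE_VARIABLES, scope='fc8')
    train_op = tf.train.GradientDescentOptimizer(0.001).minimize(loss, var_list=fc8_vars)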
VGG model finetune:
- Custom network; loading the pretrained parameters; very detailed tutorials (a loading sketch follows after this list):
- TensorFlow study notes, CNN part (9): Finetuning, reusing a VGGNet already trained on ImageNet for image recognition
- TensorFlow study notes, CNN part (10): Finetuning, cats vs. dogs, re-training VGGNet for a new task
- https://www.cs.toronto.edu/~frossard/post/vgg16/
- Also: a TensorFlow implementation of VGG16 and VGG19, based on tensorflow-vgg16 and Caffe to TensorFlow.
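- A minimal loading sketch for the VGG case, assuming the vgg16_weights.npz from the link above (arrays named like conv1_1_W / conv1_1_b) and a model object whose parameters list follows the same sorted-key order as in that implementation; the function name and skip logic are illustrative assumptions:

    import numpy as np
    import tensorflow as tf

    def load_vgg16_weights(weight_file, parameters, session, skip_layers=('fc8',)):
        # `parameters` is assumed to be the list of tf.Variable objects in the
        # same order as the sorted keys of the .npz file (conv1_1_W, conv1_1_b, ...).
        weights = np.load(weight_file)
        for i, key in enumerate(sorted(weights.keys())):
            if any(key.startswith(layer) for layer in skip_layers):
                continue  # leave fc8 at its fresh random initialization for the new task
            session.run(parameters[i].assign(weights[key]))

    # Usage (hypothetical): vgg = Vgg16(images) builds the graph and fills vgg.parameters,
    # then: load_vgg16_weights('vgg16_weights.npz', vgg.parameters, sess)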