Notes on tuning a CNN text-classification model. Key points: adding convolutional layers and FC layers can improve accuracy; putting batch normalization before the FC layers speeds up convergence and sometimes improves accuracy; put dropout after the FC layers; increasing the conv_1d input dimension can improve accuracy, but at 256 it runs out of memory (OOM).


    import tflearn
    from tflearn.layers.conv import conv_1d, max_pool_1d
    from tflearn.layers.core import fully_connected, dropout
    from tflearn.layers.normalization import batch_normalization

    network = tflearn.input_data(shape=[None, max_len], name='input')
    network = tflearn.embedding(network, input_dim=volcab_size, output_dim=32)

    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    #network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    #network = max_pool_1d(network, 2)

    # BN before the FC layers speeds up convergence (and sometimes helps accuracy)
    network = batch_normalization(network)

    #network = fully_connected(network, 512, activation='relu')
    #network = dropout(network, 0.5)
    network = fully_connected(network, 64, activation='relu')
    network = dropout(network, 0.5)

    network = fully_connected(network, 2, activation='softmax')
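The snippet above stops at the softmax layer. For completeness, a minimal training head in the usual tflearn style (a sketch; trainX/trainY are hypothetical names for the padded sequences and one-hot labels, which the original post does not show):

    from tflearn.layers.estimator import regression

    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    model = tflearn.DNN(network, tensorboard_verbose=0)
    # trainX: padded integer sequences; trainY: one-hot labels (hypothetical)
    model.fit(trainX, trainY, n_epoch=1, validation_set=0.1,
              show_metric=True, batch_size=32)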

After a single training epoch, accuracy is a bit over 98.5%.

If instead we use:

    # There are relatively few examples of 1-D CNN networks; this one follows
    # https://github.com/tflearn/tflearn/blob/master/examples/nlp/cnn_sentence_classification.py
    import tensorflow as tf
    import tflearn
    from tflearn.layers.core import input_data, dropout, fully_connected
    from tflearn.layers.conv import conv_1d, global_max_pool
    from tflearn.layers.merge_ops import merge
    from tflearn.layers.estimator import regression

    # Building convolutional network
    network = input_data(shape=[None, 100], name='input')
    network = tflearn.embedding(network, input_dim=10000, output_dim=128)
    branch1 = conv_1d(network, 128, 3, padding='valid', activation='relu', regularizer="L2")
    branch2 = conv_1d(network, 128, 4, padding='valid', activation='relu', regularizer="L2")
    branch3 = conv_1d(network, 128, 5, padding='valid', activation='relu', regularizer="L2")
    network = merge([branch1, branch2, branch3], mode='concat', axis=1)
    network = tf.expand_dims(network, 2)
    network = global_max_pool(network)
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')
    network = regression(network, optimizer='adam', learning_rate=0.001,
                         loss='categorical_crossentropy', name='target')
    # Training
    model = tflearn.DNN(network, tensorboard_verbose=0)
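This network expects integer-encoded sequences padded to length 100 and one-hot labels. A minimal sketch of the data plumbing with tflearn's own utilities (X and Y are hypothetical raw inputs, not from the original post):

    from tflearn.data_utils import pad_sequences, to_categorical

    # X: list of integer-encoded token sequences; Y: list of class ids (hypothetical)
    trainX = pad_sequences(X, maxlen=100, value=0.)
    trainY = to_categorical(Y, 2)
    model.fit({'input': trainX}, {'target': trainY}, n_epoch=1,
              validation_set=0.1, show_metric=True, batch_size=32)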

Accuracy is only a bit over 95%.

Using a VGG-like model (https://github.com/AhmetHamzaEmra/tflearn/blob/master/examples/images/VGG19.py):

    # Same imports as the first model above.
    network = tflearn.input_data(shape=[None, max_len], name='input')
    network = tflearn.embedding(network, input_dim=volcab_size, output_dim=64)
    # VGG-style stacks: two or three 3-wide convs per block, then pooling
    network = conv_1d(network, 64, 3, activation='relu')
    network = conv_1d(network, 64, 3, activation='relu')
    network = max_pool_1d(network, 2, strides=2)
    network = conv_1d(network, 128, 3, activation='relu')
    network = conv_1d(network, 128, 3, activation='relu')
    network = max_pool_1d(network, 2, strides=2)
    network = conv_1d(network, 256, 3, activation='relu')
    network = conv_1d(network, 256, 3, activation='relu')
    network = conv_1d(network, 256, 3, activation='relu')
    network = max_pool_1d(network, 2, strides=2)
    network = batch_normalization(network)
    network = fully_connected(network, 512, activation='relu')
    network = dropout(network, 0.5)
    network = fully_connected(network, 2, activation='softmax')

Accuracy is a bit over 98.5%, slightly higher than the first model, but training takes far too long.

The other variants I tried are essentially all just adding more convolutional layers or FC layers:

    ...
    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = conv_1d(network, 64, 3, activation='relu', regularizer="L2")
    network = max_pool_1d(network, 2)
    ...
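Since these variants differ only in how many conv/pool stages are stacked, a small helper makes the sweep explicit (a sketch; conv_block is a hypothetical name introduced here, not from the original post):

    def conv_block(net, n_filters, n_convs, pool=True):
        # Stack n_convs 3-wide conv_1d layers, then optionally halve the sequence length
        for _ in range(n_convs):
            net = conv_1d(net, n_filters, 3, activation='relu', regularizer="L2")
        if pool:
            net = max_pool_1d(net, 2)
        return net

    # The deeper variant above, expressed as blocks of 1, 1 and 2 convolutions
    network = conv_block(network, 64, 1)
    network = conv_block(network, 64, 1)
    network = conv_block(network, 64, 2)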

 

