Other techniques for regularization
A rough translation, for reference only; please do not repost.
www.cnblogs.com/santian/p/5457412.html
Dropout: Dropout is a radically different technique for regularization. Unlike L1 and L2 regularization, dropout doesn't rely on modifying the cost function. Instead, in dropout we modify the network itself. Let me describe the basic mechanics of how dropout works, before getting into why it works, and what the results are.
Suppose we're trying to train a network:

In particular, suppose we have a training input x and corresponding desired output y. Ordinarily, we'd train by forward-propagating x through the network, and then backpropagating to determine the contribution to the gradient. With dropout, this process is modified. We start by randomly (and temporarily) deleting half the hidden neurons in the network, while leaving the input and output neurons untouched. After doing this, we'll end up with a network along the following lines. Note that the dropout neurons, i.e., the neurons which have been temporarily deleted, are still ghosted in:
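As a small illustrative sketch (the array names and sizes below are assumptions, not from the text), "temporarily deleting" a random half of the hidden neurons can be pictured as multiplying their activations by a 0/1 mask, while the input and output neurons are left untouched:

```python
import numpy as np

rng = np.random.default_rng(0)

n_hidden = 30
hidden_activations = rng.random((n_hidden, 1))     # stand-in hidden-layer activations

# roughly half the entries are 0: those hidden neurons are "deleted" for this pass
dropout_mask = (rng.random((n_hidden, 1)) >= 0.5).astype(float)
thinned_activations = hidden_activations * dropout_mask
```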

We forward-propagate the input xx through the modified network, and then backpropagate the result, also through the modified network. After doing this over a mini-batch of examples, we update the appropriate weights and biases. We then repeat the process, first restoring the dropout neurons, then choosing a new random subset of hidden neurons to delete, estimating the gradient for a different mini-batch, and updating the weights and biases in the network.
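The following is a minimal runnable sketch of that loop, under some stated assumptions: a one-hidden-layer sigmoid network with a quadratic cost, toy data, and illustrative sizes and learning rate (none of these come from the original text). Each mini-batch restores all neurons, samples a fresh random half of the hidden layer to delete, estimates the gradient through the thinned network, and updates the weights and biases:

```python
import numpy as np

rng = np.random.default_rng(1)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n_in, n_hidden, n_out = 4, 20, 2
W1, b1 = rng.standard_normal((n_hidden, n_in)), np.zeros((n_hidden, 1))
W2, b2 = rng.standard_normal((n_out, n_hidden)), np.zeros((n_out, 1))
eta = 0.5                                    # learning rate

X = rng.standard_normal((n_in, 100))         # toy inputs, one column per example
Y = rng.random((n_out, 100))                 # toy targets

for start in range(0, 100, 10):              # mini-batches of 10 examples
    x, y = X[:, start:start + 10], Y[:, start:start + 10]

    # restore all neurons, then delete a new random half of the hidden layer
    mask = (rng.random((n_hidden, 1)) >= 0.5).astype(float)

    # forward pass through the thinned network
    a1 = sigmoid(W1 @ x + b1) * mask
    a2 = sigmoid(W2 @ a1 + b2)

    # backpropagate through the same thinned network (quadratic cost)
    delta2 = (a2 - y) * a2 * (1 - a2)
    delta1 = (W2.T @ delta2) * mask * a1 * (1 - a1)

    # gradient-descent update of the weights and biases
    W2 -= eta * (delta2 @ a1.T) / 10
    b2 -= eta * delta2.mean(axis=1, keepdims=True)
    W1 -= eta * (delta1 @ x.T) / 10
    b1 -= eta * delta1.mean(axis=1, keepdims=True)
```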
By repeating this process over and over, our network will learn a set of weights and biases. Of course, those weights and biases will have been learnt under conditions in which half the hidden neurons were dropped out. When we actually run the full network that means that twice as many hidden neurons will be active. To compensate for that, we halve the weights outgoing from the hidden neurons.
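A hedged sketch of that compensation step, reusing the parameter names from the sketch above (the function itself is illustrative, not from the text): when the full network is run, no neurons are dropped, so the weights outgoing from the hidden neurons are halved to keep the output layer's input on the scale it saw during training.

```python
import numpy as np

def full_network_output(x, W1, b1, W2, b2):
    """Run the full (un-thinned) network, halving the weights outgoing
    from the hidden neurons to compensate for training with dropout."""
    sigmoid = lambda z: 1.0 / (1.0 + np.exp(-z))
    hidden = sigmoid(W1 @ x + b1)            # all hidden neurons active now
    return sigmoid((0.5 * W2) @ hidden + b2)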
This dropout procedure may seem strange and ad hoc. Why would we expect it to help with regularization? To explain what's going on, I'd like you to briefly stop thinking about dropout, and instead imagine training neural networks in the standard way (no dropout). In particular, imagine we train several different neural networks, all using the same training data. Of course, the networks may not start out identical, and as a result after training they may sometimes give different results. When that happens we could use some kind of averaging or voting scheme to decide which output to accept. For instance, if we have trained five networks, and three of them are classifying a digit as a "3", then it probably really is a "3". The other two networks are probably just making a mistake. This kind of averaging scheme is often found to be a powerful (though expensive) way of reducing overfitting. The reason is that the different networks may overfit in different ways, and averaging may help eliminate that kind of overfitting.
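A brief sketch of such a voting scheme (the helper name and the example labels are hypothetical): each independently trained network classifies the digit, and the label the majority agrees on is accepted.

```python
import numpy as np

def majority_vote(predicted_labels):
    """Return the label most networks agreed on, e.g. [3, 3, 3, 5, 8] -> 3."""
    labels, counts = np.unique(np.asarray(predicted_labels), return_counts=True)
    return labels[np.argmax(counts)]

print(majority_vote([3, 3, 3, 5, 8]))   # -> 3
```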
What's this got to do with dropout? Heuristically, when we dropout different sets of neurons, it's rather like we're training different neural networks. And so the dropout procedure is like averaging the effects of a very large number of different networks. The different networks will overfit in different ways, and so, hopefully, the net effect of dropout will be to reduce overfitting.
A related heuristic explanation for dropout is given in one of the earliest papers to use the technique (ImageNet Classification with Deep Convolutional Neural Networks, by Alex Krizhevsky, Ilya Sutskever, and Geoffrey Hinton, 2012): "This technique reduces complex co-adaptations of neurons, since a neuron cannot rely on the presence of particular other neurons. It is, therefore, forced to learn more robust features that are useful in conjunction with many different random subsets of the other neurons." In other words, if we think of our network as a model which is making predictions, then we can think of dropout as a way of making sure that the model is robust to the loss of any individual piece of evidence. In this, it's somewhat similar to L1 and L2 regularization, which tend to reduce weights, and thus make the network more robust to losing any individual connection in the network.
Of course, the true measure of dropout is that it has been very successful in improving the performance of neural networks. The original paper introducing the technique (Improving neural networks by preventing co-adaptation of feature detectors, by Geoffrey Hinton, Nitish Srivastava, Alex Krizhevsky, Ilya Sutskever, and Ruslan Salakhutdinov, 2012; note that the paper discusses a number of subtleties that I have glossed over in this brief introduction) applied it to many different tasks. For us, it's of particular interest that they applied dropout to MNIST digit classification, using a vanilla feedforward neural network along lines similar to those we've been considering. The paper noted that the best result anyone had achieved up to that point using such an architecture was 98.4 percent classification accuracy on the test set. They improved that to 98.7 percent accuracy using a combination of dropout and a modified form of L2 regularization. Similarly impressive results have been obtained for many other tasks, including problems in image and speech recognition, and natural language processing. Dropout has been especially useful in training large, deep networks, where the problem of overfitting is often acute.