Learning the Weights in a Multi-Task Loss


Recently I have been thinking about turning collaborative learning into something like federated learning, where each device computes independently and only exchanges parameters, and I planned to borrow ideas from the multi-task framework. While reading up on multi-task learning I came across a paper in the agnostic-model / average-model vein. The authors also provide a toy example, and since my coding is rusty, it is a good chance to get back into practice.

Multi-task

I am not familiar with multi-task learning myself, so I will not attempt a survey here; I will only cover what is relevant to the paper's idea and its code.

Normally the optimization objective is a scalar; once multiple tasks are involved, the objective becomes a vector. The simplest way to reduce that vector back to a scalar is a weighted average over the tasks, but the coefficients \(w\) are hyper-parameters and hard to tune. What the average model / agnostic model does is let the model learn the weights \(w\) itself, which is also the idea of this paper. What I think makes this paper better is that it also regularizes \(w\), as in equation (10) of the original:

\[\frac{1}{2\sigma_1^2}\mathcal{L}_1({\rm W})+\frac{1}{\sigma_2^2}\mathcal{L}_2({\rm W})+\underbrace{\log(\sigma_1\sigma_2)}_{\text{regularization}} \]

As \(\sigma\) grows, the weight of the corresponding \(\mathcal{L}\) shrinks, while the final regularization term keeps \(\sigma\) from growing without bound.
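To see where this form comes from (this is essentially the paper's own derivation, sketched from memory): if a task's output is modeled as a Gaussian with noise scale \(\sigma\), the negative log-likelihood is

\[-\log p(y\mid f^{\rm W}(x))=\frac{1}{2\sigma^2}\left\|y-f^{\rm W}(x)\right\|^2+\log\sigma+\text{const} \]

Summing this over the tasks yields the weighted losses plus \(\log\sigma_1+\log\sigma_2=\log(\sigma_1\sigma_2)\). The second term in equation (10) carries \(1/\sigma_2^2\) rather than \(1/2\sigma_2^2\) because the paper treats the second task as classification with a \(\sigma\)-scaled softmax, which drops the factor of 2.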

I have recently read four or five papers with this learn-the-weights idea, variously called adaptive, agnostic, or average; there is also a pre-2018 Huawei meta-learning paper that dynamically adjusts MAML's learning rates. They all amount to much the same thing.

(I used to look down on this kind of incremental tweaking, but I cannot even manage the tweaks myself /(ㄒoㄒ)/~~! A nine-story tower rises from a heap of earth; even digging holes starts with filling them!)

Code

The author's code is written in Keras. Checking the Keras docs, a custom loss function is only allowed the two inputs y_pred and y_true, while the loss in this paper is more complex: it depends on \(\sigma\), and \(\sigma\) is itself a parameter to be learned. The author therefore implements it by defining a layer that carries the parameters.
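For contrast, this is the entire interface a standard Keras loss gets; a trainable \(\sigma\) has nowhere to live in it (a throwaway sketch of mine, not from the repo):

from keras import backend as K

# Keras passes a loss function exactly two tensors, y_true and y_pred;
# a learnable noise parameter cannot fit in this signature.
def plain_mse(y_true, y_pred):
    return K.mean(K.square(y_true - y_pred), axis=-1)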

from keras.layers import Input, Dense, Lambda, Layer
from keras.initializers import Constant
from keras.models import Model
from keras import backend as K

# Custom loss layer
# Inherits from Layer and must implement build() and call()
class CustomMultiLossLayer(Layer):
    def __init__(self, nb_outputs=2, **kwargs):
        self.nb_outputs = nb_outputs
        self.is_placeholder = True
        super(CustomMultiLossLayer, self).__init__(**kwargs)
        
    def build(self, input_shape=None):
        # Initialize one log-variance per task as a trainable weight:
        # parameters created with add_weight(trainable=True) are learned
        # jointly with the rest of the network
        self.log_vars = []
        for i in range(self.nb_outputs):
            self.log_vars += [self.add_weight(name='log_var' + str(i), shape=(1,),
                                              initializer=Constant(0.), trainable=True)]
        super(CustomMultiLossLayer, self).build(input_shape)

    def multi_loss(self, ys_true, ys_pred):
        # Keras loss functions only accept y_true and y_pred, so this more
        # complex loss reads its extra parameters from layer attributes
        assert len(ys_true) == self.nb_outputs and len(ys_pred) == self.nb_outputs
        loss = 0
        for y_true, y_pred, log_var in zip(ys_true, ys_pred, self.log_vars):
            precision = K.exp(-log_var[0])  # 1/sigma^2, since log_var stores log(sigma^2)
            loss += K.sum(precision * (y_true - y_pred)**2. + log_var[0], -1)
        return K.mean(loss)

    def call(self, inputs):
        ys_true = inputs[:self.nb_outputs]
        ys_pred = inputs[self.nb_outputs:]
        loss = self.multi_loss(ys_true, ys_pred)
        self.add_loss(loss, inputs=inputs)  # adding loss to class _loss attribute
        # We won't actually use the output.
        return K.concatenate(inputs, -1)

# Toy-problem sizes, as in the original notebook: scalar input and outputs,
# a 1024-unit shared hidden layer
Q = 1            # input dimension
D1 = 1           # output dimension of task 1
D2 = 1           # output dimension of task 2
nb_features = 1024

def get_prediction_model():
    inp = Input(shape=(Q,), name='inp')
    x = Dense(nb_features, activation='relu')(inp)
    y1_pred = Dense(D1)(x)
    y2_pred = Dense(D2)(x)
    return Model(inp, [y1_pred, y2_pred])

def get_trainable_model(prediction_model):
    inp = Input(shape=(Q,), name='inp')
    y1_pred, y2_pred = prediction_model(inp)
    y1_true = Input(shape=(D1,), name='y1_true')
    y2_true = Input(shape=(D2,), name='y2_true')
    out = CustomMultiLossLayer(nb_outputs=2)([y1_true, y2_true, y1_pred, y2_pred])
    return Model([inp, y1_true, y2_true], out)

prediction_model = get_prediction_model()
trainable_model = get_trainable_model(prediction_model)
trainable_model.compile(optimizer='adam', loss=None)
assert len(trainable_model.layers[-1].trainable_weights) == 2  # two log_vars, one for each output
assert len(trainable_model.losses) == 1
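One detail worth spelling out: each log_var stores \(s=\log\sigma^2\), so the learned noise scale is \(\sigma=e^{s/2}\); parameterizing the precision as exp(-s) keeps it positive without any constraints. A quick sanity check, following the toy setup of the original notebook (the linear coefficients and noise levels below are made up by me), is to fit two noisy linear targets and see whether the recovered \(\sigma\)s match:

import numpy as np

# two noisy scalar regression targets; the model should recover the noise stds
N = 100
X = np.random.randn(N, Q)
Y1 = 2. * X + 8. + 3. * np.random.randn(N, D1)  # noise std 3
Y2 = 3. * X + 3. + 1. * np.random.randn(N, D2)  # noise std 1

# no explicit targets: the loss was attached inside CustomMultiLossLayer
trainable_model.fit([X, Y1, Y2], epochs=500, batch_size=20, verbose=0)

# log_var holds log(sigma^2), so sigma = exp(log_var / 2)
print([np.exp(K.get_value(log_var[0])) ** 0.5
       for log_var in trainable_model.layers[-1].log_vars])  # roughly [3, 1]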

So the author implements the complex loss by defining his own loss-function layer. Later I found an article on Zhihu explaining how to write custom losses in Keras; its main code is pasted here:

import tensorflow.keras.layers as KL
import tensorflow.keras.models as KM
import tensorflow.keras.backend as K

class WbceLoss(KL.Layer):
    def __init__(self, **kwargs):
        super(WbceLoss, self).__init__(**kwargs)

    def call(self, inputs, **kwargs):
        """
        # inputs:Input tensor, or list/tuple of input tensors.
        如上,父類KL.Layer的call方法明確要求inputs為一個tensor,或者包含多個tensor的列表/元組
        所以這里不能直接接受多個入參,需要把多個入參封裝成列表/元組的形式然后在函數中自行解包,否則會報錯。
        """
        # 解包入參
        y_true, y_weight, y_pred = inputs
        # 復雜的損失函數
        bce_loss = K.binary_crossentropy(y_true, y_pred)
        wbce_loss = K.mean(bce_loss * y_weight)
        # 重點:把自定義的loss添加進層使其生效,同時加入metric方便在KERAS的進度條上實時追蹤
        self.add_loss(wbce_loss, inputs=True)
        self.add_metric(wbce_loss, aggregation="mean", name="wbce_loss")
        return wbce_loss

def my_model():
    # input layers
    input_img = KL.Input([64, 64, 3], name="img")
    input_lbl = KL.Input([64, 64, 1], name="lbl")
    input_weight = KL.Input([64, 64, 1], name="weight")
    
    # one output channel with a sigmoid so predictions are valid probabilities
    # for binary cross-entropy and match the 1-channel label input
    predict = KL.Conv2D(1, [1, 1], padding="same", activation="sigmoid")(input_img)
    my_loss = WbceLoss()([input_lbl, input_weight, predict])

    model = KM.Model(inputs=[input_img, input_lbl, input_weight], outputs=[predict, my_loss])
    model.compile(optimizer="adam")
    return model
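Because the loss is attached with add_loss, compile takes no loss argument and fit takes no targets; everything the loss needs flows into the model as an input. A minimal smoke test (my own sketch, not from the article):

import numpy as np

model = my_model()
imgs = np.random.rand(8, 64, 64, 3).astype("float32")
lbls = np.random.randint(0, 2, size=(8, 64, 64, 1)).astype("float32")
weights = np.ones((8, 64, 64, 1), dtype="float32")

# no y argument: the only training signal is the add_loss term
model.fit([imgs, lbls, weights], epochs=1, batch_size=4)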

References

  1. GitHub: yaringal, multi-task-learning-example
  2. Kendall, Gal, Cipolla, Multi-Task Learning Using Uncertainty to Weigh Losses for Scene Geometry and Semantics
  3. Zhihu: Ziyigogogo, Tensorflow2.0中復雜損失函數實現 (implementing complex loss functions in TensorFlow 2.0)

