xLearn Source Code Analysis: The CalcScore Implementation of FM


Preface

xLearn is an efficient machine-learning library implemented by Chao Ma; the GitHub repository is:

https://github.com/aksnzhy/xlearn

FM (Factorization Machines) is a model that performs particularly well on CTR prediction. It was first proposed in 2010 by Steffen Rendle (then at the University of Konstanz, now at Google).

The FM Model

The FM model equation is:

\[y(\mathbf{x}) = w_0 + \sum_{i=1}^n w_i x_i + \sum_{i=1}^n \sum_{j=i+1}^n \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j \]

At first glance, FM's complexity is \(O(kn^2)\); however, the quadratic term can be rewritten so that the complexity drops to \(O(kn)\). The paper gives the simplification as:

\[\sum_{i=1}^n \sum_{j=i+1}^n \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j = \frac{1}{2} \sum_{f=1}^k \left(\left( \sum_{i=1}^n v_{i, f} x_i \right)^2 - \sum_{i=1}^n v_{i, f}^2 x_i^2 \right) \]

Here is the full derivation:

\[\sum_{i=1}^n \sum_{j=i+1}^n \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j \]

\[= \frac{1}{2}\left(\sum_{i=1}^n \sum_{j=1}^n \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j - \sum_{i=1}^n \langle \mathbf{v}_i, \mathbf{v}_i \rangle x_i x_i \right) \]

\[= \frac{1}{2}\left(\sum_{i=1}^n \sum_{j=1}^n \sum_{f=1}^k v_{if} v_{jf} x_i x_j - \sum_{i=1}^n \sum_{f=1}^k v_{if} v_{if} x_i x_i \right) \]

\[= \frac{1}{2} \sum_{f=1}^k \left(\sum_{i=1}^n \sum_{j=1}^n v_{if} v_{jf} x_i x_j - \sum_{i=1}^n v_{if} v_{if} x_i x_i \right) \]

\[= \frac{1}{2}\sum_{f=1}^k\left( \left(\sum_{i=1}^n v_{if} x_i \right) \left( \sum_{j=1}^n v_{jf} x_j \right) - \sum_{i=1}^n \left(v_{if} x_i\right)^2 \right) \]

\[= \frac{1}{2}\sum_{f=1}^k\left( \left(\sum_{i=1}^n v_{if} x_i \right)^2 - \sum_{i=1}^n \left(v_{if} x_i\right)^2 \right) \]

\[= \frac{1}{2} \sum_{f=1}^k \left(\left( \sum_{i=1}^n v_{i, f} x_i \right)^2 - \sum_{i=1}^n v_{i, f}^2 x_i^2 \right) \]

The CalcScore Implementation in xLearn

// y = sum( (V_i*V_j)(x_i * x_j) )
// Using SSE to accelerate vector operation.
// row is one sample in libsvm format, stored as a sparse vector of Node,
// where each Node holds a feat_id and a feat_val.
// model holds the model parameters.
real_t FMScore::CalcScore(const SparseRow* row,
                          Model& model,
                          real_t norm) {
  /*********************************************************
   *  linear term and bias term                            *
   *********************************************************/
  real_t sqrt_norm = sqrt(norm);
  real_t *w = model.GetParameter_w();
  index_t num_feat = model.GetNumFeature();
  real_t t = 0;
  index_t aux_size = model.GetAuxiliarySize();
  for (SparseRow::const_iterator iter = row->begin();
       iter != row->end(); ++iter) {
    index_t feat_id = iter->feat_id;
    // To avoid unseen feature in Prediction
    if (feat_id >= num_feat) continue;
    // Linear term: accumulate x_i * w_i into t
    t += (iter->feat_val * w[feat_id*aux_size] * sqrt_norm);
  }
  // bias
  // Add the bias w_0 to t
  w = model.GetParameter_b();
  t += w[0];
  /*********************************************************
   *  latent factor                                        *
   *********************************************************/
  // Latent vector length rounded up to a multiple of 4 (aligned_k)
  index_t aligned_k = model.get_aligned_k();
  index_t align0 = model.get_aligned_k() * aux_size;
  std::vector<real_t> sv(aligned_k, 0);
  real_t* s = sv.data();
  for (SparseRow::const_iterator iter = row->begin();
       iter != row->end(); ++iter) {
    index_t j1 = iter->feat_id;
    // To avoid unseen feature in Prediction
    if (j1 >= num_feat) continue;
    real_t v1 = iter->feat_val;  // x_i
    real_t *w = model.GetParameter_v() + j1 * align0;  // v_i
    // SSE: broadcast x_i into a 128-bit register
    __m128 XMMv = _mm_set1_ps(v1*norm);  // x_i
    // Step by 4: four floats fill one 128-bit register,
    // so each iteration processes 4 floats at once
    for (index_t d = 0; d < aligned_k; d += kAlign) {
      __m128 XMMs = _mm_load_ps(s+d);
      __m128 const XMMw = _mm_load_ps(w+d);  // v_i
      // Compute v_if * x_i and sum over i; the result is the
      // k-dimensional vector sv
      XMMs = _mm_add_ps(XMMs, _mm_mul_ps(XMMw, XMMv));
      _mm_store_ps(s+d, XMMs);
    }
  }
  __m128 XMMt = _mm_set1_ps(0.0f);
  for (SparseRow::const_iterator iter = row->begin();
       iter != row->end(); ++iter) {
    index_t j1 = iter->feat_id;
    // To avoid unseen feature in Prediction
    if (j1 >= num_feat) continue;
    real_t v1 = iter->feat_val;  // x_i
    real_t *w = model.GetParameter_v() + j1 * align0;  // v_i
    // SSE: broadcast x_i into a 128-bit register
    __m128 XMMv = _mm_set1_ps(v1*norm);
    for (index_t d = 0; d < aligned_k; d += kAlign) {
      __m128 XMMs = _mm_load_ps(s+d);
      __m128 XMMw = _mm_load_ps(w+d);  // v_i
      __m128 XMMwv = _mm_mul_ps(XMMw, XMMv);  // v_if * x_i
      XMMt = _mm_add_ps(XMMt,
         _mm_mul_ps(XMMwv, _mm_sub_ps(XMMs, XMMwv)));
    }
  }
  XMMt = _mm_hadd_ps(XMMt, XMMt);
  XMMt = _mm_hadd_ps(XMMt, XMMt);
  real_t t_all;
  _mm_store_ss(&t_all, XMMt);
  t_all *= 0.5;
  t_all += t;
  return t_all;
}

The xLearn implementation does not compute the paper's simplified formula directly, but the following equivalent rearrangement:

\[\sum_{i=1}^n \sum_{j=i+1}^n \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j \]

\[= \frac{1}{2}\left(\sum_{i=1}^n \sum_{j=1}^n \langle \mathbf{v}_i, \mathbf{v}_j \rangle x_i x_j - \sum_{i=1}^n \langle \mathbf{v}_i, \mathbf{v}_i \rangle x_i x_i \right) \]

\[= \frac{1}{2}\left(\sum_{i=1}^n \sum_{j=1}^n \sum_{f=1}^k v_{if} v_{jf} x_i x_j - \sum_{i=1}^n \sum_{f=1}^k v_{if} v_{if} x_i x_i \right) \]

\[= \frac{1}{2} \sum_{f=1}^k \left(\sum_{i=1}^n \sum_{j=1}^n v_{if} v_{jf} x_i x_j - \sum_{i=1}^n v_{if} v_{if} x_i x_i \right) \]

\[= \frac{1}{2}\sum_{f=1}^k\left( \sum_{i=1}^n v_{if} x_i \left( \sum_{j=1}^n v_{jf} x_j - v_{if} x_i \right) \right) \]

The first for loop computes \(\sum_{j=1}^n v_{jf} x_j\) (note the index is j) and stores the result in the vector sv; its inner loop performs no summation over f, it simply walks along the latent vector.

The second for loop computes \(\sum_{i=1}^n\) (note the index is i), and its inner loop computes \(\sum_{f=1}^k\). The quantity summed by these two loops is

\[v_{if} x_i \left(\sum_{j=1}^n v_{jf} x_j - v_{if} x_i \right) \]

It reuses the intermediate result sv from the first loop. The sum is accumulated in XMMt; since the inner loop operates on four floats at a time, the four lanes must be added together at the end, which is done with two calls to _mm_hadd_ps.

References:

http://www.algo.uni-konstanz.de/members/rendle/pdf/Rendle2010FM.pdf

https://tech.meituan.com/2016/03/03/deep-understanding-of-ffm-principles-and-practices.html

