Explaining the Word2Vec Code in Python/TensorFlow


Preface:

As a serious deep-learning enthusiast, after working through the theory I had long wanted to get hands-on with a project, to learn how the frameworks and network architectures are used in practice. The intention was good, but opportunities were hard to come by. A suitable project finally came along recently, and while using it for practice I noticed that people in the various machine-learning and TensorFlow framework groups run into similar questions. So I would like to use this project to share some of my understanding and experience from running the code. I hope it proves useful for your own work and draws comments and corrections from professionals in the field. Thanks for your support!

I will split this first blog post into two parts. This part explains how the official basic version of Word2Vec shipped with TensorFlow is constructed, and how to build a CBOW model to fill in the architecture that the provided version lacks. In the next part I will focus on comparing the results of the TensorFlow basic and optimised versions with the gensim version of Word2Vec.

Code walkthrough:

First of all, the basic tutorial provided with TensorFlow already explains what Word2Vec is and how TensorFlow builds the network to train it. The tutorial can be found here, and the code for this basic version can be found here.

The code looks disorganised at first, but it is actually quite straightforward. First, line 61 caps the demo at 50000 distinct words. Then, inside the build_dataset(words) function, line 65 shows off Python's expressiveness by tallying the whole input in a single line: after the UNK entry (UNK stands for "unknown", the placeholder for rare words whose frequency falls below the cut), extend appends the vocabulary_size-1 most frequent words, counted from high to low, so any word that does not make it into the top 49999 is squeezed out of count. Once count is formed, dictionary is built from it: each word becomes a key whose value is its rank in the frequency ordering (the raw counts are dropped). Next, the input words are converted into their codes in dictionary; finally, the function counts how many input words are not in dictionary, adds that number to UNK's count, and builds reverse_dictionary, which maps the codes back to the words, ordered from the most frequent to the least. With this, build_dataset has rebuilt the input data and produced the code-word lookup table: data will be used to train the model, while dictionary and reverse_dictionary serve as the translation table for relating vectors back to words at the end.

What if you do not want to cap the dictionary at vocabulary_size? The answer is simple. Mikolov's original paper suggests merely discarding words that occur fewer than roughly 3 to 10 times, which we can achieve by modifying the function as follows:

def build_dataset(words, min_cut_freq):
  count_org = [['UNK', -1]]
  count_org.extend(collections.Counter(words).most_common()) # collect the frequency of every word, with no cap
  count = [['UNK', -1]]
  for word, c in count_org:
    word_tuple = [word, c]
    if word == 'UNK':   # keep the UNK slot for later use
        count[0][1] = c
        continue
    if c > min_cut_freq: # min_cut_freq is the cut-off parameter: words whose count is not above it are dropped
        count.append(word_tuple)
  dictionary = dict()
  for word, _ in count:
    dictionary[word] = len(dictionary)
  data = list()
  unk_count = 0
  for word in words:
    if word in dictionary:
      index = dictionary[word]
    else:
      index = 0  # dictionary['UNK']
      unk_count += 1
    data.append(index)
  count[0][1] = unk_count
  reverse_dictionary = dict(zip(dictionary.values(), dictionary.keys()))
  return data, count, dictionary, reverse_dictionary
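
As a quick sanity check, here is a minimal usage sketch of my own (not part of the original code; the toy word list is made up for illustration) that calls the modified build_dataset with min_cut_freq = 1, so every word seen only once is folded into UNK:

import collections

words = ['the', 'quick', 'brown', 'fox', 'the', 'lazy', 'dog', 'the', 'fox']
data, count, dictionary, reverse_dictionary = build_dataset(words, min_cut_freq=1)

print(count)                  # [['UNK', 4], ['the', 3], ['fox', 2]]
print(dictionary)             # word -> rank, e.g. {'UNK': 0, 'the': 1, 'fox': 2} (printing order may vary)
print(data)                   # the corpus re-encoded as ranks: [1, 0, 0, 2, 1, 0, 0, 1, 2]
print(reverse_dictionary[1])  # 'the'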

Next, generate_batch at line 91 of the source is the real entry point for building the skip-gram model, not the graph defined after with graph.as_default() at line 137. What follows line 137 merely builds a simple MLP so the tensors can flow through the model; it is the tensors and their targets that actually define the model. If you read carefully, you will see that for an input sentence such as "蝙蝠俠戰勝了超人,美國隊長卻被鋼鐵俠暴打" ("Batman defeated Superman, but Captain America got beaten up by Iron Man"), build_dataset might encode 蝙蝠俠 (Batman) as 3, 戰勝了 (defeated) as 90, 超人 (Superman) as 600, 美國隊長 (Captain America) as 58, 被 as 77, 鋼鐵俠 (Iron Man) as 888 and 暴打 (beaten up) as 965, so the sentence becomes [3, 90, 600, 58, 77, 888, 965]. Assume the window size is 3 and the model is skip-gram; then generate_batch, starting from 90, outputs the batch [90, 90, 600, 600, 58, 58, 77, 77, 888, 888] and the targets [3, 600, 90, 58, 600, 77, 58, 888, 77, 965]. So how do we build a CBOW model? It is simple: the input and prediction of CBOW are exactly the reverse of skip-gram, so swapping batch at line 109 with labels at line 110 does the trick. The code is as follows:

def generate_cbow_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  batch = np.ndarray(shape=(batch_size), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1 # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size // num_skips):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [ skip_window ]
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      # Skip-gram pairing (the original code):
      #   batch[i * num_skips + j] = buffer[skip_window]
      #   labels[i * num_skips + j, 0] = buffer[target]
      # CBOW pairing: swap the two lines above, so the context word goes into
      # batch and the centre word becomes the label.
      batch[i * num_skips + j] = buffer[target]
      labels[i * num_skips + j, 0] = buffer[skip_window]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  return batch, labels

With that, all we need to do is replace generate_batch in the later call batch_inputs, batch_labels = generate_batch(batch_size, num_skips, skip_window) with our CBOW batch function.
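
To see the flip in action, here is a small hypothetical driver of mine (not in the original post) that runs the function above on the toy encoding [3, 90, 600, 58, 77, 888, 965] from the earlier example; it assumes the function definition above plus the module-level globals it relies on:

import collections
import random
import numpy as np

data = [3, 90, 600, 58, 77, 888, 965]  # the encoded toy sentence from the text
data_index = 0

batch, labels = generate_cbow_batch(batch_size=8, num_skips=2, skip_window=1)
print(batch)           # context words, e.g. [3 600 90 58 ...] (order within each pair is random)
print(labels.ravel())  # centre words, e.g. [90 90 600 600 ...]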

Important update (2016-05-21):

Thanks to Professor Chen of Shenzhen University for recommending the word-embedding paper How to Generate a Good Word Embedding. The paper not only explains how to analyse the quality of word vectors but also gives a thorough account of the differences between the models. While reading it, I realised that the difference between Skip-Gram and CBOW is not just that their inputs and outputs are swapped. There is another notable difference in the model itself: CBOW's input layer is a sum, i.e. the average of the input vectors, whereas Skip-gram uses a single word to stand for the context, i.e. one of the context words as the representation of the context. With this in mind, looking back at the generate_cbow_batch code above, the problem is that the expected batch and labels should not be [3, 600, 90, 58, 600, 77, 58, 888, 77, 965] and [90, 90, 600, 600, 58, 58, 77, 77, 888, 888]; instead the input should be [[3, 600], [90, 58], [600, 77], [58, 888], [77, 965]] and the output [90, 600, 58, 77, 888]. How do we change generate_cbow_batch to do this? The change is simple, as follows:

def generate_cbow_batch(batch_size, num_skips, skip_window):
  global data_index
  assert batch_size % num_skips == 0
  assert num_skips <= 2 * skip_window
  # batch is now a 2-D array; each row holds the context words of one centre word
  batch = np.ndarray(shape=(batch_size, num_skips), dtype=np.int32)
  labels = np.ndarray(shape=(batch_size, 1), dtype=np.int32)
  span = 2 * skip_window + 1 # [ skip_window target skip_window ]
  buffer = collections.deque(maxlen=span)
  for _ in range(span):
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  for i in range(batch_size):
    target = skip_window  # target label at the center of the buffer
    targets_to_avoid = [ skip_window ]
    # temporary array that collects the context words before writing them into batch
    batch_temp = np.ndarray(shape=(num_skips), dtype=np.int32)
    for j in range(num_skips):
      while target in targets_to_avoid:
        target = random.randint(0, span - 1)
      targets_to_avoid.append(target)
      batch_temp[j] = buffer[target]
    batch[i] = batch_temp
    labels[i,0] = buffer[skip_window]
    buffer.append(data[data_index])
    data_index = (data_index + 1) % len(data)
  return batch, labels
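
Again a small illustrative driver (my addition, with made-up parameters) showing the shapes the revised function produces on the same toy encoding; batch is now two-dimensional, one row of context words per centre word:

import collections
import random
import numpy as np

data = [3, 90, 600, 58, 77, 888, 965]  # the encoded toy sentence from the text
data_index = 0

batch, labels = generate_cbow_batch(batch_size=4, num_skips=2, skip_window=1)
print(batch.shape)   # (4, 2): one row of context words per example, e.g. first row ~ [3, 600]
print(labels.shape)  # (4, 1): the centre word of each example, e.g. first label 90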

Then, because CBOW differs structurally from Skip-Gram, we need an intermediate layer that sums the context embeddings and averages them before passing them to the output. So we make the following changes to TensorFlow's skip-gram model:

graph = tf.Graph()

with graph.as_default():
  
  # Input data.
  
  # Change 1:
  #---------------------------------------------------------------------------------------------------------------
  # The original skip-gram input is a single word per example, so its placeholder has shape [batch_size]
  #train_inputs = tf.placeholder(tf.int32, shape=[batch_size]) 
  # For CBOW each example carries a whole context, so the input becomes batch_size x context
  train_inputs = tf.placeholder(tf.int32,shape=[batch_size, skip_window * 2])
  #---------------------------------------------------------------------------------------------------------------

  train_labels = tf.placeholder(tf.int32, shape=[batch_size, 1])
  valid_dataset = tf.constant(valid_examples, dtype=tf.int32)

  # Ops and variables pinned to the CPU because of missing GPU implementation
  with tf.device('/cpu:0'):
    # Look up embeddings for inputs.
    embeddings = tf.Variable(
        tf.random_uniform([vocabulary_size, embedding_size], -1.0, 1.0))
    # Embedding size is calculated as shape(train_inputs) + shape(embeddings)[1:]
    embed = tf.nn.embedding_lookup(embeddings, train_inputs)
    
    # Change 2:
    #---------------------------------------------------------------------------------------------------------------
    # First sum the embed tensor over the context dimension, then average.
    # Note that the second argument of reduce_sum is 1: if batch_size is 200,
    # the window gives 4 context words and the embedding size is 200, then
    # embed is a 200 x 4 x 200 tensor (200 examples, 4 context words each,
    # 200 dimensions per word vector). We want to sum over the dimension of
    # size 4, i.e. axis 1; axis 0 would sum over the 200 examples and axis 2
    # over the components of each word vector.
    reduced_embed = tf.div(tf.reduce_sum(embed, 1), skip_window*2)
    #---------------------------------------------------------------------------------------------------------------

    # Construct the variables for the NCE loss
    nce_weights = tf.Variable(
        tf.truncated_normal([vocabulary_size, embedding_size],
                            stddev=1.0 / math.sqrt(embedding_size)))
    nce_biases = tf.Variable(tf.zeros([vocabulary_size]))

  # Compute the average NCE loss for the batch.
  # tf.nce_loss automatically draws a new sample of the negative labels each
  # time we evaluate the loss.
  loss = tf.reduce_mean(
      tf.nn.nce_loss(nce_weights, nce_biases, reduced_embed, train_labels,
                     num_sampled, vocabulary_size))

  # Construct the SGD optimizer using a learning rate of 1.0.
  optimizer = tf.train.GradientDescentOptimizer(1.0).minimize(loss)

  # Compute the cosine similarity between minibatch examples and all embeddings.
  norm = tf.sqrt(tf.reduce_sum(tf.square(embeddings), 1, keep_dims=True))
  normalized_embeddings = embeddings / norm
  valid_embeddings = tf.nn.embedding_lookup(
      normalized_embeddings, valid_dataset)
  similarity = tf.matmul(
      valid_embeddings, normalized_embeddings, transpose_b=True)

  # Add variable initializer.
  init = tf.initialize_all_variables()

# Step 5: Begin training.
num_steps = 100001

with tf.Session(graph=graph) as session:
  # We must initialize all variables before we use them.
  init.run()
  print("Initialized")

  average_loss = 0
  for step in xrange(num_steps):
    # Change 3:
    #---------------------------------------------------------------------------------------------------------------
    # Simply swap generate_batch (or generate_skipgram_batch) for generate_cbow_batch here
    batch_inputs, batch_labels = generate_cbow_batch(
        batch_size, num_skips, skip_window)
    #---------------------------------------------------------------------------------------------------------------

    feed_dict = {train_inputs : batch_inputs, train_labels : batch_labels}

    # We perform one update step by evaluating the optimizer op (including it
    # in the list of returned values for session.run()
    _, loss_val = session.run([optimizer, loss], feed_dict=feed_dict)
    average_loss += loss_val
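
The original tutorial's training loop goes on to report the loss periodically; the excerpt above stops here. A hedged sketch of that continuation, placed inside the same for loop (the 2000-step reporting interval follows the tutorial and is an assumption about your copy):

    if step % 2000 == 0:
      if step > 0:
        average_loss /= 2000
      # the running average loss is an estimate over the last 2000 batches
      print("Average loss at step ", step, ": ", average_loss)
      average_loss = 0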

Running the program as a trial, we obtained the following results:

 Nearest to to: cruel, must, would, should, will, could, nigeria, captive,

 Nearest to may: can, would, could, will, might, must, should, cannot,

 Nearest to was: is, had, has, were, became, be, been, perceive,

 Nearest to into: through, delicious, from, comrades, reflexive, pellets, awarding, slowly,

 Nearest to some: many, these, any, various, several, both, their, wise,

 Nearest to that: which, meadow, how, battlefront, however, powell, animism, this,

 Nearest to also: never, still, often, actually, sometimes, usually, originally, below,

 Nearest to are: were, have, is, be, include, do, sprites, been,

 Nearest to new: nominally, dns, fermentable, final, proprietorships, aloe, junior, reservoirs,

 Nearest to their: its, his, her, the, your, some, my, whose,

 Nearest to years: decades, year, history, times, days, months, marmoset, wrangler,

 Nearest to there: they, it, she, he, these, generally, lemon, we,

 Nearest to th: eight, zero, nine, plasticizers, fairies, characteristic, documentation, anecdotes,

 Nearest to many: some, several, these, such, most, various, wise, other,

 Nearest to but: however, and, although, while, pursuing, marmoset, glowing, components,

 Nearest to see: wants, atomic, charlotte, crimson, tanaka, caius, maine, scuttled,

As you can see, the system works reasonably well. For example, the nearest neighbours of "are" include were, have, is, be, include and do; anyone with a basic command of English will recognise that these words really are close to "are" in usage and meaning. Many other words, "their" among them, also look quite good. If you are interested, you are welcome to read my source code.

