Voice Classification with Various Classification Algorithms (Age Group Recognition)

Extract features from the corpus, then classify them with various classification algorithms.

Corpus extraction and classification

TIMIT/DOC/SPKRINFO.TXT contains the speaker information, which serves as the basis for the class labels.
Define a method def initspeakerinfo(speakerinfo) that builds a speaker → age dictionary:

def initspeakerinfo(speakerinfo):
    ages = {}                                         # speaker -> age (avoids shadowing the dict builtin)
    f = open(speakerinfo, 'r')
    for line in f:
        linelist = line.strip().split('  ')
        recorddate = linelist[4].strip().split('/')   # recording date, MM/DD/YY
        birthdate = linelist[5].strip().split('/')    # birth date, MM/DD/YY
        if recorddate[2] == "??" or birthdate[2] == "??":
            age = 0                                   # unknown year: mark age as 0
        else:
            # approximate each date in days, then subtract the whole birth-date term
            recorddays = int(recorddate[2])*365 + int(recorddate[0])*30 + int(recorddate[1])
            birthdays = int(birthdate[2])*365 + int(birthdate[0])*30 + int(birthdate[1])
            age = (recorddays - birthdays)/365.0
        ages[linelist[1]+linelist[0]] = age           # key: sex code + speaker ID (matches the TIMIT speaker directory names)
    f.close()
    return ages

For example, a three-way (or two-way) labeling:

def getclass(filename, ages):
    age = ages[filename]
    if age == 0:          # unknown age
        return "0"
    if age <= 25:         # young
        return "-1"
    elif age <= 45:       # middle-aged
        return "0"
    else:                 # older
        return "+1"

Feature representation

MFCC and i-vector features were extracted earlier. The MFCC is a 38×n matrix, where 38 is the MFCC dimension and n is the number of frames in an utterance, while the i-vector is a 1×200 vector. Before classification the variable-length MFCC has to be reduced to a fixed-length vector; the simplest approach is to average the 38×n matrix over its n frames and then normalize the result.
Define a method def initavgmfcc(avgmfccname, mfccpath) that reads the MFCC files under mfccpath, averages and normalizes each one, and writes the results into a single file:

import os

def initavgmfcc(avgmfccname, mfccpath):
    f = open(avgmfccname, 'w')
    for filename in os.listdir(mfccpath):
        fo = open(os.path.join(mfccpath, filename), 'r')
        dimen = 13                      # MFCC dimension per frame
        avgmfcc = [0.0]*dimen
        length = 0                      # frame count
        for line in fo:
            linelist = line.strip().split(' ')
            for i in range(dimen):
                avgmfcc[i] += float(linelist[i])
            length += 1
        # average over all frames
        for i in range(dimen):
            avgmfcc[i] /= length
        # min-max normalization into [0, 1]
        listmin = min(avgmfcc)
        listmax = max(avgmfcc)
        for i in range(dimen):
            avgmfcc[i] = str((avgmfcc[i]-listmin)/(listmax-listmin))
        f.write(filename+" "+" ".join(avgmfcc)+"\n")
        print filename+" avg over"
        fo.close()
    f.close()

Define a method def initiv(ivname, ivpath) that reads the i-vector files under ivpath and writes them into a single file (plus a min-max-normalized copy):

def initiv(ivname, ivpath):
    f = open(ivname, 'w')
    avgf = open(ivname+"avg", "w")           # normalized copy
    for filename in os.listdir(ivpath):
        fo = open(os.path.join(ivpath, filename), 'r')
        dimen = 200                           # i-vector dimension
        for line in fo:
            linelist = line.strip().split(' ')
            if len(linelist) == dimen:
                f.write(filename+" "+" ".join(linelist)+"\n")
                # min-max normalize the i-vector into [0, 1]
                values = map(float, linelist)
                listmin = min(values)
                listmax = max(values)
                normiv = [str((v-listmin)/(listmax-listmin)) for v in values]
                avgf.write(filename+" "+" ".join(normiv)+"\n")
        fo.close()
    f.close()
    avgf.close()
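
Both helpers are called once per data split; a minimal sketch, with purely illustrative paths:

# hypothetical locations of the per-utterance feature files
initavgmfcc("avgmfcc-train", r"E:\TIMIT\mfcc\train")
initiv("iv-train", r"E:\TIMIT\iv\train")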

PS: https://www.zhihu.com/question/20455227 explains normalization.
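For reference, the min-max normalization used above maps each component x to (x - min)/(max - min), so every feature ends up in [0, 1].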

Classification with LIBSVM

Installation

Follow http://blog.csdn.net/lqhbupt/article/details/8599295 to install LIBSVM.
PS: the 64-bit build is a little more troublesome, but it can likewise be built with nmake.

LIBSVM format

http://blog.csdn.net/kobesdu/article/details/8944851 describes the LIBSVM format and how to generate it.
In short, the format is:

+1 1:0.533355514244 2:0.225956771932 3:0.551555751325 4:0.448831840291 5:0.732958158188 6:0.516967914119 ...
-1 1:0.723092649707 2:0.352547706883 3:0.524416372722 4:0.683881004712 5:0.464490812227 6:0.70279542324 ...
...

In fact a few lines of Python are enough; a sketch follows the list below.
Finally, a method def initFormat(formatname,avgmfccname,dict,dimen) generates the LIBSVM-format files:

  • FormatData-iv-train
  • FormatData-iv-test
  • FormatData-mfcc-train
  • FormatData-mfcc-test
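
The post does not show the body of initFormat; the following is a minimal sketch consistent with the files produced above, assuming each input line is "filename v1 ... v_dimen" and that the first five characters of the file name form the sex + speaker ID key used in the age dictionary:

def initFormat(formatname, avgmfccname, ages, dimen):
    # turn "filename v1 v2 ... v_dimen" lines into LIBSVM format:
    # "<label> 1:v1 2:v2 ... dimen:v_dimen"
    fout = open(formatname, 'w')
    fin = open(avgmfccname, 'r')
    for line in fin:
        linelist = line.strip().split(' ')
        label = getclass(linelist[0][:5], ages)   # assumed key layout
        feats = " ".join("%d:%s" % (i+1, linelist[i+1]) for i in range(dimen))
        fout.write(label+" "+feats+"\n")
    fin.close()
    fout.close()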

Parameter search

Parameter search can be run with libsvm-3.21/tools/grid.py:

E:\libsvm-3.21\tools>grid.py
Usage: grid.py [grid_options] [svm_options] dataset

grid_options :
-log2c {begin,end,step | "null"} : set the range of c (default -5,15,2)
    begin,end,step -- c_range = 2^{begin,...,begin+k*step,...,end}
    "null"         -- do not grid with c
-log2g {begin,end,step | "null"} : set the range of g (default 3,-15,-2)
    begin,end,step -- g_range = 2^{begin,...,begin+k*step,...,end}
    "null"         -- do not grid with g
-v n : n-fold cross validation (default 5)
-svmtrain pathname : set svm executable path and name
-gnuplot {pathname | "null"} :
    pathname -- set gnuplot executable path and name
    "null"   -- do not plot
-out {pathname | "null"} : (default dataset.out)
    pathname -- set output file path and name
    "null"   -- do not output file
-png pathname : set graphic output file path and name (default dataset.png)
-resume [pathname] : resume the grid task using an existing output file (default pathname is dataset.out)
    This is experimental. Try this option only if some parameters have been checked for the SAME data.

The options are shown above; the search is used to find the penalty parameter C and the RBF kernel parameter gamma (see http://m.blog.csdn.net/article/details?id=46386201 for background).
The search works by n-fold cross validation (-v n): the data is split into n folds, and each fold in turn serves as the test set while the remaining n-1 folds are used for training. C and gamma range over the grids set by -log2c (default: 2^-5 to 2^15, stepping the exponent by 2) and -log2g (default: 2^3 down to 2^-15, stepping the exponent by -2).
Within those ranges grid.py does a simple enumeration: with numC candidate values for C and numG for gamma, a total of numC*numG*n training runs are performed. Every run reports its accuracy, and the (C, gamma) pair with the highest cross-validation accuracy is taken as the result of the search. An example invocation follows.
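
For instance, a 5-fold search over the default grids on the MFCC training file generated above could be launched like this (assuming the file has been copied into the tools directory; -gnuplot null skips plotting, as documented above):

E:\libsvm-3.21\tools>grid.py -log2c -5,15,2 -log2g 3,-15,-2 -v 5 -gnuplot null FormatData-mfcc-train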

LIBSVM training and prediction

from svmutil import *

# read the LIBSVM-format data
train_y, train_x = svm_read_problem('../FormatData-train')
test_y, test_x = svm_read_problem('../FormatData-test')

# train with the (C, gamma) found by the grid search, then predict
model = svm_train(train_y, train_x, '-c 112.0 -g 0.000125')
p_label, p_acc, p_val = svm_predict(test_y, test_x, model)
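
Here svm_predict prints the test accuracy; p_label holds the predicted labels, p_acc the tuple (accuracy, mean squared error, squared correlation coefficient), and p_val the decision values.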

Classification with scikit-learn

scikit-learn is a third-party Python library. It offers many classification methods behind a simple, uniform interface; some familiarity with the classification methods, Python, and numpy is needed.

LDA/PLDA/PCA processing

scikit-learn also provides LDA, so the earlier LIBSVM pipeline can be upgraded to:

from svmutil import *
from sklearn.datasets import load_svmlight_file
from sklearn.lda import LDA

# read the data (mfcc/ivector) as dense arrays; svm_read_problem returns
# dicts, which LDA cannot consume directly
train_x, train_y = load_svmlight_file('../FormatData-mfcc-train')
test_x, test_y = load_svmlight_file('../FormatData-mfcc-test')
train_x = train_x.toarray()
test_x = test_x.toarray()

# fit LDA on the training data only, then project both sets
# (LDA yields at most n_classes-1 discriminant components)
clf = LDA(solver='eigen', n_components=100)
train_x2 = clf.fit(train_x, train_y).transform(train_x)
test_x2 = clf.transform(test_x)

model = svm_train(train_y.tolist(), train_x2.tolist(), '-c 8192.0 -g 0.05')
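
Note that the projection is fitted on the training data only and then applied unchanged to the test set, so no test-set label information leaks into the features.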

scikit-learn classification

#!/usr/bin/env python
# -*- coding: utf-8 -*-
import sys
import os
import time
from sklearn import metrics
import numpy as np
import cPickle as pickle
from sklearn.datasets import load_svmlight_file
from sklearn.lda import LDA
from sklearn.decomposition import PCA
reload(sys)
sys.setdefaultencoding('utf8')

# Multinomial Naive Bayes Classifier
def naive_bayes_classifier(train_x, train_y):
    from sklearn.naive_bayes import MultinomialNB
    model = MultinomialNB(alpha=0.01)
    model.fit(train_x, train_y)
    return model


# KNN Classifier
def knn_classifier(train_x, train_y):
    from sklearn.neighbors import KNeighborsClassifier
    model = KNeighborsClassifier()
    model.fit(train_x, train_y)
    return model


# Logistic Regression Classifier
def logistic_regression_classifier(train_x, train_y):
    from sklearn.linear_model import LogisticRegression
    model = LogisticRegression(penalty='l2')
    model.fit(train_x, train_y)
    return model


# Random Forest Classifier
def random_forest_classifier(train_x, train_y):
    from sklearn.ensemble import RandomForestClassifier
    model = RandomForestClassifier(n_estimators=8)
    model.fit(train_x, train_y)
    return model


# Decision Tree Classifier
def decision_tree_classifier(train_x, train_y):
    from sklearn import tree
    model = tree.DecisionTreeClassifier()
    model.fit(train_x, train_y)
    return model


# GBDT(Gradient Boosting Decision Tree) Classifier
def gradient_boosting_classifier(train_x, train_y):
    from sklearn.ensemble import GradientBoostingClassifier
    model = GradientBoostingClassifier(n_estimators=200)
    model.fit(train_x, train_y)
    return model


# SVM Classifier
def svm_classifier(train_x, train_y):
    from sklearn.svm import SVC
    model = SVC(kernel='rbf', probability=True)
    model.fit(train_x, train_y)
    return model

# SVM Classifier using cross validation
def svm_cross_validation(train_x, train_y):
    from sklearn.grid_search import GridSearchCV
    from sklearn.svm import SVC
    model = SVC(kernel='rbf', probability=True)
    param_grid = {'C': [1e-3, 1e-2, 1e-1, 1, 10, 100, 1000], 'gamma': [0.001, 0.0001]}
    grid_search = GridSearchCV(model, param_grid, n_jobs = 1, verbose=1)
    grid_search.fit(train_x, train_y)
    best_parameters = grid_search.best_estimator_.get_params()
    for para, val in best_parameters.items():
        print para, val
    model = SVC(kernel='rbf', C=best_parameters['C'], gamma=best_parameters['gamma'], probability=True)
    model.fit(train_x, train_y)
    return model

def read_data(data_file):
    # each line: "<label> <v1> <v2> ..." (plain space-separated values,
    # not the sparse index:value LIBSVM format)
    f = open(data_file+"-train")
    x = []
    y = []
    for line in f:
        linelist = map(float, line.strip().split(' '))
        x.append(linelist[1:])
        y.append(linelist[0])
    train_x = np.array(x)
    train_y = np.array(y)
    ff = open(data_file+"-test")
    xx = []
    yy = []
    for line in ff:
        linelist = map(float, line.strip().split(' '))
        xx.append(linelist[1:])
        yy.append(linelist[0])
    test_x = np.array(xx)
    test_y = np.array(yy)
    return train_x, train_y, test_x, test_y


if __name__ == '__main__':
    data_file = "./data/FormatData-mfcc"
    thresh = 0.5
    model_save_file = None
    model_save = {}

    test_classifiers = ['KNN', 'LR', 'RF', 'DT', 'SVM', 'GBDT']
    classifiers = {#'NB': naive_bayes_classifier,
                   'KNN': knn_classifier,
                   'LR': logistic_regression_classifier,
                   'RF': random_forest_classifier,
                   'DT': decision_tree_classifier,
                   'SVM': svm_classifier,
                   'SVMCV': svm_cross_validation,
                   'GBDT': gradient_boosting_classifier,
    }

    print 'reading training and testing data...'
    train_x, train_y, test_x, test_y = read_data(data_file)
    num_train, num_feat = train_x.shape
    num_test, num_feat = test_x.shape
    is_binary_class = (len(np.unique(train_y)) == 2)
    print is_binary_class
    print '******************** Data Info *********************'
    print '#training data: %d, #testing_data: %d, dimension: %d' % (num_train, num_test, num_feat)

    for classifier in test_classifiers:
        print '******************* %s ********************' % classifier
        start_time = time.time()
        model = classifiers[classifier](train_x, train_y)
        print 'training took %fs!' % (time.time() - start_time)
        predict = model.predict(test_x)
        if model_save_file is not None:
            model_save[classifier] = model
        if is_binary_class:
            precision = metrics.precision_score(test_y, predict)
            recall = metrics.recall_score(test_y, predict)
            print 'precision: %.2f%%, recall: %.2f%%' % (100 * precision, 100 * recall)
        accuracy = metrics.accuracy_score(test_y, predict)
        print 'accuracy: %.2f%%' % (100 * accuracy)

    if model_save_file is not None:
        pickle.dump(model_save, open(model_save_file, 'wb'))



