What the labels in the data mean:
- PassengerId => passenger ID
- Pclass => passenger class (1st/2nd/3rd class cabin)
- Name => passenger name
- Sex => sex
- Age => age
- SibSp => number of siblings/spouses aboard
- Parch => number of parents/children aboard
- Ticket => ticket number
- Fare => fare paid
- Cabin => cabin
- Embarked => port of embarkation
Load the data and print its descriptive statistics:
import pandas

titanic = pandas.read_csv('titanic_train.csv')
print(titanic.describe())
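Note that describe() only counts non-missing values, so a column whose count falls short of the row total (Age shows 714 of 891 here) has missing entries. A more direct check, as a small sketch assuming the same titanic DataFrame:

# Number of missing values per column; Age (and Cabin/Embarked) show up here
print(titanic.isnull().sum())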
Age has missing values; fill them with the median:
titanic['Age'] = titanic['Age'].fillna(titanic['Age'].median())
print(titanic.describe())
Replace string-valued columns such as Sex and Embarked with numeric codes:
print(titanic['Sex'].unique())
titanic.loc[titanic['Sex'] == 'male', 'Sex'] = 0
titanic.loc[titanic['Sex'] == 'female', 'Sex'] = 1

print(titanic['Embarked'].unique())
# Embarked has a few missing values; fill with the most common port, 'S'
titanic['Embarked'] = titanic['Embarked'].fillna('S')
titanic.loc[titanic['Embarked'] == 'S', 'Embarked'] = 0
titanic.loc[titanic['Embarked'] == 'C', 'Embarked'] = 1
titanic.loc[titanic['Embarked'] == 'Q', 'Embarked'] = 2
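The chained .loc assignments work, but the same encoding can be written more compactly with Series.map. A sketch of the equivalent form, assuming the same column values as above:

# Equivalent one-step encodings using map(); values missing from the dict would become NaN
titanic['Sex'] = titanic['Sex'].map({'male': 0, 'female': 1})
titanic['Embarked'] = titanic['Embarked'].fillna('S').map({'S': 0, 'C': 1, 'Q': 2})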
Survived (rescued or not) serves as the label. Bring in cross-validation and fit a regression of the label on the features:
from sklearn.linear_model import LinearRegression
# Note: sklearn.cross_validation is the pre-0.18 API; newer versions use sklearn.model_selection
from sklearn.cross_validation import KFold

predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
alg = LinearRegression()
# Use KFold to split the training set into 3 folds for cross-validation
kf = KFold(titanic.shape[0], n_folds=3, random_state=1)

# Fit a regression model in each cross-validation round
predictions = []
for train, test in kf:
    # Feature columns for this fold's training rows
    train_predictors = titanic[predictors].iloc[train, :]
    # Survival labels for this fold's training rows
    train_target = titanic['Survived'].iloc[train]
    # Fit the linear regression on this fold's training data
    alg.fit(train_predictors, train_target)
    # Predict on the held-out test rows
    test_predictions = alg.predict(titanic[predictors].iloc[test, :])
    # Collect this fold's predictions
    predictions.append(test_predictions)
Use numpy to turn the predicted survival scores into a binary classification at the 50% cutoff, then compare the predictions against the true labels to get the accuracy:
import numpy as np

# Stitch the three folds' predictions back into one array
predictions = np.concatenate(predictions, axis=0)
# Binarize the scores (roughly in the 0-1 range) at the 0.5 cutoff
predictions[predictions > .5] = 1
predictions[predictions <= .5] = 0
# Compare the predictions with the true labels from the training set
accuracy = sum(predictions == titanic['Survived']) / len(predictions)
print(accuracy)
0.7833894500561167
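Thresholding a linear regression at 0.5 is a makeshift classifier; logistic regression models the survival probability directly and needs no manual cutoff. A sketch scoring it the same way, assuming the same pre-0.18 scikit-learn API used throughout this post:

from sklearn.linear_model import LogisticRegression
from sklearn import cross_validation

# Logistic regression outputs a true probability, so no manual 0.5 cutoff is needed
alg = LogisticRegression(random_state=1)
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=3)
print(scores.mean())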
Try a random forest instead to see whether the accuracy improves:
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier

# Feature set
predictors = ['Pclass', 'Sex', 'Age', 'SibSp', 'Parch', 'Fare', 'Embarked']
# Build a random forest with 10 trees; require at least 2 samples to split a node
# and at least 1 sample per leaf
alg = RandomForestClassifier(random_state=1, n_estimators=10, min_samples_split=2, min_samples_leaf=1)
# Run cross-validation again
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
# Evaluate the model: random forest classifier, passenger features, Survived as target, the KFold splits as cv
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=kf)
print(scores.mean())
0.7856341189674523
The accuracy improved slightly. Time to get creative and add some new features: family size = SibSp + Parch, the total length of the name, and the title that appears in the name.
# Family size is the sum of siblings/spouses and parents/children aboard
titanic['FamilySize'] = titanic['SibSp'] + titanic['Parch']
# Total character length of each passenger's name
titanic['NameLength'] = titanic['Name'].apply(lambda x: len(x))
Use a regular expression to pull the title out of each name, encode it numerically, and add it as the new feature 'Title':
import re

# Use a regex to grab the title from a name: the word immediately before a period
def get_title(name):
    title_search = re.search(r'([A-Za-z]+)\.', name)
    if title_search:
        return title_search.group(1)
    return ''

# Count passengers per title
titles = titanic['Name'].apply(get_title)
print(pandas.value_counts(titles))

# Encode the titles as numbers
title_mapping = {"Mr": 1, "Miss": 2, "Ms": 2, "Mrs": 3, "Master": 4, "Dr": 5, "Rev": 6,
                 "Major": 7, "Col": 7, "Capt": 7, "Mlle": 8, "Mme": 8,
                 "Don": 9, "Sir": 9, "Lady": 10, "Countess": 10, "Jonkheer": 10}
for k, v in title_mapping.items():
    titles[titles == k] = v
print(pandas.value_counts(titles))

# Add the encoded titles to the data as the new column Title
titanic['Title'] = titles
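A quick illustration of what get_title extracts, on two names in the dataset's "Last, Title. First" format:

print(get_title('Braund, Mr. Owen Harris'))   # -> 'Mr'
print(get_title('Heikkinen, Miss. Laina'))    # -> 'Miss'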
Import sklearn's feature selection module and use univariate statistical tests (the ANOVA F-value from f_classif) to score how strongly each feature relates to survival:
# Feature selection
import numpy as np
from sklearn import cross_validation
from sklearn.feature_selection import SelectKBest, f_classif
import matplotlib.pyplot as plt

predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "NameLength"]

# Score each feature with the univariate ANOVA F-test
selector = SelectKBest(f_classif, k=5)
selector.fit(titanic[predictors], titanic["Survived"])
# Convert p-values to scores: the smaller the p-value, the taller the bar
scores = -np.log10(selector.pvalues_)

# Plot the scores as a bar chart
plt.bar(range(len(predictors)), scores)
plt.xticks(range(len(predictors)), predictors, rotation='vertical')
plt.show()

# Keep the most influential features as the new feature set
predictors = ["Pclass", "Sex", "NameLength", "Title", "Fare"]
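Rather than reading the five winners off the chart by hand, the fitted selector can report them directly; a sketch using get_support, assuming the selector fitted above (the result should agree with the bar chart):

# Indices of the k best features according to the F-test, mapped back to the full 10-feature list
all_predictors = ["Pclass", "Sex", "Age", "SibSp", "Parch", "Fare", "Embarked", "FamilySize", "Title", "NameLength"]
best = selector.get_support(indices=True)
print([all_predictors[i] for i in best])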
The effort paid off: the two new features do have a large impact. Take the five most influential features as the new feature set, run the random forest again, and tune its parameters:
from sklearn import cross_validation
from sklearn.ensemble import RandomForestClassifier

# Reduced feature set
predictors = ["Pclass", "Sex", "NameLength", "Title", "Fare"]
# Build a random forest with 50 trees; require at least 4 samples to split a node
# and at least 10 samples per leaf
alg = RandomForestClassifier(random_state=1, n_estimators=50, min_samples_split=4, min_samples_leaf=10)
# Cross-validate again
kf = cross_validation.KFold(titanic.shape[0], n_folds=3, random_state=1)
# Evaluate: random forest classifier, passenger features, Survived as target, the KFold splits as cv
scores = cross_validation.cross_val_score(alg, titanic[predictors], titanic['Survived'], cv=kf)
print(scores.mean())
0.8159371492704826
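The n_estimators / min_samples_split / min_samples_leaf values above were picked by hand; a grid search can automate the tuning. A minimal sketch, assuming the same pre-0.18 scikit-learn as the rest of this post (newer versions move GridSearchCV to sklearn.model_selection):

from sklearn.grid_search import GridSearchCV

# Candidate values for each tuned parameter; the grid itself is an arbitrary choice
param_grid = {
    'n_estimators': [10, 50, 100],
    'min_samples_split': [2, 4, 8],
    'min_samples_leaf': [1, 2, 10],
}
search = GridSearchCV(RandomForestClassifier(random_state=1), param_grid, cv=kf)
search.fit(titanic[predictors], titanic['Survived'])
print(search.best_params_, search.best_score_)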
The accuracy went up again. As a final step, combine two models, gradient boosting and logistic regression, by averaging their predicted probabilities:
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.cross_validation import KFold

# Each entry pairs a model with the features it trains on
algorithms = [
    [GradientBoostingClassifier(random_state=1, n_estimators=25, max_depth=3),
     ["Pclass", "Sex", "NameLength", "Title", "Fare"]],
    [LogisticRegression(random_state=1),
     ["Pclass", "Sex", "NameLength", "Title", "Fare"]]
]

kf = KFold(titanic.shape[0], n_folds=3, random_state=1)
predictions = []
for train, test in kf:
    train_target = titanic["Survived"].iloc[train]
    full_test_predictions = []
    # Fit both models on this fold and collect their predicted probabilities
    for alg, predictors in algorithms:
        alg.fit(titanic[predictors].iloc[train, :], train_target)
        test_predictions = alg.predict_proba(titanic[predictors].iloc[test, :].astype(float))[:, 1]
        full_test_predictions.append(test_predictions)
    # Average the two models' probabilities, then binarize at 0.5
    test_predictions = (full_test_predictions[0] + full_test_predictions[1]) / 2
    test_predictions[test_predictions <= .5] = 0
    test_predictions[test_predictions > .5] = 1
    predictions.append(test_predictions)

predictions = np.concatenate(predictions, axis=0)
accuracy = sum(predictions == titanic["Survived"]) / len(predictions)
print(accuracy)
0.821548821549
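A plain average treats both models equally. Since gradient boosting is usually the stronger of the two on this data, one common variation weights its probabilities more heavily; a sketch of the averaging line inside the fold loop only, with the 3:1 weighting being an arbitrary choice:

# Weighted average: gradient boosting counts three times as much as logistic regression
test_predictions = (full_test_predictions[0] * 3 + full_test_predictions[1]) / 4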
This is the highest accuracy obtained in the end.