A Summary of Python Data Analysis Basics


I. Data Reading

1. Reading and Writing Database Data

Read functions:

  • pandas.read_sql_table(table_name, con, schema=None, index_col=None, coerce_float=True, columns=None)
  • pandas.read_sql_query(sql, con, index_col=None, coerce_float=True)
  • pandas.read_sql(sql, con, index_col=None, coerce_float=True, columns=None)
  • sqlalchemy.create_engine('dialect+driver://username:password@host:port/database?charset=encoding')

Write functions:

  • DataFrame.to_sql(name, con, schema=None, if_exists='fail', index=True, index_label=None, dtype=None)
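A minimal sketch of reading from and writing back to a MySQL database; the connection string, table, and query below are placeholders, not a real setup:

import pandas as pd
from sqlalchemy import create_engine

# Placeholder connection: dialect+driver://user:password@host:port/database
engine = create_engine('mysql+pymysql://user:password@127.0.0.1:3306/testdb?charset=utf8')

detail = pd.read_sql_table('order_detail', con=engine)                          # read a whole table
sample = pd.read_sql_query('SELECT * FROM order_detail LIMIT 10', con=engine)   # read via an SQL query
detail.to_sql('order_detail_copy', con=engine, index=False, if_exists='replace')  # write back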

 

2. Reading and Writing Text/CSV Files

Read functions:

  • pandas.read_table(filepath_or_buffer, sep='\t', header='infer', names=None, index_col=None, dtype=None, engine=None, nrows=None)
  • pandas.read_csv(filepath_or_buffer, sep=',', header='infer', names=None, index_col=None, dtype=None, engine=None, nrows=None)

Write functions:

  • DataFrame.to_csv(path_or_buf=None, sep=',', na_rep='', columns=None, header=True, index=True, index_label=None, mode='w', encoding=None)
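A quick sketch of round-tripping a CSV file (the file paths, separator, and encoding are only illustrative):

import pandas as pd

df = pd.read_csv('../data/example.csv', sep=',', encoding='utf-8', nrows=100)
df.to_csv('../tmp/example_copy.csv', sep=',', index=False, encoding='utf-8')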

 

3. Reading and Writing Excel (xls/xlsx) Data

Read functions:

  • pandas.read_excel(io, sheetname=0, header=0, index_col=None, names=None, dtype=None)

Write functions:

  • DataFrame.to_excel(excel_writer, sheet_name='Sheet1', na_rep='', header=True, index=True, index_label=None, encoding=None)
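A sketch of reading and writing a workbook (the paths and sheet name are hypothetical; older pandas spells the read parameter sheetname, newer versions sheet_name):

import pandas as pd

users = pd.read_excel('../data/users.xlsx')  # first sheet by default
users.to_excel('../tmp/users_copy.xlsx', sheet_name='Sheet1', index=False)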

 

4. Reading Clipboard Data:

  • pandas.read_clipboard()

 

II. Data Preprocessing

1. Data Cleaning

Handling duplicate data

  1. Duplicate records (rows):

    pandas.DataFrame(Series).drop_duplicates(self, subset=None, keep='first', inplace=False)

  2. Duplicate features (columns):

  • General approach (works for any dtype): compare every pair of columns for exact equality
import pandas as pd

def FeatureEquals(df):
    '''Return a boolean DataFrame where cell (i, j) is True if columns i and j are identical.'''
    dfEquals = pd.DataFrame([], columns=df.columns, index=df.columns)
    for i in df.columns:
        for j in df.columns:
            dfEquals.loc[i, j] = df.loc[:, i].equals(df.loc[:, j])
    return dfEquals
  • Numeric features: drop one column of every highly correlated pair
def drop_features(data, way='pearson', assoRate=1.0):
    '''
    For every pair of columns whose correlation is >= assoRate, record the second
    column of the pair, so that duplicated numeric features can be dropped.
    data: DataFrame, no default
    way: correlation method passed to DataFrame.corr, default 'pearson'
    assoRate: similarity threshold, default 1.0
    '''
    assoMat = data.corr(method=way)
    delCol = []
    length = len(assoMat)
    for i in range(length):
        for j in range(i+1, length):
            if assoMat.iloc[i, j] >= assoRate:
                delCol.append(assoMat.columns[j])
    return delCol
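A usage sketch tying the helpers above together on a toy DataFrame:

import pandas as pd

# Toy data with a duplicated row and two identical columns
df = pd.DataFrame({'a': [1, 2, 2, 4], 'b': [1, 2, 2, 4], 'c': [3, 5, 5, 2]})

df = df.drop_duplicates()              # 1. drop duplicated rows
print(FeatureEquals(df))               # 2a. pairwise column-equality matrix
dup_cols = drop_features(df)           # 2b. numeric columns correlated above the threshold
df = df.drop(columns=set(dup_cols))    # keep one column from each duplicated pair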

 

Handling missing values

Identifying missing values

  • DataFrame.isnull()
  • DataFrame.notnull()
  • DataFrame.isna()
  • DataFrame.notna()

Treating missing values

  • Drop: DataFrame.dropna(self, axis=0, how='any', thresh=None, subset=None, inplace=False)

  • Fill with a fixed value: DataFrame.fillna(value=None, method=None, axis=None, inplace=False, limit=None)

  • Interpolate: DataFrame.interpolate(method='linear', axis=0, limit=None, inplace=False, limit_direction='forward', limit_area=None, downcast=None, **kwargs)
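A small sketch on toy data showing the three strategies side by side:

import numpy as np
import pandas as pd

df = pd.DataFrame({'x': [1.0, np.nan, 3.0], 'y': ['a', None, 'c']})

print(df.isnull().sum())                             # count missing values per column
df_drop = df.dropna(how='any')                       # drop rows containing any missing value
df_fill = df.fillna({'y': 'unknown'})                # fill a column with a constant
df_interp = df[['x']].interpolate(method='linear')   # interpolate numeric columns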

     

Handling outliers

  • The 3σ rule
import numpy as np

def outRange(Ser1):
    '''Return the values of Ser1 lying more than 3 standard deviations from the mean.'''
    boolInd = (Ser1 < Ser1.mean() - 3*Ser1.std()) | (Ser1 > Ser1.mean() + 3*Ser1.std())
    index = np.arange(Ser1.shape[0])[boolInd]
    outrange = Ser1.iloc[index]
    return outrange

Note: this method is only appropriate for (approximately) normally distributed data.

  • Box-plot (IQR) analysis
def boxOutRange(Ser):
    '''
    Ser: the column of a DataFrame to screen for outliers
    '''
    Low = Ser.quantile(0.25) - 1.5*(Ser.quantile(0.75) - Ser.quantile(0.25))
    Up = Ser.quantile(0.75) + 1.5*(Ser.quantile(0.75) - Ser.quantile(0.25))
    index = (Ser < Low) | (Ser > Up)
    Outlier = Ser.loc[index]
    return Outlier
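Applying both detectors to a simulated Series with one injected outlier:

import numpy as np
import pandas as pd

np.random.seed(0)
ser = pd.Series(np.random.normal(0, 1, 1000))
ser.iloc[0] = 10              # inject an artificial outlier

print(outRange(ser))          # 3σ rule (normal data only)
print(boxOutRange(ser))       # box-plot / IQR rule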

2. Merging Data

  • Stacking: pandas.concat(objs, axis=0, join='outer', join_axes=None, ignore_index=False, keys=None, levels=None, names=None, verify_integrity=False, copy=True)
  • Key-based (primary-key) merge: pandas.merge(left, right, how='inner', on=None, left_on=None, right_on=None, left_index=False, right_index=False, sort=False, suffixes=('_x', '_y'), copy=True, indicator=False)
  • Overlapping (patch) merge: pandas.DataFrame.combine_first(self, other)
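A toy illustration of the three merge styles:

import pandas as pd

left = pd.DataFrame({'key': [1, 2], 'a': ['x', 'y']})
right = pd.DataFrame({'key': [1, 2], 'b': ['u', 'v']})

stacked = pd.concat([left, right], axis=0, ignore_index=True)  # stack rows (columns are unioned)
merged = pd.merge(left, right, how='inner', on='key')          # join on the key column
patched = left.combine_first(right)                            # fill left's missing cells from right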

3. Data Transformation

  • Dummy variables: pandas.get_dummies(data, prefix=None, prefix_sep='_', dummy_na=False, columns=None, sparse=False, drop_first=False)
  • Discretization (binning): pandas.cut(x, bins, right=True, labels=None, retbins=False, precision=3, include_lowest=False)
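For example, encoding a categorical column and binning a numeric one:

import pandas as pd

level = pd.Series(['low', 'high', 'medium', 'high'])
dummies = pd.get_dummies(level, prefix='level')       # one dummy column per category

ages = pd.Series([5, 21, 34, 47, 62])
groups = pd.cut(ages, bins=[0, 18, 40, 65], labels=['child', 'adult', 'senior'])  # 3 bins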

4. Data Standardization

  • Z-score standardization: sklearn.preprocessing.StandardScaler
  • Min-max (range) scaling: sklearn.preprocessing.MinMaxScaler
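Both scalers follow the same fit/transform pattern, e.g.:

import numpy as np
from sklearn.preprocessing import MinMaxScaler, StandardScaler

X = np.array([[1.0, 200.0], [2.0, 300.0], [3.0, 400.0]])

X_std = StandardScaler().fit_transform(X)  # zero mean, unit variance per column
X_mm = MinMaxScaler().fit_transform(X)     # rescaled to [0, 1] per column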

III. Model Building

1. Train/Test Split

sklearn.model_selection.train_test_split(*arrays, **options)
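For example (the test size and random seed below are arbitrary):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=42)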

2. Dimensionality Reduction

class sklearn.decomposition.PCA(n_components=None, copy=True, whiten=False, svd_solver='auto', tol=0.0, iterated_power='auto', random_state=None)
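A short sketch projecting the iris data onto two principal components:

from sklearn.datasets import load_iris
from sklearn.decomposition import PCA

X, _ = load_iris(return_X_y=True)
pca = PCA(n_components=2)
X_2d = pca.fit_transform(X)            # 4 features reduced to 2 components
print(pca.explained_variance_ratio_)   # share of variance kept by each component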

3. Cross-Validation

sklearn.model_selection.cross_validate(estimator, X, y=None, groups=None, scoring=None, cv=None, n_jobs=1, verbose=0, fit_params=None, pre_dispatch='2*n_jobs', return_train_score='warn')
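A minimal sketch of 5-fold cross-validation with an arbitrarily chosen estimator:

from sklearn.datasets import load_iris
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_validate

X, y = load_iris(return_X_y=True)
scores = cross_validate(LogisticRegression(), X, y, cv=5, scoring='accuracy')
print(scores['test_score'])   # one accuracy value per fold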

4. Model Training and Prediction

  • Supervised models
# lr is any instantiated estimator, e.g. lr = LogisticRegression()
clf = lr.fit(X_train, y_train)
clf.predict(X_test)

5. Clustering

Common algorithms:

  • K-means: class sklearn.cluster.KMeans(n_clusters=8, init='k-means++', n_init=10, max_iter=300, tol=0.0001, precompute_distances='auto', verbose=0, random_state=None, copy_x=True, n_jobs=1, algorithm='auto')
  • DBSCAN (density-based clustering): class sklearn.cluster.DBSCAN(eps=0.5, min_samples=5, metric='euclidean', metric_params=None, algorithm='auto', leaf_size=30, p=None, n_jobs=1)
  • BIRCH (hierarchical clustering): class sklearn.cluster.Birch(threshold=0.5, branching_factor=50, n_clusters=3, compute_labels=True, copy=True)

Evaluation:

  • Silhouette coefficient: sklearn.metrics.silhouette_score(X, labels, metric='euclidean', sample_size=None, random_state=None, **kwds)
  • calinski_harabaz_score:sklearn.metrics.calinski_harabaz_score(X, labels)
  • completeness_score:sklearn.metrics.completeness_score(labels_true, labels_pred)
  • fowlkes_mallows_score:sklearn.metrics.fowlkes_mallows_score(labels_true, labels_pred, sparse=False)
  • homogeneity_completeness_v_measure:sklearn.metrics.homogeneity_completeness_v_measure(labels_true, labels_pred)
  • adjusted_rand_score:sklearn.metrics.adjusted_rand_score(labels_true, labels_pred)
  • homogeneity_score:sklearn.metrics.homogeneity_score(labels_true, labels_pred)
  • mutual_info_score:sklearn.metrics.mutual_info_score(labels_true, labels_pred, contingency=None)
  • normalized_mutual_info_score:sklearn.metrics.normalized_mutual_info_score(labels_true, labels_pred)
  • v_measure_score:sklearn.metrics.v_measure_score(labels_true, labels_pred)

Note: every metric above that takes a labels_true parameter requires ground-truth labels.
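A brief sketch of clustering with k-means and scoring the result with a label-free metric (the number of clusters is chosen arbitrarily here):

from sklearn.cluster import KMeans
from sklearn.datasets import load_iris
from sklearn.metrics import silhouette_score
from sklearn.preprocessing import StandardScaler

X, _ = load_iris(return_X_y=True)
X = StandardScaler().fit_transform(X)

labels = KMeans(n_clusters=3, random_state=42).fit_predict(X)
print(silhouette_score(X, labels))   # internal metric; no ground-truth labels needed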

6. Classification

Common algorithms:

  • AdaBoost classifier: class sklearn.ensemble.AdaBoostClassifier(base_estimator=None, n_estimators=50, learning_rate=1.0, algorithm='SAMME.R', random_state=None)
  • Gradient boosting classifier: class sklearn.ensemble.GradientBoostingClassifier(loss='deviance', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, verbose=0, max_leaf_nodes=None, warm_start=False, presort='auto')
  • Random forest classifier: class sklearn.ensemble.RandomForestClassifier(n_estimators=10, criterion='gini', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False, class_weight=None)
  • Gaussian process classifier: class sklearn.gaussian_process.GaussianProcessClassifier(kernel=None, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, max_iter_predict=100, warm_start=False, copy_X_train=True, random_state=None, multi_class='one_vs_rest', n_jobs=1)
  • Logistic regression: class sklearn.linear_model.LogisticRegression(penalty='l2', dual=False, tol=0.0001, C=1.0, fit_intercept=True, intercept_scaling=1, class_weight=None, random_state=None, solver='liblinear', max_iter=100, multi_class='ovr', verbose=0, warm_start=False, n_jobs=1)
  • KNN: class sklearn.neighbors.KNeighborsClassifier(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1, **kwargs)
  • Multi-layer perceptron (neural network): class sklearn.neural_network.MLPClassifier(hidden_layer_sizes=(100, ), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
  • SVM: class sklearn.svm.SVC(C=1.0, kernel='rbf', degree=3, gamma='auto', coef0=0.0, shrinking=True, probability=False, tol=0.001, cache_size=200, class_weight=None, verbose=False, max_iter=-1, decision_function_shape='ovr', random_state=None)
  • Decision tree: class sklearn.tree.DecisionTreeClassifier(criterion='gini', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, class_weight=None, presort=False)

Evaluation:

  • Accuracy: sklearn.metrics.accuracy_score(y_true, y_pred, normalize=True, sample_weight=None)
  • AUC: sklearn.metrics.auc(x, y, reorder=False)
  • Classification report: sklearn.metrics.classification_report(y_true, y_pred, labels=None, target_names=None, sample_weight=None, digits=2)
  • Confusion matrix: sklearn.metrics.confusion_matrix(y_true, y_pred, labels=None, sample_weight=None)
  • Cohen's kappa: sklearn.metrics.cohen_kappa_score(y1, y2, labels=None, weights=None, sample_weight=None)
  • F1 score: sklearn.metrics.f1_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)
  • Precision: sklearn.metrics.precision_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)
  • Recall: sklearn.metrics.recall_score(y_true, y_pred, labels=None, pos_label=1, average='binary', sample_weight=None)
  • ROC curve: sklearn.metrics.roc_curve(y_true, y_score, pos_label=None, sample_weight=None, drop_intermediate=True)
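A short sketch training one of the classifiers above and computing the main metrics:

from sklearn.datasets import load_breast_cancer
from sklearn.metrics import accuracy_score, classification_report, confusion_matrix
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

clf = DecisionTreeClassifier(random_state=42).fit(X_train, y_train)
y_pred = clf.predict(X_test)

print(accuracy_score(y_test, y_pred))
print(confusion_matrix(y_test, y_pred))
print(classification_report(y_test, y_pred))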

7. Regression

Common algorithms:

  • AdaBoost regressor: class sklearn.ensemble.AdaBoostRegressor(base_estimator=None, n_estimators=50, learning_rate=1.0, loss='linear', random_state=None)
  • Gradient boosting regressor: class sklearn.ensemble.GradientBoostingRegressor(loss='ls', learning_rate=0.1, n_estimators=100, subsample=1.0, criterion='friedman_mse', min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_depth=3, min_impurity_decrease=0.0, min_impurity_split=None, init=None, random_state=None, max_features=None, alpha=0.9, verbose=0, max_leaf_nodes=None, warm_start=False, presort='auto')
  • Random forest regressor: class sklearn.ensemble.RandomForestRegressor(n_estimators=10, criterion='mse', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features='auto', max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, bootstrap=True, oob_score=False, n_jobs=1, random_state=None, verbose=0, warm_start=False)
  • Gaussian process regressor: class sklearn.gaussian_process.GaussianProcessRegressor(kernel=None, alpha=1e-10, optimizer='fmin_l_bfgs_b', n_restarts_optimizer=0, normalize_y=False, copy_X_train=True, random_state=None)
  • Isotonic regression: class sklearn.isotonic.IsotonicRegression(y_min=None, y_max=None, increasing=True, out_of_bounds='nan')
  • Lasso regression: class sklearn.linear_model.Lasso(alpha=1.0, fit_intercept=True, normalize=False, precompute=False, copy_X=True, max_iter=1000, tol=0.0001, warm_start=False, positive=False, random_state=None, selection='cyclic')
  • Linear regression: class sklearn.linear_model.LinearRegression(fit_intercept=True, normalize=False, copy_X=True, n_jobs=1)
  • Ridge regression: class sklearn.linear_model.Ridge(alpha=1.0, fit_intercept=True, normalize=False, copy_X=True, max_iter=None, tol=0.001, solver='auto', random_state=None)
  • KNN regressor: class sklearn.neighbors.KNeighborsRegressor(n_neighbors=5, weights='uniform', algorithm='auto', leaf_size=30, p=2, metric='minkowski', metric_params=None, n_jobs=1, **kwargs)
  • Multi-layer perceptron regressor: class sklearn.neural_network.MLPRegressor(hidden_layer_sizes=(100, ), activation='relu', solver='adam', alpha=0.0001, batch_size='auto', learning_rate='constant', learning_rate_init=0.001, power_t=0.5, max_iter=200, shuffle=True, random_state=None, tol=0.0001, verbose=False, warm_start=False, momentum=0.9, nesterovs_momentum=True, early_stopping=False, validation_fraction=0.1, beta_1=0.9, beta_2=0.999, epsilon=1e-08)
  • SVM regressor: class sklearn.svm.SVR(kernel='rbf', degree=3, gamma='auto', coef0=0.0, tol=0.001, C=1.0, epsilon=0.1, shrinking=True, cache_size=200, verbose=False, max_iter=-1)
  • Decision tree regressor: class sklearn.tree.DecisionTreeRegressor(criterion='mse', splitter='best', max_depth=None, min_samples_split=2, min_samples_leaf=1, min_weight_fraction_leaf=0.0, max_features=None, random_state=None, max_leaf_nodes=None, min_impurity_decrease=0.0, min_impurity_split=None, presort=False)

Evaluation:

  • Explained variance: sklearn.metrics.explained_variance_score(y_true, y_pred, sample_weight=None, multioutput='uniform_average')
  • Mean absolute error: sklearn.metrics.mean_absolute_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average')
  • Mean squared error: sklearn.metrics.mean_squared_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average')
  • Mean squared log error: sklearn.metrics.mean_squared_log_error(y_true, y_pred, sample_weight=None, multioutput='uniform_average')
  • Median absolute error: sklearn.metrics.median_absolute_error(y_true, y_pred)
  • R² score: sklearn.metrics.r2_score(y_true, y_pred, sample_weight=None, multioutput='uniform_average')
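A short sketch fitting a linear regression and scoring it with the metrics above:

from sklearn.datasets import load_diabetes
from sklearn.linear_model import LinearRegression
from sklearn.metrics import mean_absolute_error, mean_squared_error, r2_score
from sklearn.model_selection import train_test_split

X, y = load_diabetes(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=42)

reg = LinearRegression().fit(X_train, y_train)
y_pred = reg.predict(X_test)

print(mean_absolute_error(y_test, y_pred))
print(mean_squared_error(y_test, y_pred))
print(r2_score(y_test, y_pred))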

8. Demo

from sklearn import neighbors, datasets, preprocessing
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
iris = datasets.load_iris()
X, y = iris.data, iris.target
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=33)
scaler = preprocessing.StandardScaler().fit(X_train)
X_train = scaler.transform(X_train)
X_test = scaler.transform(X_test)
knn = neighbors.KNeighborsClassifier(n_neighbors=5)
knn.fit(X_train, y_train)
y_pred = knn.predict(X_test)
accuracy_score(y_test, y_pred)

IV. Plotting

1. Creating a Figure or Subplots

  • plt.figure: creates a blank figure; the figure size and resolution (DPI) can be specified.
  • figure.add_subplot: creates and selects a subplot; the numbers of rows and columns and the index of the selected subplot can be specified.

2. Drawing

  • plt.title: adds a title to the current figure; name, position, colour, font size, etc. can be specified.
  • plt.xlabel: adds an x-axis label to the current figure; position, colour, font size, etc. can be specified.
  • plt.ylabel: adds a y-axis label to the current figure; position, colour, font size, etc. can be specified.
  • plt.xlim: sets the x-axis range of the current figure; only a numeric interval can be given, not string labels.
  • plt.ylim: sets the y-axis range of the current figure; only a numeric interval can be given, not string labels.
  • plt.xticks: sets the number and values of the x-axis ticks.
  • plt.yticks: sets the number and values of the y-axis ticks.
  • plt.legend: adds a legend to the current figure; its size, position, and labels can be specified.
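Put together, a minimal figure using the calls above (the data and axis limits are arbitrary):

import matplotlib.pyplot as plt

fig = plt.figure(figsize=(6, 4))   # blank figure
fig.add_subplot(1, 1, 1)           # 1 row, 1 column, select subplot 1

plt.plot([1, 2, 3], [2, 4, 9], label='demo')
plt.title('Demo title')
plt.xlabel('x')
plt.ylabel('y')
plt.xlim(0, 4)
plt.ylim(0, 10)
plt.xticks(range(0, 5))
plt.legend()
plt.show()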

3. Displaying Chinese Characters

plt.rcParams['font.sans-serif'] = 'SimHei'  ## use the SimHei font so Chinese characters display correctly
plt.rcParams['axes.unicode_minus'] = False  ## render minus signs correctly

4. Different Chart Types

  • Scatter plot: matplotlib.pyplot.scatter(x, y, s=None, c=None, marker=None, cmap=None, norm=None, vmin=None, vmax=None, alpha=None, linewidths=None, verts=None, edgecolors=None, hold=None, data=None, **kwargs)
  • Line plot: matplotlib.pyplot.plot(*args, **kwargs)
  • Bar chart: matplotlib.pyplot.bar(left, height, width=0.8, bottom=None, hold=None, data=None, **kwargs)
  • Pie chart: matplotlib.pyplot.pie(x, explode=None, labels=None, colors=None, autopct=None, pctdistance=0.6, shadow=False, labeldistance=1.1, startangle=None, radius=None, counterclock=True, wedgeprops=None, textprops=None, center=(0, 0), frame=False, hold=None, data=None)
  • Box plot: matplotlib.pyplot.boxplot(x, notch=None, sym=None, vert=None, whis=None, positions=None, widths=None, patch_artist=None, bootstrap=None, usermedians=None, conf_intervals=None, meanline=None, showmeans=None, showcaps=None, showbox=None, showfliers=None, boxprops=None, labels=None, flierprops=None, medianprops=None, meanprops=None, capprops=None, whiskerprops=None, manage_xticks=True, autorange=False, zorder=None, hold=None, data=None)
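A quick sketch placing four of these chart types on one figure (random data):

import numpy as np
import matplotlib.pyplot as plt

np.random.seed(0)
x = np.random.rand(50)
y = np.random.rand(50)

fig, axes = plt.subplots(2, 2, figsize=(8, 6))
axes[0, 0].scatter(x, y)                     # scatter plot
axes[0, 1].plot(np.sort(x), np.sort(y))      # line plot
axes[1, 0].bar(range(5), np.random.rand(5))  # bar chart
axes[1, 1].boxplot(y)                        # box plot
plt.tight_layout()
plt.show()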

5. Demo

import numpy as np
import matplotlib.pyplot as plt

box = dict(facecolor='yellow', pad=5, alpha=0.2)

fig, ((ax1, ax2), (ax3, ax4)) = plt.subplots(2, 2)
fig.subplots_adjust(left=0.2, wspace=0.6)

# Fixing random state for reproducibility
np.random.seed(19680801)

ax1.plot(2000*np.random.rand(10))
ax1.set_title('ylabels not aligned')
ax1.set_ylabel('misaligned 1', bbox=box)
ax1.set_ylim(0, 2000)

ax3.set_ylabel('misaligned 2', bbox=box)
ax3.plot(np.random.rand(10))

labelx = -0.3  # axes coords

ax2.set_title('ylabels aligned')
ax2.plot(2000*np.random.rand(10))
ax2.set_ylabel('aligned 1', bbox=box)
ax2.yaxis.set_label_coords(labelx, 0.5)
ax2.set_ylim(0, 2000)

ax4.plot(np.random.rand(10))
ax4.set_ylabel('aligned 2', bbox=box)
ax4.yaxis.set_label_coords(labelx, 0.5)

plt.show()

V. Complete Demo

import numpy as np
import pandas as pd
airline_data = pd.read_csv("../data/air_data.csv",
    encoding="gb18030")  # load the airline data
print('Shape of the raw data:', airline_data.shape)

## Drop records whose fares are missing
exp1 = airline_data["SUM_YR_1"].notnull()
exp2 = airline_data["SUM_YR_2"].notnull()
exp = exp1 & exp2
airline_notnull = airline_data.loc[exp,:]
print('Shape after dropping records with missing fares:', airline_notnull.shape)

# Keep only records whose fares are non-zero, or whose average discount rate is
# non-zero and whose total flown kilometres are greater than 0.
index1 = airline_notnull['SUM_YR_1'] != 0
index2 = airline_notnull['SUM_YR_2'] != 0
index3 = (airline_notnull['SEG_KM_SUM'] > 0) & \
    (airline_notnull['avg_discount'] != 0)
airline = airline_notnull[(index1 | index2) & index3]
print('Shape after dropping abnormal records:', airline.shape)

airline_selection = airline[["FFP_DATE","LOAD_TIME",
    "FLIGHT_COUNT","LAST_TO_END",
    "avg_discount","SEG_KM_SUM"]]
## Build the L (membership length, in months) feature
L = pd.to_datetime(airline_selection["LOAD_TIME"]) - \
    pd.to_datetime(airline_selection["FFP_DATE"])
L = L.astype("str").str.split().str[0]
L = L.astype("int")/30
## Combine the features
airline_features = pd.concat([L,
    airline_selection.iloc[:,2:]],axis = 1)
print('First 5 rows of the LRFMC features:\n', airline_features.head())

from sklearn.preprocessing import StandardScaler
data = StandardScaler().fit_transform(airline_features)
np.savez('../tmp/airline_scale.npz',data)
print('First 5 rows of the standardized LRFMC features:\n', data[:5,:])

from sklearn.cluster import KMeans  # import the k-means algorithm
airline_scale = np.load('../tmp/airline_scale.npz')['arr_0']
k = 5  ## number of clusters

# Build and train the model
kmeans_model = KMeans(n_clusters=k, n_jobs=4, random_state=123)
fit_kmeans = kmeans_model.fit(airline_scale)  # fit the model
kmeans_model.cluster_centers_  # inspect the cluster centres

kmeans_model.labels_  # inspect each sample's cluster label

# Count the number of samples in each cluster
r1 = pd.Series(kmeans_model.labels_).value_counts()
print('Number of samples in each cluster:\n', r1)

# Draw a bar chart of each feature's value per cluster
center = kmeans_model.cluster_centers_
names = ['Membership length', 'Time since last flight', 'Flight count',
         'Mileage', 'Average discount rate']
import matplotlib.pyplot as plt
%matplotlib inline
fig = plt.figure(figsize=(8,8))
for i in range(k):
    fig.add_subplot(k, 1, i+1)
    plt.bar(range(k), center[:, i], width=0.5)  # value of feature i in each cluster
    plt.xlabel('Cluster')
    plt.ylabel(names[i])
plt.savefig('聚類分析柱形圖.png')
plt.show()

# Draw a radar chart of the cluster centres
fig = plt.figure(figsize=(8,8))
ax = fig.add_subplot(111, polar=True)  # polar axes
angles = np.linspace(0, 2*np.pi, k, endpoint=False)
angles = np.concatenate((angles, [angles[0]]))  # close the polygon
Linecolor = ['bo-','r+:','gD--','yv-.','kp-']  # marker/line style per cluster
Fillcolor = ['b','r','g','y','k']              # fill colour per cluster
for i in range(k):
    data = np.concatenate((center[i], [center[i][0]]))  # close the polygon
    ax.plot(angles, data, Linecolor[i], linewidth=2)  # outline
    ax.fill(angles, data, facecolor=Fillcolor[i], alpha=0.25)  # fill
ax.set_thetagrids(angles[:-1] * 180/np.pi, names)  # one label per feature axis (drop the duplicated closing angle)
ax.set_title("Customer segmentation radar chart", va='bottom')  ## title
ax.set_rlim(-1, 3)  ## radial axis range
ax.grid(True)

