Data Wrangling: Merging, Cleaning, and Filtering
pandas and the Python standard library provide a full set of high-level, flexible, and efficient functions and algorithms for wrangling data into the shape you need.

This post covers:

Merging datasets with .merge(), .concat(), and related methods, which work like the join operations in SQL and other relational databases.
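To make the SQL analogy concrete, here is a minimal sketch; the `users` and `orders` frames are made up for illustration and do not appear elsewhere in this post:

```python
import pandas as pd

# Two small frames standing in for relational tables (names are illustrative only)
users = pd.DataFrame({'user_id': [1, 2, 3], 'name': ['ann', 'bob', 'cat']})
orders = pd.DataFrame({'user_id': [1, 1, 3], 'amount': [10.0, 20.0, 5.0]})

# pandas inner join on the shared column
merged = pd.merge(users, orders, on='user_id', how='inner')
print(merged)

# Roughly equivalent SQL:
#   SELECT u.user_id, u.name, o.amount
#   FROM users u
#   INNER JOIN orders o ON u.user_id = o.user_id
```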
Merging Datasets
1) merge function parameters
| Parameter | Description |
| --- | --- |
| left | DataFrame on the left side of the merge |
| right | DataFrame on the right side of the merge |
| how | Join type: 'inner' (default), 'outer', 'left', or 'right' |
| on | Column name(s) to join on; must be present in both DataFrames. If not specified, the intersection of the two DataFrames' column names is used as the join key(s) |
| left_on | Column(s) in the left DataFrame to use as join keys |
| right_on | Column(s) in the right DataFrame to use as join keys |
| left_index | Use the row index of the left DataFrame as its join key |
| right_index | Use the row index of the right DataFrame as its join key |
| sort | Sort the merged result by the join keys; default True. Disabling it can give better performance on large datasets |
| suffixes | Tuple of strings appended to overlapping column names; default ('_x', '_y'). For example, if both DataFrames have a 'data' column, the result will contain 'data_x' and 'data_y' |
| copy | If False, avoid copying data into the result in certain special cases; by default data is always copied |
1. Many-to-one merge (the join-key column has duplicate values in one table and no duplicates in the other)
```python
import pandas as pd
import numpy as np

df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)})
df1
```

| | data1 | key |
| --- | --- | --- |
| 0 | 0 | b |
| 1 | 1 | b |
| 2 | 2 | a |
| 3 | 3 | c |
| 4 | 4 | a |
| 5 | 5 | a |
| 6 | 6 | b |
```python
df2 = pd.DataFrame({'key': ['a', 'b', 'd'], 'data2': range(3)})
df2
```

| | data2 | key |
| --- | --- | --- |
| 0 | 0 | a |
| 1 | 1 | b |
| 2 | 2 | d |
```python
pd.merge(df1, df2)
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0 | b | 1 |
| 1 | 1 | b | 1 |
| 2 | 6 | b | 1 |
| 3 | 2 | a | 0 |
| 4 | 4 | a | 0 |
| 5 | 5 | a | 0 |
```python
df1.merge(df2)
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0 | b | 1 |
| 1 | 1 | b | 1 |
| 2 | 6 | b | 1 |
| 3 | 2 | a | 0 |
| 4 | 4 | a | 0 |
| 5 | 5 | a | 0 |
```python
df1.merge(df2, on='key', how='inner')
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0 | b | 1 |
| 1 | 1 | b | 1 |
| 2 | 6 | b | 1 |
| 3 | 2 | a | 0 |
| 4 | 4 | a | 0 |
| 5 | 5 | a | 0 |
```python
df1.merge(df2, on='key', how='outer')
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0.0 | b | 1.0 |
| 1 | 1.0 | b | 1.0 |
| 2 | 6.0 | b | 1.0 |
| 3 | 2.0 | a | 0.0 |
| 4 | 4.0 | a | 0.0 |
| 5 | 5.0 | a | 0.0 |
| 6 | 3.0 | c | NaN |
| 7 | NaN | d | 2.0 |
```python
df1.merge(df2, on='key', how='left')
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0 | b | 1.0 |
| 1 | 1 | b | 1.0 |
| 2 | 2 | a | 0.0 |
| 3 | 3 | c | NaN |
| 4 | 4 | a | 0.0 |
| 5 | 5 | a | 0.0 |
| 6 | 6 | b | 1.0 |
```python
df1.merge(df2, on='key', how='right')
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0.0 | b | 1 |
| 1 | 1.0 | b | 1 |
| 2 | 6.0 | b | 1 |
| 3 | 2.0 | a | 0 |
| 4 | 4.0 | a | 0 |
| 5 | 5.0 | a | 0 |
| 6 | NaN | d | 2 |
If the key columns in the left and right DataFrames have different names but overlapping values, use left_on and right_on to name the join key on each side.
```python
df3 = pd.DataFrame({'lkey': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)})
df3
```

| | data1 | lkey |
| --- | --- | --- |
| 0 | 0 | b |
| 1 | 1 | b |
| 2 | 2 | a |
| 3 | 3 | c |
| 4 | 4 | a |
| 5 | 5 | a |
| 6 | 6 | b |
```python
df4 = pd.DataFrame({'rkey': ['a', 'b', 'd'], 'data2': range(3)})
df4
```

| | data2 | rkey |
| --- | --- | --- |
| 0 | 0 | a |
| 1 | 1 | b |
| 2 | 2 | d |
```python
df3.merge(df4, left_on='lkey', right_on='rkey', how='inner')
```

| | data1 | lkey | data2 | rkey |
| --- | --- | --- | --- | --- |
| 0 | 0 | b | 1 | b |
| 1 | 1 | b | 1 | b |
| 2 | 6 | b | 1 | b |
| 3 | 2 | a | 0 | a |
| 4 | 4 | a | 0 | a |
| 5 | 5 | a | 0 | a |
2. Many-to-many merge (the join-key column has duplicate values in both tables)
```python
df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)})
df1
```

| | data1 | key |
| --- | --- | --- |
| 0 | 0 | b |
| 1 | 1 | b |
| 2 | 2 | a |
| 3 | 3 | c |
| 4 | 4 | a |
| 5 | 5 | a |
| 6 | 6 | b |
```python
df5 = pd.DataFrame({'key': ['a', 'b', 'a', 'b', 'b'], 'data2': range(5)})
df5
```

| | data2 | key |
| --- | --- | --- |
| 0 | 0 | a |
| 1 | 1 | b |
| 2 | 2 | a |
| 3 | 3 | b |
| 4 | 4 | b |
```python
df1.merge(df5)
```

| | data1 | key | data2 |
| --- | --- | --- | --- |
| 0 | 0 | b | 1 |
| 1 | 0 | b | 3 |
| 2 | 0 | b | 4 |
| 3 | 1 | b | 1 |
| 4 | 1 | b | 3 |
| 5 | 1 | b | 4 |
| 6 | 6 | b | 1 |
| 7 | 6 | b | 3 |
| 8 | 6 | b | 4 |
| 9 | 2 | a | 0 |
| 10 | 2 | a | 2 |
| 11 | 4 | a | 0 |
| 12 | 4 | a | 2 |
| 13 | 5 | a | 0 |
| 14 | 5 | a | 2 |
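The 15 result rows follow directly from counting matching rows per key. A minimal sketch of that arithmetic (the frames are redefined here so the snippet runs on its own):

```python
import pandas as pd

df1 = pd.DataFrame({'key': ['b', 'b', 'a', 'c', 'a', 'a', 'b'], 'data1': range(7)})
df5 = pd.DataFrame({'key': ['a', 'b', 'a', 'b', 'b'], 'data2': range(5)})

# Rows per key value on each side
left_counts = df1['key'].value_counts()    # b: 3, a: 3, c: 1
right_counts = df5['key'].value_counts()   # b: 3, a: 2

# Each shared key contributes left_count * right_count rows to the inner merge:
# 'b' -> 3 * 3 = 9 rows, 'a' -> 3 * 2 = 6 rows, 'c' has no match and is dropped.
shared = left_counts.index.intersection(right_counts.index)
print(int((left_counts[shared] * right_counts[shared]).sum()))  # 15
print(len(df1.merge(df5)))                                      # 15
```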
Merge summary

1) By default, columns with the same name in both tables are used as the join keys.
2) A many-to-many merge produces the Cartesian product of the matching rows: if the left table has three rows with a given key value and the right table has two rows with that same value, the result contains 3 × 2 = 6 rows for that key (as verified in the sketch above).
3) Merging on multiple join keys
```python
left = pd.DataFrame({'key1': ['foo', 'foo', 'bar'],
                     'key2': ['one', 'one', 'two'],
                     'lval': [1, 2, 3]})
right = pd.DataFrame({'key1': ['foo', 'foo', 'bar', 'bar'],
                      'key2': ['one', 'one', 'one', 'two'],
                      'rval': [4, 5, 6, 7]})
left
```

| | key1 | key2 | lval |
| --- | --- | --- | --- |
| 0 | foo | one | 1 |
| 1 | foo | one | 2 |
| 2 | bar | two | 3 |
```python
right
```

| | key1 | key2 | rval |
| --- | --- | --- | --- |
| 0 | foo | one | 4 |
| 1 | foo | one | 5 |
| 2 | bar | one | 6 |
| 3 | bar | two | 7 |
```python
pd.merge(left, right, on=['key1', 'key2'], how='outer')
```

| | key1 | key2 | lval | rval |
| --- | --- | --- | --- | --- |
| 0 | foo | one | 1.0 | 4 |
| 1 | foo | one | 1.0 | 5 |
| 2 | foo | one | 2.0 | 4 |
| 3 | foo | one | 2.0 | 5 |
| 4 | bar | two | 3.0 | 7 |
| 5 | bar | one | NaN | 6 |
1) When the join keys have a many-to-many relationship, the result is again the Cartesian product of the matching rows.
2) With multiple join keys, rows match only when the whole tuple of key values is equal.
4) Handling overlapping column names that are not join keys
```python
pd.merge(left, right, on='key1')
```

| | key1 | key2_x | lval | key2_y | rval |
| --- | --- | --- | --- | --- | --- |
| 0 | foo | one | 1 | one | 4 |
| 1 | foo | one | 1 | one | 5 |
| 2 | foo | one | 2 | one | 4 |
| 3 | foo | one | 2 | one | 5 |
| 4 | bar | two | 3 | one | 6 |
| 5 | bar | two | 3 | two | 7 |
```python
pd.merge(left, right, on='key1', suffixes=('_left', '_right'))
```

| | key1 | key2_left | lval | key2_right | rval |
| --- | --- | --- | --- | --- | --- |
| 0 | foo | one | 1 | one | 4 |
| 1 | foo | one | 1 | one | 5 |
| 2 | foo | one | 2 | one | 4 |
| 3 | foo | one | 2 | one | 5 |
| 4 | bar | two | 3 | one | 6 |
| 5 | bar | two | 3 | two | 7 |
2) Merging on the index

When the join key lives in a DataFrame's index, this is called merging on the index: pass left_index=True and/or right_index=True to merge to indicate that the index should be used as the join key.
- The join key is the index in one table and an ordinary column in the other
```python
left1 = pd.DataFrame({'key': ['a', 'b', 'a', 'a', 'b', 'c'], 'value': range(6)})
left1
```

| | key | value |
| --- | --- | --- |
| 0 | a | 0 |
| 1 | b | 1 |
| 2 | a | 2 |
| 3 | a | 3 |
| 4 | b | 4 |
| 5 | c | 5 |
```python
right1 = pd.DataFrame({'group_val': [3.5, 7]}, index=['a', 'b'])
right1
```

```python
pd.merge(left1, right1, left_on='key', right_index=True)
```

| | key | value | group_val |
| --- | --- | --- | --- |
| 0 | a | 0 | 3.5 |
| 2 | a | 2 | 3.5 |
| 3 | a | 3 | 3.5 |
| 1 | b | 1 | 7.0 |
| 4 | b | 4 | 7.0 |
As the example shows, left_on/right_on name ordinary columns to use as join keys, while left_index/right_index use the index as the join key; the two kinds of options can be mixed to say, on each side, whether the key is a column or the index.
- Both tables use their index as the join key
```python
left2 = pd.DataFrame(np.arange(6).reshape(3, 2),
                     index=['a', 'b', 'e'], columns=['0hio', 'nevada'])
right2 = pd.DataFrame(np.arange(7, 15).reshape(4, 2),
                      index=['b', 'c', 'd', 'e'], columns=['misso', 'ala'])
left2
```

| | 0hio | nevada |
| --- | --- | --- |
| a | 0 | 1 |
| b | 2 | 3 |
| e | 4 | 5 |
```python
right2
```

| | misso | ala |
| --- | --- | --- |
| b | 7 | 8 |
| c | 9 | 10 |
| d | 11 | 12 |
| e | 13 | 14 |
```python
pd.merge(left2, right2, left_index=True, right_index=True, how='outer')
```

| | 0hio | nevada | misso | ala |
| --- | --- | --- | --- | --- |
| a | 0.0 | 1.0 | NaN | NaN |
| b | 2.0 | 3.0 | 7.0 | 8.0 |
| c | NaN | NaN | 9.0 | 10.0 |
| d | NaN | NaN | 11.0 | 12.0 |
| e | 4.0 | 5.0 | 13.0 | 14.0 |
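For index-on-index merges like this, DataFrame.join is a convenient shorthand: it aligns on the index by default (with a left join unless told otherwise). A minimal sketch that should reproduce the outer merge above:

```python
import numpy as np
import pandas as pd

left2 = pd.DataFrame(np.arange(6).reshape(3, 2),
                     index=['a', 'b', 'e'], columns=['0hio', 'nevada'])
right2 = pd.DataFrame(np.arange(7, 15).reshape(4, 2),
                      index=['b', 'c', 'd', 'e'], columns=['misso', 'ala'])

# join aligns on the index by default; how='outer' keeps labels from both sides,
# matching merge(..., left_index=True, right_index=True, how='outer')
joined = left2.join(right2, how='outer')
print(joined)
```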
3) Concatenation along an axis

Another way to combine data is concatenation along an axis: NumPy has the concatenate function, and pandas provides the corresponding concat function.
```python
arr = np.arange(12).reshape((3, 4))
arr
```

```
array([[ 0,  1,  2,  3],
       [ 4,  5,  6,  7],
       [ 8,  9, 10, 11]])
```
```python
np.concatenate([arr, arr], axis=1)
```

```
array([[ 0,  1,  2,  3,  0,  1,  2,  3],
       [ 4,  5,  6,  7,  4,  5,  6,  7],
       [ 8,  9, 10, 11,  8,  9, 10, 11]])
```
Parameters of the concat function
| Parameter | Description |
| --- | --- |
| objs | List or dict of pandas objects to concatenate; the only required argument |
| axis | Axis to concatenate along: 0 (rows, the default) or 1 (columns) |
| join | 'inner' (intersection) or 'outer' (union, the default); determines how the indexes along the other axes are combined |
| join_axes | Specific indexes to use for the other n-1 axes instead of performing union/intersection logic |
| keys | Values to associate with the objects being concatenated, used to build a hierarchical index (the outer level) along the concatenation axis; can be a list or array of arbitrary values, a list of tuples, or a list of arrays (if levels is given as multi-level arrays) |
| levels | Specific indexes to use as the hierarchical index level(s) (the inner levels) when keys is passed |
| names | Names for the created hierarchical levels when keys and/or levels are passed |
| verify_integrity | Check the new axis of the result for duplicates and raise an exception if any are found; default False, which allows duplicates |
| ignore_index | Do not preserve the index along the concatenation axis; produce a new range(total_length) index instead |
```python
s1 = pd.Series([0, 1, 2], index=['a', 'b', 'c'])
s2 = pd.Series([2, 3, 4], index=['c', 'f', 'e'])
s3 = pd.Series([4, 5, 6], index=['c', 'f', 'g'])
```

```python
pd.concat([s1, s2, s3])
```

```
a    0
b    1
c    2
c    2
f    3
e    4
c    4
f    5
g    6
dtype: int64
```

```python
pd.concat([s1, s2, s3], ignore_index=True)
```

```
0    0
1    1
2    2
3    2
4    3
5    4
6    4
7    5
8    6
dtype: int64
```
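Instead of dropping the labels, the keys parameter tags each input, producing a hierarchical (MultiIndex) result along the concatenation axis. A minimal sketch (the Series are redefined so the snippet is self-contained; the key labels are arbitrary):

```python
import pandas as pd

s1 = pd.Series([0, 1, 2], index=['a', 'b', 'c'])
s2 = pd.Series([2, 3, 4], index=['c', 'f', 'e'])
s3 = pd.Series([4, 5, 6], index=['c', 'f', 'g'])

# Each key becomes the outer level of a MultiIndex on the concatenation axis
stacked = pd.concat([s1, s2, s3], keys=['one', 'two', 'three'])
print(stacked)

# Along axis=1 the same keys become the column names instead
wide = pd.concat([s1, s2, s3], axis=1, keys=['one', 'two', 'three'])
print(wide)
```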
```python
pd.concat([s1, s2, s3], axis=1, join='inner')
```

```python
pd.concat([s1, s2, s3], axis=1, join='outer')
```

| | 0 | 1 | 2 |
| --- | --- | --- | --- |
| a | 0.0 | NaN | NaN |
| b | 1.0 | NaN | NaN |
| c | 2.0 | 2.0 | 4.0 |
| e | NaN | 4.0 | NaN |
| f | NaN | 3.0 | 5.0 |
| g | NaN | NaN | 6.0 |
concat summary

1) Vertical (axis=0) concatenation with ignore_index=False may produce duplicate index labels.
2) Horizontal (axis=1) concatenation requires the objects' indexes to be free of duplicates, because the indexes have to be aligned.
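A minimal sketch illustrating point 2; the exact exception raised may vary by pandas version, so the failing call is wrapped in a try/except:

```python
import pandas as pd

s_dup = pd.Series([0, 1, 2], index=['a', 'a', 'b'])   # duplicate label 'a'
s_ok = pd.Series([3, 4], index=['a', 'b'])

# Vertical concat tolerates duplicate labels (point 1 above)
print(pd.concat([s_dup, s_ok]))

# Horizontal concat must align the two indexes; with duplicate labels this
# typically fails (exception type depends on the pandas version)
try:
    print(pd.concat([s_dup, s_ok], axis=1))
except Exception as exc:  # broad on purpose, for illustration only
    print(f"axis=1 concat failed: {type(exc).__name__}: {exc}")
```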
4) Combining overlapping data

This applies when:
1) the two objects' indexes overlap in part or in full, and
2) you want to "patch" missing values in the calling object with values from the object passed in.
```python
a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan], index=['a', 'b', 'c', 'd', 'e', 'f'])
b = pd.Series(np.arange(len(a)), index=['a', 'b', 'c', 'd', 'e', 'f'])
```

```python
a
```

```
a    NaN
b    2.5
c    NaN
d    3.5
e    4.5
f    NaN
dtype: float64
```
```python
b
```

```
a    0
b    1
c    2
d    3
e    4
f    5
dtype: int32
```
```python
a.combine_first(b)
```

```
a    0.0
b    2.5
c    2.0
d    3.5
e    4.5
f    5.0
dtype: float64
```
```python
a = pd.Series([np.nan, 2.5, np.nan, 3.5, 4.5, np.nan], index=['g', 'b', 'c', 'd', 'e', 'f'])
a.combine_first(b)
```

```
a    0.0
b    2.5
c    2.0
d    3.5
e    4.5
f    5.0
g    NaN
dtype: float64
```
Summary

This post covered:

1) Merging datasets with the merge function
2) Concatenating datasets with the concat function
3) Filling missing values across objects with overlapping indexes using combine_first