Since 2005, Python has seen steadily growing use in the financial industry, driven largely by increasingly mature libraries (NumPy and pandas) and a large pool of experienced programmers. Many institutions have found that Python is not only well suited as an interactive analysis environment but also for building robust production systems, in much less time than Java or C++ would require. Python is also an excellent glue layer: it is easy to build Python interfaces to libraries written in C or C++.
Financial analysis is a broad and deep field. The effort spent on data munging often far exceeds the time spent on the core modeling and research questions.
In this chapter, the term cross-section refers to data at a single point in time. For example, the closing prices of all the constituents of the S&P 500 index on a particular date form a cross-section. Cross-sections of multiple data items at multiple time points form a panel. Panel data can be represented either as a hierarchically indexed DataFrame or as a three-dimensional pandas Panel object.
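As a minimal illustration (with made-up dates, tickers, and prices, not data from the book), a panel held as a hierarchically indexed DataFrame might look like this:

import pandas as pd

# a tiny made-up panel: two dates x two tickers stacked into a MultiIndex
idx = pd.MultiIndex.from_tuples([('2011-09-06', 'AAPL'), ('2011-09-06', 'XOM'),
                                 ('2011-09-07', 'AAPL'), ('2011-09-07', 'XOM')],
                                names=['date', 'ticker'])
panel = pd.DataFrame({'close': [379.74, 71.15, 383.93, 73.65]}, index=idx)
print panel
# a single cross-section is then panel.xs('2011-09-06', level='date')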
1. Data munging topics
- Time series and cross-section alignment
One of the most vexing issues when working with financial data is so-called data alignment. The indexes of two time series may not line up exactly, or two DataFrame objects may contain rows or columns that do not match. MATLAB and R users typically spend a great deal of time aligning data by hand (and it really is tedious).
pandas aligns data automatically in arithmetic operations, which is a great convenience and a real productivity gain.
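A minimal sketch of what that automatic alignment means (toy Series values, not from the book): when two objects with different indexes are combined, the result is indexed by the union of the labels, and non-overlapping labels come back as NaN.

from pandas import Series

s1 = Series([1.0, 2.0, 3.0], index=['a', 'b', 'c'])
s2 = Series([10.0, 20.0, 30.0], index=['b', 'c', 'd'])
print s1 + s2   # 'a' and 'd' become NaN; 'b' and 'c' are added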
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
import time

prices = pd.read_csv('E:\\stock_px.csv', parse_dates=True, index_col=0)
volume = pd.read_csv('E:\\volume.csv', parse_dates=True, index_col=0)
prices = prices.ix['2011-09-06':'2011-09-14', ['AAPL', 'JNJ', 'SPX', 'XOM']]
volume = volume.ix['2011-09-06':'2011-09-12', ['AAPL', 'JNJ', 'XOM']]
print prices
print volume, '\n'

# To compute a volume-weighted average price, just do the following
vwap = (prices * volume).sum() / volume.sum()   # sum ignores NaN automatically
print vwap, '\n'
print vwap.dropna(), '\n'

# DataFrame's align method performs an explicit alignment of the two objects
print prices.align(volume, join='inner')

# Another indispensable feature: constructing a DataFrame from a set of Series
# whose indexes may differ
s1 = Series(range(3), index=['a', 'b', 'c'])
s2 = Series(range(4), index=['d', 'b', 'c', 'e'])
s3 = Series(range(3), index=['f', 'a', 'c'])
data = DataFrame({'one': s1, 'two': s2, 'three': s3})
print data

>>>
              AAPL    JNJ      SPX    XOM
2011-09-06  379.74  64.64  1165.24  71.15
2011-09-07  383.93  65.43  1198.62  73.65
2011-09-08  384.14  64.95  1185.90  72.82
2011-09-09  377.48  63.64  1154.23  71.01
2011-09-12  379.94  63.59  1162.27  71.84
2011-09-13  384.62  63.61  1172.87  71.65
2011-09-14  389.30  63.73  1188.68  72.64

                AAPL       JNJ       XOM
2011-09-06  18173500  15848300  25416300
2011-09-07  12492000  10759700  23108400
2011-09-08  14839800  15551500  22434800
2011-09-09  20171900  17008200  27969100
2011-09-12  16697300  13448200  26205800

AAPL    380.655181
JNJ      64.394769
SPX            NaN
XOM      72.024288

AAPL    380.655181
JNJ      64.394769
XOM      72.024288

(              AAPL    JNJ    XOM
 2011-09-06  379.74  64.64  71.15
 2011-09-07  383.93  65.43  73.65
 2011-09-08  384.14  64.95  72.82
 2011-09-09  377.48  63.64  71.01
 2011-09-12  379.94  63.59  71.84,
                 AAPL       JNJ       XOM
 2011-09-06  18173500  15848300  25416300
 2011-09-07  12492000  10759700  23108400
 2011-09-08  14839800  15551500  22434800
 2011-09-09  20171900  17008200  27969100
 2011-09-12  16697300  13448200  26205800)

   one  three  two
a    0      1  NaN
b    1    NaN    1
c    2      2    2
d  NaN    NaN    0
e  NaN    NaN    3
f  NaN      0  NaN
[Finished in 2.8s]
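A side note on API changes: the .ix indexer used above was deprecated and later removed from pandas. On a recent version, the same row/column selection would be written with the label-based .loc indexer, roughly as in this sketch:

# equivalent selection with .loc (pandas >= 0.20)
prices = prices.loc['2011-09-06':'2011-09-14', ['AAPL', 'JNJ', 'SPX', 'XOM']]
volume = volume.loc['2011-09-06':'2011-09-12', ['AAPL', 'JNJ', 'XOM']]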
- Operations between time series of different frequencies
Economic time series are often recorded at annual, quarterly, monthly, daily, or some other frequency.
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
import time

ts1 = Series(np.random.randn(3),
             index=pd.date_range('2012-6-13', periods=3, freq='W-WED'))
print ts1
# Resampling to business-day frequency introduces missing values
print ts1.resample('B')
print ts1.resample('B', fill_method='ffill'), '\n'

# Now look at an irregular time series
dates = pd.DatetimeIndex(['2012-6-12', '2012-6-17', '2012-6-18',
                          '2012-6-21', '2012-6-22', '2012-6-29'])
ts2 = Series(np.random.randn(6), index=dates)
print ts2, '\n'

# To add the forward-filled values of ts1 to ts2, one option is to bring both to a
# common frequency and then add, but to keep ts2's date index it is better to reindex
print ts1.reindex(ts2.index, method='ffill'), '\n'
print ts2 + ts1.reindex(ts2.index, method='ffill'), '\n'

>>>
2012-06-13   -0.855102
2012-06-20   -1.242206
2012-06-27    0.380710
Freq: W-WED

2012-06-13   -0.855102
2012-06-14         NaN
2012-06-15         NaN
2012-06-18         NaN
2012-06-19         NaN
2012-06-20   -1.242206
2012-06-21         NaN
2012-06-22         NaN
2012-06-25         NaN
2012-06-26         NaN
2012-06-27    0.380710
Freq: B

2012-06-13   -0.855102
2012-06-14   -0.855102
2012-06-15   -0.855102
2012-06-18   -0.855102
2012-06-19   -0.855102
2012-06-20   -1.242206
2012-06-21   -1.242206
2012-06-22   -1.242206
2012-06-25   -1.242206
2012-06-26   -1.242206
2012-06-27    0.380710
Freq: B

2012-06-12   -1.248346
2012-06-17    0.833907
2012-06-18    0.235492
2012-06-21   -1.172378
2012-06-22   -0.111804
2012-06-29   -0.458527

2012-06-12         NaN
2012-06-17   -0.855102
2012-06-18   -0.855102
2012-06-21   -1.242206
2012-06-22   -1.242206
2012-06-29    0.380710

2012-06-12         NaN
2012-06-17   -0.021195
2012-06-18   -0.619610
2012-06-21   -2.414584
2012-06-22   -1.354010
2012-06-29   -0.077817
[Finished in 1.7s]
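Note that in pandas 0.18 and later, resample() returns a Resampler object and the fill_method argument is gone; the fill behaviour becomes a chained method call, roughly:

# modern resample API, same frequencies as above
upsampled = ts1.resample('B').asfreq()   # business-day frequency, gaps left as NaN
filled = ts1.resample('B').ffill()       # forward-fill the gaps instead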
- Using Period
Periods are a useful tool, especially for financial or economic series that follow special annual or quarterly conventions. For example, a company might publish quarterly earnings reports for a fiscal year ending in June, i.e. at Q-JUN frequency. Here are two examples:
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
import time

gdp = Series([1.78, 1.94, 2.08, 2.01, 2.15, 2.31, 2.46],
             index=pd.period_range('1984Q2', periods=7, freq='Q-SEP'))
print gdp, '\n'
infl = Series([0.025, 0.045, 0.037, 0.04],
              index=pd.period_range('1982', periods=4, freq='A-DEC'))
print infl, '\n'

# Unlike Timestamp-indexed series, period-indexed series at different frequencies
# must be converted explicitly before they can be combined.
# Convert to quarterly frequency with the year ending in September;
# how='end' maps each annual period to its last quarter of that year
infl_q = infl.asfreq('Q-SEP', how='end')
print infl.asfreq('Q-SEP', how='start'), '\n'   # for comparison, how='start'
print infl_q, '\n'
# After the explicit conversion, the series can be reindexed (with forward filling)
print infl_q.reindex(gdp.index, method='ffill')

>>>
1984Q2    1.78
1984Q3    1.94
1984Q4    2.08
1985Q1    2.01
1985Q2    2.15
1985Q3    2.31
1985Q4    2.46
Freq: Q-SEP

1982    0.025
1983    0.045
1984    0.037
1985    0.040
Freq: A-DEC

1982Q2    0.025
1983Q2    0.045
1984Q2    0.037
1985Q2    0.040
Freq: Q-SEP

1983Q1    0.025
1984Q1    0.045
1985Q1    0.037
1986Q1    0.040
Freq: Q-SEP

1984Q2    0.045
1984Q3    0.045
1984Q4    0.045
1985Q1    0.037
1985Q2    0.037
1985Q3    0.037
1985Q4    0.037
Freq: Q-SEP
[Finished in 1.4s]
- Time of day and "as of" data selection
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
from datetime import time

# Suppose we have a long intraday series and want to pull out a few observations.
# What if the data is not regular?
rng = pd.date_range('2012-06-01 09:30', '2012-06-01 15:59', freq='T')
print type(rng)   # a DatetimeIndex, which supports append
# Note the trick below: offsetting the index gives us several more business days of data
rng = rng.append([rng + pd.offsets.BDay(i) for i in range(1, 4)])
print rng, '\n'
ts = Series(np.arange(len(rng), dtype=float), index=rng)
print ts, '\n'

print time(10, 0)             # this is 10:00 AM
print ts[time(10, 0)], '\n'   # keep only the 10:00 observations
# Indexing with a time actually uses the at_time instance method
# (available on time series and, likewise, on DataFrame objects)
print ts.at_time(time(10, 0)), '\n'
# There is also between_time for selecting values between two time objects
print ts.between_time(time(10, 0), time(10, 1)), '\n'

# It may happen that no data falls exactly on a particular time, such as 10 AM.
# In that case we might want the last value observed at or before 10 AM.
# Randomly set most of the series to NA
indexer = np.sort(np.random.permutation(len(ts))[700:])
irr_ts = ts.copy()
irr_ts[indexer] = np.nan
print irr_ts['2012-06-01 09:50':'2012-06-01 10:00'], '\n'
# Passing an array of Timestamps to the asof method returns the last valid (non-NA)
# value at or before each of those times. For example, construct a date range of
# 10 AM timestamps and pass it to asof:
selection = pd.date_range('2012-06-01 10:00', periods=4, freq='B')
print irr_ts.asof(selection)

>>>
<class 'pandas.tseries.index.DatetimeIndex'>
<class 'pandas.tseries.index.DatetimeIndex'>
[2012-06-01 09:30:00, ..., 2012-06-06 15:59:00]
Length: 1560, Freq: None, Timezone: None

2012-06-01 09:30:00     0
2012-06-01 09:31:00     1
2012-06-01 09:32:00     2
2012-06-01 09:33:00     3
2012-06-01 09:34:00     4
2012-06-01 09:35:00     5
2012-06-01 09:36:00     6
2012-06-01 09:37:00     7
2012-06-01 09:38:00     8
2012-06-01 09:39:00     9
2012-06-01 09:40:00    10
2012-06-01 09:41:00    11
2012-06-01 09:42:00    12
2012-06-01 09:43:00    13
2012-06-01 09:44:00    14
...
2012-06-06 15:45:00    1545
2012-06-06 15:46:00    1546
2012-06-06 15:47:00    1547
2012-06-06 15:48:00    1548
2012-06-06 15:49:00    1549
2012-06-06 15:50:00    1550
2012-06-06 15:51:00    1551
2012-06-06 15:52:00    1552
2012-06-06 15:53:00    1553
2012-06-06 15:54:00    1554
2012-06-06 15:55:00    1555
2012-06-06 15:56:00    1556
2012-06-06 15:57:00    1557
2012-06-06 15:58:00    1558
2012-06-06 15:59:00    1559
Length: 1560

10:00:00
2012-06-01 10:00:00      30
2012-06-04 10:00:00     420
2012-06-05 10:00:00     810
2012-06-06 10:00:00    1200

2012-06-01 10:00:00      30
2012-06-04 10:00:00     420
2012-06-05 10:00:00     810
2012-06-06 10:00:00    1200

2012-06-01 10:00:00      30
2012-06-01 10:01:00      31
2012-06-04 10:00:00     420
2012-06-04 10:01:00     421
2012-06-05 10:00:00     810
2012-06-05 10:01:00     811
2012-06-06 10:00:00    1200
2012-06-06 10:01:00    1201

2012-06-01 09:50:00    20
2012-06-01 09:51:00    21
2012-06-01 09:52:00    22
2012-06-01 09:53:00   NaN
2012-06-01 09:54:00    24
2012-06-01 09:55:00    25
2012-06-01 09:56:00    26
2012-06-01 09:57:00    27
2012-06-01 09:58:00   NaN
2012-06-01 09:59:00    29
2012-06-01 10:00:00    30

2012-06-01 10:00:00      30
2012-06-04 10:00:00     419
2012-06-05 10:00:00     810
2012-06-06 10:00:00    1199
Freq: B
[Finished in 1.2s]
- Splicing together data from multiple sources
Chapter 7 covered the mechanics of combining data. In financial or economic applications, a few other situations come up frequently:
- Switching from one data source to another at a specific point in time
- "Patching" missing values in one time series using another time series
- Replacing symbols (countries, asset tickers, and so on) in the data with actual data
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
from datetime import time

# Switching data sources at a specific time is just a concat of the two pieces
data1 = DataFrame(np.ones((6, 3), dtype=float), columns=['a', 'b', 'c'],
                  index=pd.date_range('6/12/2012', periods=6))
data2 = DataFrame(np.ones((6, 3), dtype=float) * 2, columns=['a', 'b', 'c'],
                  index=pd.date_range('6/13/2012', periods=6))
print data1
print data2, '\n'
spliced = pd.concat([data1.ix[:'2012-06-14'], data2.ix['2012-06-15':]])
print spliced, '\n'

# Suppose data1 is missing a time series that exists in data2
data2 = DataFrame(np.ones((6, 4), dtype=float) * 2, columns=['a', 'b', 'c', 'd'],
                  index=pd.date_range('6/13/2012', periods=6))
spliced = pd.concat([data1.ix[:'2012-06-14'], data2.ix['2012-06-15':]])
print spliced, '\n'
# combine_first brings in data from before the splice point as well,
# extending the history of the 'd' column
spliced_filled = spliced.combine_first(data2)
print spliced_filled, '\n'

# DataFrame also has a related method, update, which modifies the frame in place.
# To fill only the holes you must pass overwrite=False;
# without it, update overwrites every aligned value
spliced.update(data2, overwrite=False)
print spliced, '\n'

# The techniques above can be used to replace symbols in the data with actual data,
# but sometimes it is simpler to set the columns directly via DataFrame indexing
cp_spliced = spliced.copy()
cp_spliced[['a', 'c']] = data1[['a', 'c']]
print cp_spliced

>>>
            a  b  c
2012-06-12  1  1  1
2012-06-13  1  1  1
2012-06-14  1  1  1
2012-06-15  1  1  1
2012-06-16  1  1  1
2012-06-17  1  1  1

            a  b  c
2012-06-13  2  2  2
2012-06-14  2  2  2
2012-06-15  2  2  2
2012-06-16  2  2  2
2012-06-17  2  2  2
2012-06-18  2  2  2

            a  b  c
2012-06-12  1  1  1
2012-06-13  1  1  1
2012-06-14  1  1  1
2012-06-15  2  2  2
2012-06-16  2  2  2
2012-06-17  2  2  2
2012-06-18  2  2  2

            a  b  c    d
2012-06-12  1  1  1  NaN
2012-06-13  1  1  1  NaN
2012-06-14  1  1  1  NaN
2012-06-15  2  2  2    2
2012-06-16  2  2  2    2
2012-06-17  2  2  2    2
2012-06-18  2  2  2    2

            a  b  c    d
2012-06-12  1  1  1  NaN
2012-06-13  1  1  1    2
2012-06-14  1  1  1    2
2012-06-15  2  2  2    2
2012-06-16  2  2  2    2
2012-06-17  2  2  2    2
2012-06-18  2  2  2    2

            a  b  c    d
2012-06-12  1  1  1  NaN
2012-06-13  1  1  1    2
2012-06-14  1  1  1    2
2012-06-15  2  2  2    2
2012-06-16  2  2  2    2
2012-06-17  2  2  2    2
2012-06-18  2  2  2    2

              a  b    c    d
2012-06-12    1  1    1  NaN
2012-06-13    1  1    1    2
2012-06-14    1  1    1    2
2012-06-15    1  2    1    2
2012-06-16    1  2    1    2
2012-06-17    1  2    1    2
2012-06-18  NaN  2  NaN    2
[Finished in 1.4s]
- Return indexes and cumulative returns
In finance, the return usually refers to the percentage change in the price of an asset. For example, a price moving from 100 to 105 is a return of 105 / 100 - 1 = 5%.
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
from datetime import time
import pandas.io.data as web

# Apple's stock price data from 2011 onward
price = web.get_data_yahoo('AAPL', '2011-01-01')['Adj Close']
print price[-5:]
# For Apple's stock (which paid no dividends at the time), the cumulative percent
# return between two dates is just the percentage change in price
print price['2011-10-03'] / price['2011-3-01'] - 1, '\n'

# For dividend-paying stocks, figuring out how much you earned (or lost ...) is more
# involved, although the adjusted close used here is already adjusted for splits and
# dividends. Either way, it is common to first derive a return index, a time series
# indicating the value of a unit investment (one dollar, say).
# Many assumptions can be layered on top of the return index, for example whether
# profits are reinvested. A simple return index can be computed with cumprod:
returns = price.pct_change()
ret_index = (1 + returns).cumprod()
ret_index[0] = 1
print ret_index, '\n'

# With a return index in hand, computing cumulative returns over a given period is easy
m_returns = ret_index.resample('BM', how='last').pct_change()
print m_returns['2012']
# Of course, in this simple case (no dividends, no other adjustments) the same result
# can be obtained by resampling (aggregating, here to periods) the daily percent changes
m_rets = (1 + returns).resample('M', how='prod', kind='period') - 1
print m_rets['2012']

# If the dividend dates and payout ratios were known, they could be added to the daily
# total returns like this (dividend_dates and dividend_pcts are placeholders):
# returns[dividend_dates] += dividend_pcts

>>>
Date
2015-12-14    112.480003
2015-12-15    110.489998
2015-12-16    111.339996
2015-12-17    108.980003
2015-12-18    106.029999
Name: Adj Close

0.0723998822054

Date
2011-01-03    1.000000
2011-01-04    1.005219
2011-01-05    1.013442
2011-01-06    1.012622
2011-01-07    1.019874
2011-01-10    1.039081
2011-01-11    1.036623
2011-01-12    1.045059
2011-01-13    1.048882
2011-01-14    1.057378
2011-01-18    1.033620
2011-01-19    1.028128
2011-01-20    1.009437
2011-01-21    0.991352
2011-01-24    1.023910
...
2015-11-30    2.698560
2015-12-01    2.676661
2015-12-02    2.652481
2015-12-03    2.627845
2015-12-04    2.715212
2015-12-07    2.698103
2015-12-08    2.696963
2015-12-09    2.637426
2015-12-10    2.649972
2015-12-11    2.581767
2015-12-14    2.565799
2015-12-15    2.520404
2015-12-16    2.539794
2015-12-17    2.485960
2015-12-18    2.418667
Length: 1250

Date
2012-01-31    0.127111
2012-02-29    0.188311
2012-03-30    0.105283
2012-04-30   -0.025970
2012-05-31   -0.010702
2012-06-29    0.010853
2012-07-31    0.045822
2012-08-31    0.093877
2012-09-28    0.002796
2012-10-31   -0.107600
2012-11-30   -0.012375
2012-12-31   -0.090743
Freq: BM

2012-01    0.127111
2012-02    0.188311
2012-03    0.105283
2012-04   -0.025970
2012-05   -0.010702
2012-06    0.010853
2012-07    0.045822
2012-08    0.093877
2012-09    0.002796
2012-10   -0.107600
2012-11   -0.012375
2012-12   -0.090743
Freq: M
[Finished in 3.0s]
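Two of the APIs used above have since moved: pandas.io.data was split out into the separate pandas-datareader package (and the Yahoo endpoint it relied on has changed over the years), and resample(how=...) was replaced by method chaining. A rough modern equivalent, assuming pandas-datareader is installed and using its 'stooq' source, would be:

import pandas_datareader.data as web   # the pandas-datareader package, not pandas.io.data

price = web.DataReader('AAPL', 'stooq')['Close'].sort_index()
returns = price.pct_change()
ret_index = (1 + returns).cumprod()
ret_index.iloc[0] = 1

m_returns = ret_index.resample('BM').last().pct_change()
m_rets = (1 + returns).resample('M', kind='period').prod() - 1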
2. Group transforms and analysis
Chapter 9 covered the basics of group statistics and how to apply custom transformation functions to the groups of a dataset.
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
import time
from pandas.tseries.offsets import Hour, Minute, Day, MonthEnd
import pytz
import random; random.seed(0)
import string

# First generate 1000 random ticker symbols
N = 1000
def rands(n):
    choices = string.ascii_uppercase   # choices is 'ABCD...XYZ'
    return ''.join([random.choice(choices) for _ in xrange(n)])
tickers = np.array([rands(5) for _ in xrange(N)])
print tickers, '\n'

# Then create a DataFrame with three columns of made-up data,
# using only a subset of the tickers as the portfolio
M = 500
df = DataFrame({'Momentum': np.random.randn(M) / 200 + 0.03,
                'Value': np.random.randn(M) / 200 + 0.08,
                'ShortInterest': np.random.randn(M) / 200 - 0.02},
               index=tickers[:M])
print df, '\n'

# Next, create a random industry classification
ind_names = np.array(['FINANCIAL', 'TECH'])
sampler = np.random.randint(0, len(ind_names), N)
industries = Series(ind_names[sampler], index=tickers, name='industry')
print industries, '\n'

# Now we can group by industry and carry out group aggregations and transformations
by_industry = df.groupby(industries)
print by_industry.mean(), '\n'
print by_industry.describe(), '\n'

# Various transformations can be applied to these industry portfolios. One example is
# industry standardization (the zscore), widely used in equity portfolio construction
def zscore(group):
    return (group - group.mean()) / group.std()
df_stand = by_industry.apply(zscore)
# After standardization, each industry has mean 0 and standard deviation 1
print df_stand.groupby(industries).agg(['mean', 'std']), '\n'

# Built-in transformations such as rank are more concise
ind_rank = by_industry.rank(ascending=False)
print ind_rank.groupby(industries).agg(['min', 'max']), '\n'

# In quantitative equity analysis, "rank and standardize" is a common sequence of
# transforms; chaining rank and zscore gets the whole thing done
# (industry-wise rank, then standardize the ranks)
print by_industry.apply(lambda x: zscore(x.rank())).head()

>>>
['VTKGN' 'KUHMP' 'XNHTQ' 'GXZVX' 'ISXRM' 'CLPXZ' 'MWGUO' 'ASKVR' 'AMWGI'
 'WEOGZ' ..., 'FCKOG' 'ILSDP' 'PZPKM' 'PNRHG' 'ZYTZZ']

<class 'pandas.core.frame.DataFrame'>
Index: 500 entries, VTKGN to PTDQE
Data columns:
Momentum         500  non-null values
ShortInterest    500  non-null values
Value            500  non-null values
dtypes: float64(3)

VTKGN    FINANCIAL
KUHMP         TECH
XNHTQ         TECH
GXZVX    FINANCIAL
ISXRM         TECH
...
PZPKM         TECH
PNRHG         TECH
ZYTZZ    FINANCIAL
Name: industry, Length: 1000

           Momentum  ShortInterest     Value
industry
FINANCIAL  0.030406      -0.019782  0.079936
TECH       0.029859      -0.019990  0.079252

                   Momentum  ShortInterest       Value
industry
FINANCIAL count  249.000000     249.000000  249.000000
          mean     0.030406      -0.019782    0.079936
          std      0.005043       0.004875    0.004700
          min      0.018415      -0.032969    0.068910
          25%      0.026759      -0.023331    0.076816
          50%      0.029893      -0.019888    0.079846
          75%      0.033647      -0.016402    0.082657
          max      0.043659      -0.005031    0.093285
TECH      count  251.000000     251.000000  251.000000
          mean     0.029859      -0.019990    0.079252
          std      0.005144       0.004759    0.005039
          min      0.016133      -0.034532    0.063635
          25%      0.026706      -0.023397    0.076275
          50%      0.029469      -0.020236    0.079372
          75%      0.033749      -0.016660    0.083053
          max      0.042830      -0.006840    0.092007

               Momentum      ShortInterest              Value
                   mean  std          mean  std          mean  std
industry
FINANCIAL -1.110223e-15    1  1.213665e-15    1 -2.380960e-16    1
TECH      -3.696246e-15    1  1.084568e-15    1  1.438977e-14    1

          Momentum      ShortInterest      Value
               min  max           min  max   min  max
industry
FINANCIAL        1  249             1  249     1  249
TECH             1  251             1  251     1  251

       Momentum  ShortInterest     Value
VTKGN -1.332883       1.180157 -1.263462
KUHMP -0.964165      -0.096417 -0.771332
XNHTQ  1.349832      -0.909070 -0.165285
GXZVX -1.249578      -1.110736  1.055199
ISXRM  0.922844      -1.005487  1.101903
[Finished in 0.8s]
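A small aside: because zscore returns an object with the same shape as its input, the same industry standardization can also be expressed with groupby's transform method (a sketch, reusing the df, industries, and zscore defined above):

by_industry = df.groupby(industries)
# transform broadcasts the group-wise zscore back to the original row order
df_stand = by_industry.transform(zscore)
print df_stand.groupby(industries).agg(['mean', 'std'])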
- Group factor exposures
Factor analysis is a technique in quantitative portfolio management. Portfolio holdings and performance (profit and loss) are decomposed using one or more factors represented as a portfolio of weights (risk factors are one example). For example, a stock's co-movement with a benchmark such as the S&P 500 index is known as its beta, a common risk factor. Consider a contrived example of a portfolio constructed from three randomly generated factors (usually called the factor loadings) and some weights.
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
import time
from pandas.tseries.offsets import Hour, Minute, Day, MonthEnd
import pytz
import random; random.seed(0)
import string
from numpy.random import rand
import pandas.io.data as web

# First generate 1000 random ticker symbols, as before
N = 1000
def rands(n):
    choices = string.ascii_uppercase   # choices is 'ABCD...XYZ'
    return ''.join([random.choice(choices) for _ in xrange(n)])
tickers = np.array([rands(5) for _ in xrange(N)])

# Then create a DataFrame with three columns of made-up data,
# using only a subset of the tickers as the portfolio
M = 500
df = DataFrame({'Momentum': np.random.randn(M) / 200 + 0.03,
                'Value': np.random.randn(M) / 200 + 0.08,
                'ShortInterest': np.random.randn(M) / 200 - 0.02},
               index=tickers[:M])

# Next, create a random industry classification
ind_names = np.array(['FINANCIAL', 'TECH'])
sampler = np.random.randint(0, len(ind_names), N)
industries = Series(ind_names[sampler], index=tickers, name='industry')

fac1, fac2, fac3 = np.random.rand(3, 1000)
ticker_subset = tickers.take(np.random.permutation(N)[:1000])
print ticker_subset[:10], '\n'

# A weighted sum of the factors plus noise
port = Series(0.7 * fac1 - 1.2 * fac2 + 0.3 * fac3 + rand(1000),
              index=ticker_subset)
factors = DataFrame({'f1': fac1, 'f2': fac2, 'f3': fac3}, index=ticker_subset)
print factors.head(), '\n'

# Vector correlations between each factor and the portfolio may not say much
print factors.corrwith(port), '\n'
# The standard way to compute factor exposures is least-squares regression
print pd.ols(y=port, x=factors).beta

# Since not much random noise was added to the portfolio, the original factor weights
# are essentially recovered. The exposures by industry can be computed with groupby:
def beta_exposure(chunk, factors=None):
    return pd.ols(y=chunk, x=factors).beta

# Group by industry and apply the function
by_ind = port.groupby(industries)
exposures = by_ind.apply(beta_exposure, factors=factors)
print exposures.unstack()

>>>
['ECOBY' 'BIXUY' 'WICAF' 'UAJMX' 'VGCEQ' 'ECGSP' 'REJHO' 'KEYTR' 'LWELG'
 'UZLBJ']

             f1        f2        f3
ECOBY  0.851706  0.259984  0.097494
BIXUY  0.937227  0.743504  0.883864
WICAF  0.833994  0.429274  0.871291
UAJMX  0.598321  0.697040  0.631816
VGCEQ  0.157317  0.438006  0.410215

f1    0.426756
f2   -0.708818
f3    0.153762

f1           0.717723
f2          -1.261801
f3           0.307803
intercept    0.507541

                 f1        f2        f3  intercept
industry
FINANCIAL  0.702203 -1.264149  0.275483   0.538837
TECH       0.732947 -1.260257  0.342993   0.474543
[Finished in 0.9s]
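pd.ols was deprecated and removed in later pandas releases. The same least-squares exposures can be computed with statsmodels instead; a sketch under that assumption, reusing port, factors, and industries from above:

import statsmodels.api as sm   # assumes statsmodels is installed

def beta_exposure(chunk, factors=None):
    # regress the group's portfolio values on the factor columns plus an intercept
    X = sm.add_constant(factors.loc[chunk.index])
    return sm.OLS(chunk, X).fit().params

exposures = port.groupby(industries).apply(beta_exposure, factors=factors)
exposures.unstack()   # one row of exposures per industry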
- Decile and quartile analysis
Analysis based on sample quantiles is another important tool for financial analysts. For example, the performance of a stock portfolio could be broken down into quartiles based on each stock's price-to-earnings ratio.
# -*- coding: utf-8 -*-
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import datetime as dt
from pandas import Series, DataFrame
from datetime import datetime
from dateutil.parser import parse
import time
from pandas.tseries.offsets import Hour, Minute, Day, MonthEnd
import pytz
import random; random.seed(0)
import string
from numpy.random import rand
import pandas.io.data as web

# Quantile analysis is easy with pandas.qcut combined with groupby
data = web.get_data_yahoo('SPY', '2006-01-01', '2012-07-27')   # note the explicit end date
print data
# Next, compute daily returns and write a function that turns the returns
# into a trend signal
px = data['Adj Close']
returns = px.pct_change()

def to_index(rets):
    index = (1 + rets).cumprod()
    # locate the first valid value and anchor the index at 1 just before it
    first_loc = max(index.notnull().argmax() - 1, 0)
    index.values[first_loc] = 1
    return index

def trend_signal(rets, lookback, lag):
    signal = pd.rolling_sum(rets, lookback, min_periods=lookback - 5)
    return signal.shift(lag)

# With this function we can naively create and test a trading strategy that trades
# on the momentum signal every Friday
signal = trend_signal(returns, 100, 3)
print signal
trade_friday = signal.resample('W-FRI').resample('B', fill_method='ffill')
trade_rets = trade_friday.shift(1) * returns
# Then convert the strategy returns to a return index and plot it
to_index(trade_rets).plot()
plt.show()

# Suppose we want to break the strategy's performance down by the amount of market
# volatility over the holding period. Annualized standard deviation is a simple
# measure of volatility, and the Sharpe ratio lets us look at the risk-adjusted
# return in the different volatility regimes:
vol = pd.rolling_std(returns, 250, min_periods=200) * np.sqrt(250)

def sharpe(rets, ann=250):
    return rets.mean() / rets.std() * np.sqrt(ann)

# Now cut vol into quartiles with qcut and aggregate with sharpe:
print trade_rets.groupby(pd.qcut(vol, 4)).agg(sharpe)
# The result suggests that the strategy performed best when volatility was highest

>>>
<class 'pandas.core.frame.DataFrame'>
DatetimeIndex: 1655 entries, 2006-01-03 00:00:00 to 2012-07-27 00:00:00
Data columns:
Open         1655  non-null values
High         1655  non-null values
Low          1655  non-null values
Close        1655  non-null values
Volume       1655  non-null values
Adj Close    1655  non-null values
dtypes: float64(5), int64(1)

Date
2006-01-03         NaN
2006-01-04         NaN
2006-01-05         NaN
2006-01-06         NaN
2006-01-09         NaN
2006-01-10         NaN
2006-01-11         NaN
2006-01-12         NaN
2006-01-13         NaN
2006-01-17         NaN
2006-01-18         NaN
2006-01-19         NaN
2006-01-20         NaN
2006-01-23         NaN
2006-01-24         NaN
...
2012-07-09    0.028628
2012-07-10    0.031504
2012-07-11    0.014558
2012-07-12    0.014559
2012-07-13    0.010499
2012-07-16   -0.000425
2012-07-17   -0.007916
2012-07-18    0.008422
2012-07-19    0.009289
2012-07-20    0.011745
2012-07-23    0.016956
2012-07-24    0.017897
2012-07-25    0.005832
2012-07-26   -0.000354
2012-07-27   -0.014123
Length: 1655

[0.0954, 0.16]    0.491098
(0.16, 0.188]     0.425300
(0.188, 0.231]   -0.687110
(0.231, 0.457]    0.570821
[Finished in 6.1s]
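One more API note: pd.rolling_sum and pd.rolling_std were later removed in favour of the .rolling() method. With the same window settings as above, the equivalents are roughly:

# modern rolling API, same window lengths as in the code above
signal = returns.rolling(100, min_periods=95).sum().shift(3)
vol = returns.rolling(250, min_periods=200).std() * np.sqrt(250)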

