A question that came up in an interview:
Use awk to count how many times an IP address accessed the server within each minute, given log lines like this:
192.168.10.232 - - [23/Feb/2009:08:50:27 +0800] "GET /images/search_icon03.png HTTP/1.1" 200
After asking the experts on chinaunix, I got these answers (the first filters one specific IP; the second tallies every line by minute, regardless of IP):
awk -F'[[/:]' '/192.168.10.232/{a[$2"/"$3"/"$4":"$5":"$6]++}END{for (i in a) print i,a[i]}' urfile
awk -F'[[/:]' '{a[$2"/"$3"/"$4":"$5":"$6]++}END{for (i in a) print i,a[i]}' urfile
Another answer was also given for the average number of accesses per minute for that IP, obtained by dividing the day's total by 1440 (the number of minutes in a day):
awk '/192.168.10.232/ && /23\/Feb\/2009/' urfile |wc -l |awk '{print $1/1440}'
These commands are a practical way to count how often an IP address appears in a log file.
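A variant that counts per IP and per minute for every IP in a single pass (a sketch reusing the same field separator; urfile stands for the log file as above):
awk -F'[[/:]' '{
    split($1, h, " ")                                   # h[1] is the client IP; the rest of $1 is "- -"
    count[h[1] " " $2 "/" $3 "/" $4 ":" $5 ":" $6]++    # key = IP plus the timestamp truncated to the minute
}
END { for (k in count) print k, count[k] }' urfile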
=================================================================================================================================
Counting IP access counts with awk
There is a file with roughly two million records that I would like to summarize with awk in the shell. The file format is:
#keyword#URL#IP address#
test|123|1
test|123|1
test|123|2
test2|12|1
test2|123|1
test2|123|2
What I want to compute: for each keyword and URL, the total number of accesses and the number of distinct IPs, written out to a file.
In SQL this is simple: select keyword, url, count(1), count(distinct IP) group by keyword, url. But with this much data the report never finishes, so I would like to do it in the shell. I am not fluent in shell, though, and don't know a quick way to do it, especially the distinct part.
The ideal result would be:
#keyword#URL#distinct IPs#search count#
test 123 2 3
test2 123 1 2
test2 12 1 1
awk -F"|" '{a[$1" "$2]++;b[$1" "$2" "$3]++}(b[$1" "$2" "$3]==1){++c[$1" "$2]}END{ for (i in a) print i,c[i],a[i]}' file
test2 123 2 2
test2 12 1 1
test 123 2 3
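For readability, here is the same logic written out with comments (a sketch; file stands for the data file):
awk -F"|" '
{
    a[$1 " " $2]++               # total accesses per (keyword, URL)
    b[$1 " " $2 " " $3]++        # occurrences of each (keyword, URL, IP) triple
}
b[$1 " " $2 " " $3] == 1 {       # first time this IP appears for this (keyword, URL) ...
    c[$1 " " $2]++               # ... so it counts as one more distinct IP
}
END { for (i in a) print i, c[i], a[i] }
' file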
Counting per-hour, per-IP access counts in one day of Apache logs
The log format is as follows:
127.0.0.1 - - [03/Feb/2013:14:18:10 +0800] "GET /ucenterrvicecenter/SCenterRequest.php HTTP/1.0" 302 242
127.0.0.1 - - [03/Feb/2013:14:18:10 +0800] "GET /ucenterrvicecenter/SCenterRequest.php HTTP/1.0" 200 -
111.111.111.35 - - [03/Feb/2013:14:18:32 +0800] "GET /myadmin/ HTTP/1.1" 401 933
111.111.111.35 - root [03/Feb/2013:14:18:33 +0800] "GET /myadmin/ HTTP/1.1" 200 1826
111.111.111.35 - root [03/Feb/2013:14:18:34 +0800] "GET /myadmin/main.php?token=67b1c9d29f9ac9107627bb991c8d2ca6 HTTP/1.1" 200 7633
111.111.111.35 - - [03/Feb/2013:14:18:34 +0800] "GET /myadmin/css/print.css?token=67b1c9d29f9ac9107627bb991c8d2ca6 HTTP/1.1" 200 1063
111.111.111.35 - root [03/Feb/2013:14:18:34 +0800] "GET /myadmin/css/phpmyadmin.css.php?token=67b1c9d29f9ac9107627bb991c8d2ca6&js_frame=right&nocache=1359872314 HTTP/1.1" 200 20322
111.111.111.35 - root [03/Feb/2013:14:18:34 +0800] "GET /myadmin/navigation.php?token=67b1c9d29f9ac9107627bb991c8d2ca6 HTTP/1.1" 200 1362
111.111.111.35 - root [03/Feb/2013:14:18:36 +0800] "GET /myadmin/css/phpmyadmin.css.php?token=67b1c9d29f9ac9107627bb991c8d2ca6&js_frame=left&nocache=1359872314 HTTP/1.1" 200 3618
111.111.111.35 - root [03/Feb/2013:14:18:38 +0800] "GET /myadmin/navigation.php?server=1&db=ucenter&table=&lang=zh-utf-8&collation_connection=utf8_unicode_ci HTTP/1.1" 200 9631
The code:
[root@localhost sampdb]# awk -vFS="[:]" '{gsub("-.*","",$1);num[$2" "$1]++}END{for(i in num)print i,num[i]}' data1
14 127.0.0.1 2
14 111.111.111.35 8
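How it works: with FS set to ":", the first field is the IP followed by " - - [03/Feb/2013" and the second field is the hour; gsub("-.*","",$1) strips everything from the first "-" onward so only the IP remains, and the array key becomes "hour IP". An equivalent sketch that keeps the default whitespace splitting and takes the hour from the timestamp field instead:
awk '{
    split($4, t, ":")        # $4 is "[03/Feb/2013:14:18:10", so t[2] is the hour
    num[t[2] " " $1]++       # $1 is the client IP
}
END { for (i in num) print i, num[i] }' data1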
Counting accesses from the same IP in a log with awk
Given a log file, count the number of accesses made by each IP.
180.153.114.199 - - [03/Jul/2013:14:44:43 +0800] GET /wp-login.php?redirect_to=http%3A%2F%2Fdemo.catjia.com%2Fwp-admin%2Fplugin-install.php%3Ftab%3Dsearch%26s%3DVasiliki%26plugin-search-input%3D%25E6%2590%259C%25E7%25B4%25A2%25E6%258F%2592%25E4%25BB%25B6&reauth=1 HTTP/1.1 200 2355 - Mozilla/4.0 -
101.226.33.200 - - [03/Jul/2013:14:45:52 +0800] GET /wp-admin/plugin-install.php?tab=search&type=term&s=Photogram&plugin-search-input=%E6%90%9C%E7%B4%A2%E6%8F%92%E4%BB%B6 HTTP/1.1 302 0 - Mozilla/4.0 -
101.226.33.200 - - [03/Jul/2013:14:45:52 +0800] GET /wp-login.php?redirect_to=http%3A%2F%2Fdemo.catjia.com%2Fwp-admin%2Fplugin-install.php%3Ftab%3Dsearch%26type%3Dterm%26s%3DPhotogram%26plugin-search-input%3D%25E6%2590%259C%25E7%25B4%25A2%25E6%258F%2592%25E4%25BB%25B6&reauth=1 HTTP/1.1 200 2370 - Mozilla/4.0 -
113.110.176.131 - - [03/Jul/2013:15:03:57 +0800] GET /wp-content/themes/catjia-lio/images/menu_hover_bg.png HTTP/1.1 304 0 http://demo.catjia.com/wp-content/themes/catjia-lio/style.css Mozilla/5.0 (Windows NT 6.2; WOW64; rv:21.0) Gecko/20100101 Firefox/21.0 -
180.153.205.103 - - [03/Jul/2013:15:13:59 +0800] GET /wp-admin/options-general.php HTTP/1.1 302 0 - Mozilla/4.0 -
180.153.205.103 - - [03/Jul/2013:15:13:59 +0800] GET /wp-login.php?redirect_to=http%3A%2F%2Fdemo.catjia.com%2Fwp-admin%2Foptions-general.php&reauth=1 HTTP/1.1 200 2269 - Mozilla/4.0 -
101.226.51.227 - - [03/Jul/2013:15:14:07 +0800] GET /wp-admin/options-general.php?settings-updated=true HTTP/1.1 302 0 - Mozilla/4.0 -
101.226.51.227 - - [03/Jul/2013:15:14:07 +0800] GET /wp-login.php?redirect_to=http%3A%2F%2Fdemo.catjia.com%2Fwp-admin%2Foptions-general.php%3Fsettings-updated%3Dtrue&reauth=1 HTTP/1.1 200 2291 - Mozilla/4.0 -
At first glance the log records an awful lot; where do you even start?
Many people know that awk can pull out the first column, i.e. the IP address.
But once it is extracted, how do you count how many times each IP appears?
It looks complicated, but it becomes simple once you have used awk a few times.
# awk '{a[$1]+=1;}END{for(i in a){print a[i]" " i;}}' demo.catjia.com_access.log
2 180.153.206.26
120 113.110.176.131
2 101.226.33.200
2 101.226.66.175
2 112.65.193.16
2 101.226.51.227
2 112.64.235.86
2 101.226.33.223
1 101.227.252.23
2 180.153.205.103
2 101.226.33.216
2 112.64.235.89
4 180.153.114.199
2 112.64.235.254
2 180.153.206.34
To keep the result, redirect the output to a file.
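For instance, a sketch that writes the same per-IP counts to a file (the name ip_count.txt is arbitrary):
# awk '{a[$1]+=1;}END{for(i in a){print a[i]" " i;}}' demo.catjia.com_access.log > ip_count.txt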
Now we have the number of hits for each IP, but with a lot of data the output is still hard to read. For instance, which IP made the most requests?
Add a sort:
# awk '{a[$1]+=1;}END{for(i in a){print a[i]" " i;}}' demo.catjia.com_access.log |sort
1 101.227.252.23
120 113.110.176.131
2 101.226.33.200
2 101.226.33.216
2 101.226.33.223
2 101.226.51.227
2 101.226.66.175
2 112.64.235.254
2 112.64.235.86
2 112.64.235.89
2 112.65.193.16
2 180.153.205.103
2 180.153.206.26
2 180.153.206.34
4 180.153.114.199
At first glance this looks sorted, but the IP with 120 hits is in second place; shouldn't it be last?
The fix is to add the -g option; otherwise sort compares the numbers as strings, character by character, which gives the result above.
Here is the result with -g added.
sort -n works as well:
# awk '{a[$1]+=1;}END{for(i in a){print a[i]" " i;}}' demo.catjia.com_access.log |sort -g
1 101.227.252.23
2 101.226.33.200
2 101.226.33.216
2 101.226.33.223
2 101.226.51.227
2 101.226.66.175
2 112.64.235.254
2 112.64.235.86
2 112.64.235.89
2 112.65.193.16
2 180.153.205.103
2 180.153.206.26
2 180.153.206.34
4 180.153.114.199
120 113.110.176.131
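To see only the busiest IPs, sort numerically in reverse and take the first few lines (a sketch; the 5 here is arbitrary):
# awk '{a[$1]+=1;}END{for(i in a){print a[i]" " i;}}' demo.catjia.com_access.log | sort -rn | head -5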
http://www.111cn.net/sys/linux/83575.htm
=============================================================================================================================
Processing log text with the Linux awk command and a global variable
When processing logs with awk, you sometimes need to extract text based on the relationship between neighbouring lines. awk and grep both work line by line, but here the processing has to span lines; more abstractly, what is needed is a global variable that is available while every line is being processed and is never reset.
Basic awk usage
awk 'pattern { action }' filenames
However complex the program gets, the syntax keeps this shape: pattern is what awk looks for in the data, and action is the series of commands run when a match is found. The braces do not have to appear around every part of the program, but they group the statements that belong to a particular pattern. A pattern is usually a regular expression enclosed in slashes.
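For example, a minimal sketch (the pattern /error/ and the file name access.log are just placeholders):
awk '/error/ { print $1 }' access.log    # print field 1 of every line containing "error"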
The global-variable approach looks like this:
cat access.log |awk 'BEGIN{devName="";}...' |
The ellipsis stands for the awk processing code; devName is initialized once in the BEGIN block and keeps its value while every input line is processed.
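A minimal sketch of the idea: a variable initialized in BEGIN (or set while processing one line) is still available when later lines are processed. The marker strings START-MARK and END-MARK below are made up for illustration:
awk 'BEGIN { inside = 0 }                 # global state, initialized once
     /START-MARK/ { inside = 1; next }    # turn the flag on
     /END-MARK/   { inside = 0; next }    # turn it off again
     inside == 1  { print }               # print the lines in between
' access.log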
For example:
cat tmp1.log | awk 'BEGIN{start=0;tmp=0}{if ($12 == "Traceback"){split($0,b,"SCRIPT ");print b[2];start=1;tmp=$11;}else {if (start ==1) {if ($11 == tmp) {split($0,b,"SCRIPT ");print b[2];start=1;}else {start=0;print " "}}}}' |
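Written out with comments, the same program reads as follows (from the code, field 12 carries the word "Traceback" and field 11 appears to be an identifier shared by all lines of one traceback):
awk 'BEGIN { start = 0; tmp = 0 }
{
    if ($12 == "Traceback") {
        split($0, b, "SCRIPT "); print b[2]       # a traceback starts: print the part after "SCRIPT "
        start = 1; tmp = $11                      # remember the identifier in field 11
    } else if (start == 1) {
        if ($11 == tmp) {
            split($0, b, "SCRIPT "); print b[2]   # same identifier: still the same traceback
        } else {
            start = 0; print " "                  # identifier changed: traceback is over, print a separator
        }
    }
}' tmp1.log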
=================================================================================================================================
The text looks like this:
sshd: refused connect from 2d.44.2d.static.xlhost.com (::ffff:173.45.68.45)
sshd(pam_unix)[30680]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=173.45.84.35 user=root
sshd: refused connect from ::ffff:118.102.25.161 (::ffff:118.102.25.161)
sshd: refused connect from 97.6d.7d.seuvenc.luesly.com (::ffff:173.45.91.151)
sshd: refused connect from lucas-98-162-44-80.dc.dc.cox.net (::ffff:98.162.44.80)
sshd(pam_unix)[29765]: check pass; user unknown
sshd(pam_unix)[29765]: authentication failure; logname= uid=0 euid=0 tty=ssh ruser= rhost=219.84.193.41
This is text that has already been filtered. How do I use awk to extract the IP addresses from it?
grep -oE '[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}\.[0-9]{1,3}' urfile
awk '{if($0~/refused/){split($5,a,"ffff:");print substr(a[2],1,length(a[2])-1)}else {split($9,b,"=");print b[2]}}' urfile
awk -F"::ffff:|rhost=" '{print $NF}' urfile |awk -F" |)" '/^[0-9]/{print $1}'