Graylog Installation and Configuration


ES cluster health check: curl -sXGET http://localhost:9200/_cluster/health?pretty=true | grep "status" | awk -F '[ "]+' '{print $4}' | grep -c "green"

ES node health check: curl -sXGET localhost:9200/_cat/health | awk -F ' ' '{print $4}'

                              -s / --silent: silent mode, i.e. do not show errors or the progress meter
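For use in a monitoring script, the status field can be pulled out without a JSON parser. The sketch below hard-codes a sample _cluster/health response so the parsing can be shown offline; in practice $resp would come from the curl call above:

```shell
#!/bin/sh
# Sample _cluster/health response (assumed shape; normally fetched with
#   resp=$(curl -sXGET http://localhost:9200/_cluster/health) ).
resp='{"cluster_name":"graylog2","status":"green","number_of_nodes":2,"active_shards":4}'

# Extract the value of the "status" field.
status=$(printf '%s' "$resp" | grep -o '"status":"[a-z]*"' | cut -d'"' -f4)

echo "$status"

# Exit non-zero unless the cluster is green, so the script can feed a monitor.
[ "$status" = "green" ]
```

The exit code makes the script directly usable as a Nagios/Zabbix style check.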

-------------------------------------------------------------------------------------- 

https://github.com/mobz/elasticsearch-head

https://github.com/lmenezes/elasticsearch-kopf

--------------------------------------------------------------------------------------

Note: recent Graylog versions require JDK 1.8 or newer.

 

Resetting the index data of an ES cluster:

 

 

--------------------------------------------------------------------------------------------

ELASTICSEARCH Service Monitoring

The Elasticsearch service exposes rich monitoring information through its own API; you can call the endpoints directly to fetch monitoring data:

  • Cluster health:       http://<any node private IP>:9200/_cluster/health?pretty=true
  • Per-node statistics:  http://<any node private IP>:9200/_nodes/stats?pretty=true
  • Index statistics:     http://<any node private IP>:9200/_stats?pretty=true

In addition, this Elasticsearch service has the kopf plugin built in (a web management UI for Elasticsearch); more detailed statistics can be viewed in a browser at http://<any node private IP>:9200/_plugin/kopf.

--------------------------------------------------------------------------

Adjusting the memory (heap) size for Graylog and ES:

graylog: vim /etc/sysconfig/graylog-server  GRAYLOG_SERVER_JAVA_OPTS=""

es: vim /etc/sysconfig/elasticsearch   ES_HEAP_SIZE=3g
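As a sketch of what those two settings might look like once filled in (the sizes below are placeholder values, not recommendations; a common rule of thumb is to keep the heap at no more than half of physical RAM):

```shell
# /etc/sysconfig/graylog-server  (placeholder 1g heap)
GRAYLOG_SERVER_JAVA_OPTS="-Xms1g -Xmx1g"

# /etc/sysconfig/elasticsearch  (ES 2.x style heap setting, as above)
ES_HEAP_SIZE=3g
```

Setting -Xms equal to -Xmx avoids heap resizing pauses at runtime.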

---------------------------------------------------------------------------

shards: index shards. ES can split a complete index into multiple shards. The benefit is that one large index can be broken into several pieces and distributed across different nodes, forming a distributed search. The number of shards can only be specified before the index is created, and it cannot be changed once the index exists.

replicas: index replicas. ES can keep multiple replicas of an index. Replicas serve two purposes: first, fault tolerance (if a shard on some node is damaged or lost, it can be recovered from a replica); second, query throughput (ES automatically load-balances search requests across the copies).
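Because the shard count is locked in at creation time, it has to be passed in the index settings up front. The sketch below only builds and prints the settings body; the matching curl calls are shown as comments, and the index name myindex and the host are illustrative:

```shell
#!/bin/sh
# Settings for a new index: 2 primary shards, 1 replica each
# (values mirror elasticsearch_shards / elasticsearch_replicas used later).
body='{"settings":{"number_of_shards":2,"number_of_replicas":1}}'

# Create the index (the shard count is now fixed for its lifetime):
#   curl -XPUT "http://localhost:9200/myindex/" -d "$body"
# Replicas, unlike shards, can still be changed afterwards:
#   curl -XPUT "http://localhost:9200/myindex/_settings" -d '{"number_of_replicas":2}'
printf '%s\n' "$body"
```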

When setting up an ES cluster, the only change needed in the Graylog config file is:

elasticsearch_discovery_zen_ping_unicast_hosts = 192.168.0.200:9300, 192.168.0.201:9300

-----------------------------------------------------------------------------------------------------

Elasticsearch index data can grow large and take up a lot of space. To clean it up (by default the data lives under /var/lib/elasticsearch/graylog2/nodes/0/indices):

# curl -XDELETE 'http://localhost:9200/index_name/'

For example: # curl -XDELETE 'http://192.168.0.200:9200/graylog_0/'
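Before deleting anything it helps to see what is there. The sketch below filters a sample _cat/indices listing (hard-coded here; in practice it would come from curl -sXGET localhost:9200/_cat/indices) for indices with the graylog_ prefix and prints the matching DELETE commands:

```shell
#!/bin/sh
# Sample output in the shape of `curl -sXGET localhost:9200/_cat/indices`;
# the 3rd column is the index name.
listing='green open graylog_0 4 1 1200345 0 2.1gb 1.1gb
green open graylog_1 4 1  900123 0 1.6gb 0.8gb
green open .kibana   1 1      12 0  40kb  20kb'

# Emit one DELETE command per index whose name starts with graylog_.
printf '%s\n' "$listing" | awk '$3 ~ /^graylog_/ {
    print "curl -XDELETE \"http://localhost:9200/" $3 "/\""
}'
```

Pipe the output into sh only after reviewing it; deletes are irreversible.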

 

Newer Graylog releases have merged the two components, graylog-server and graylog-web, into a single one, collectively called graylog-server.

# Email transport

transport_email_enabled = true
transport_email_hostname = smtp.tech.com
transport_email_port = 465
transport_email_use_auth = true
transport_email_use_tls = true
transport_email_use_ssl = true
transport_email_auth_username = wjoyxt@tech.com   (may also need to be just wjoyxt)
transport_email_auth_password = wjoyxt666
transport_email_subject_prefix = [graylog2]
transport_email_from_email = wjoyxt@tech.com   (keep this identical to the Sender address configured in the alert)

transport_email_web_interface_url = http://<public IP or domain>:9000

Using a public IP or domain here means the summary link in the alert email (which lists only the log lines containing the alert keywords) can be opened directly without a VPN; you can of course also connect through the VPN and use an internal IP instead.

-------------------------------------------------------------------------------------------

When using an HTTP Alarm Callback, you can call an SMS gateway directly to handle alert delivery.

 

--------------------------------------------------------------------------------------------

Graylog is an open-source log collector consisting of two main parts, the server and the web interface, both written in Java. The server part can ingest syslog over TCP/UDP, and it also has its own format, GELF (the Graylog Extended Log Format). Storage behind it is handled by MongoDB, while search is provided by Elasticsearch. The web interface is also written in Java (early versions were apparently Ruby on Rails) and its main job is to provide a polished search and analysis UI.

Elasticsearch is a search server based on Lucene. It provides a distributed, multi-tenant full-text search engine behind a RESTful web interface. Elasticsearch is developed in Java, released as open source under the Apache License, and is the second most popular enterprise search engine. Designed for cloud environments, it achieves real-time search and is stable, reliable, fast, and easy to install and use.

The full installation guide is in the official manual: http://docs.graylog.org   (the Youdao web-page translator can serve as a reading aid: http://fanyi.youdao.com/web2/)

Port summary:

Elasticsearch uses TCP port 9300 for node-to-node transport by default; the HTTP API listens on 9200.

Graylog-server: 9000/api     Graylog-web: 9000

I. Graylog architecture

Minimal single-node layout for a test environment:

 

Production cluster layout:

A load-balancing tier sits in front of everything; the graylog-server master/slave roles are set in each node's config file and then recorded and distinguished in the MongoDB cluster.

II. Install and configure Elasticsearch first   (MongoDB installation is omitted here; pay attention to mongodb's bind_ip setting, and note that Graylog creates the corresponding database and user automatically)

1. Download and install Elasticsearch from the official site

# wget https://download.elasticsearch.org/elasticsearch/release/org/elasticsearch/distribution/rpm/elasticsearch/2.0.0/elasticsearch-2.0.0.rpm

# yum install elasticsearch-2.0.0.rpm -y

Or install it from the official yum repository:

rpm --import https://packages.elastic.co/GPG-KEY-elasticsearch

Add the following in your /etc/yum.repos.d/ directory in a file with a .repo suffix, for example elasticsearch.repo (note: the version has only two components, e.g. 2.x or 1.x):

[elasticsearch-2.x]
name=Elasticsearch repository for 2.x packages
baseurl=http://packages.elastic.co/elasticsearch/2.x/centos
gpgcheck=1
gpgkey=http://packages.elastic.co/GPG-KEY-elasticsearch
enabled=1

 

yum install elasticsearch

2. vim /etc/elasticsearch/elasticsearch.yml


network.bind_host: 10.1.1.33   # bind the server's real IP address
cluster.name: graylog2         # must match elasticsearch_cluster_name in /etc/graylog/server/server.conf
node.name: "node-xx"
discovery.zen.ping.multicast.enabled: false
discovery.zen.ping.unicast.hosts: ["es-node-1.example.org:9300", "es-node-2.example.org:9300"]
script.disable_dynamic: true   # disable dynamic scripting for safety, preventing possible remote code execution

3. Operating Elasticsearch with curl

curl -i -XGET 'localhost:9200/'             # verify the installation is working

curl -XGET localhost:9200/_cat/nodes  # list the currently live Elasticsearch nodes

curl -XGET localhost:9200/_cat/master

curl -XGET localhost:9200/_cat/health

The health check reports green, yellow, or red. Green means everything is normal and the cluster is fully functional; yellow means all data is available but some replicas have not been allocated; red means some data is unavailable for some reason.

Note that if several Elasticsearch instances are installed on one machine, the service ports auto-increment: the second instance gets 9201 and 9301.

More curl recipes: http://blog.csdn.net/iloveyin/article/details/48312767

 

III. Install graylog-server and graylog-web   (when installing the JDK from an rpm, keep the default layout so that /usr/bin/java exists)

$ sudo rpm -Uvh https://packages.graylog2.org/repo/packages/graylog-1.2-repository-el6_latest.rpm

$ sudo yum install graylog-server graylog-web

$vim /etc/graylog/server/server.conf

[root@syslog ~]# cat /etc/graylog/server/server.conf |grep -v grep|grep -v ^#|grep -v ^$
is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = qdp0CjlUtrQUlqtgvjSiG0tI3aA4jX7wYGlR10FD8mmkm8WLQ1j0UnaTL3nCocYu7lFB7zRa6GdEe8x5ZVHBemzwXLJufOMO
root_password_sha2 = 4bbdd5a829dba09d7a7ff4c1367be7d36a017b4267d728d31bd264f63debeaa6
root_email = "wjoyxt@126.com"
root_timezone = +08:00
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://10.1.1.43:12900/
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 2
elasticsearch_replicas = 1
elasticsearch_index_prefix = graylog2
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_discovery_zen_ping_multicast_enabled = false
elasticsearch_discovery_zen_ping_unicast_hosts = 10.1.1.43:9300,10.1.1.33:9300
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
dead_letters_enabled = false
lb_recognition_period_seconds = 3
mongodb_useauth = false
mongodb_uri = mongodb://localhost/graylog2
mongodb_max_connections = 100
mongodb_threads_allowed_to_block_multiplier = 5
transport_email_enabled = true
transport_email_hostname = 127.0.0.1
transport_email_port = 25
transport_email_use_auth = false
transport_email_use_tls = false
transport_email_use_ssl = false
transport_email_auth_username = 
transport_email_auth_password = 
transport_email_subject_prefix = 
transport_email_from_email = 
transport_email_web_interface_url = http://10.1.1.43:9000

 

password_secret = qdp0CjlUtrQUlqtgvjSiG0tI3aA4jX7wYGlR10FD8mmkm8WLQ1j0UnaTL3nCocYu7lFB7zRa6GdEe8x5ZVHBemzwXLJufOMO
# the line above must match application.secret in /etc/graylog/web/web.conf
root_password_sha2 = 0f3bb51e4955b2872351934b82c293db8cc1770b96bf9047b184a26ae25bcb5c  # this is the login password for the graylog-web UI on port 9000; here it is set to wjoyxt
# if the shasum command is missing, install it with: yum install perl-Digest-SHA
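root_password_sha2 is simply the SHA-256 hex digest of the plaintext password. A sketch of generating it (the password shown is a throwaway placeholder; shasum -a 256 works the same where sha256sum is unavailable):

```shell
#!/bin/sh
# Hash a plaintext admin password for root_password_sha2.
# printf '%s' avoids hashing a trailing newline, which would change the digest.
pass='changeme'   # example only; substitute your real password
hash=$(printf '%s' "$pass" | sha256sum | awk '{print $1}')
echo "root_password_sha2 = $hash"
```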
elasticsearch_cluster_name = graylog2  # define a cluster name
elasticsearch_discovery_zen_ping_multicast_enabled = false  # disable multicast
elasticsearch_discovery_zen_ping_unicast_hosts = es-node-1.example.org:9300,es-node-2.example.org:9300  # list of Elasticsearch nodes to connect to
mongodb_useauth = false  # defaults to false, so this can be left out; when false, the related mongodb_user and mongodb_password settings are not needed


# Email transport: alert email settings
transport_email_use_auth = false
transport_email_use_tls = false
transport_email_use_ssl = false
transport_email_auth_username =
transport_email_auth_password =
transport_email_subject_prefix =
transport_email_from_email =
transport_email_web_interface_url = http://<public IP>:9000  # point this at a domain or IP where graylog-web is reachable from outside

 

$vim /etc/graylog/web/web.conf

graylog2-server.uris="http://0.0.0.0:12900" 
application.secret="qdp0CjlUtrQUlqtgvjSiG0tI3aA4jX7wYGlR10FD8mmkm8WLQ1j0UnaTL3nCocYu7lFB7zRa6GdEe8x5ZVHBemzwXLJufOMO"
timezone="Asia/Shanghai"

IV. Client configuration: log collection with graylog-collector   (documentation: http://docs.graylog.org/en/1.2/pages/collector.html)

The Graylog Collector is a lightweight Java application that lets you forward data from log files to a Graylog cluster.

Prerequisite: You need to have Java >= 7 installed to run the collector.

Officially only a RHEL7 repo is provided at present; for CentOS 6, download the zip package from:

https://github.com/Graylog2/collector#binary-download  (this link also includes a sample graylog-collector configuration for reference)

This is a minimal configuration that collects logs from the /var/log/syslog file and sends them to a Graylog server:

server-url = "http://10.0.0.1:9000/api"  (newer versions)

inputs {
  syslog {
    type = "file"
    path = "/var/log/syslog"
  }
}

outputs {
  graylog-server {
    type = "gelf"
    host = "10.0.0.1"
    port = 12201   # this matches the log-ingest port of the Input created on the graylog-server
  }
}

Here is another example that uses glob patterns and routes different inputs to separate outputs:

server-url = "http://10.1.1.43:12900"
collector-id = "file:/usr/local/graylog-collector/config/collector-id"
inputs {
  ylygw-servlet {
    type = "file"
    path-glob-root = "/home/admin/logs"
    path-glob-pattern = "*/*.log"
    outputs = "ylygw-tcp"
  }
  ylygw-out {
    type = "file"
    path = "/yly/tomcat7.yly9090/logs/catalina.out"
    outputs = "ylygw-tcp"
  }
 
  nginx-access {
    type = "file"
    path = "/usr/local/nginx/logs/shop_acc.log"
    outputs = "nginx-access"
  }
  
  nginx-error {
    type = "file"
    path = "/usr/local/nginx/logs/shop_error.log"
    outputs = "nginx-error"
  }
}
outputs {
  ylygw-tcp {
    type = "gelf"
    host = "10.1.1.43"
    port = 12205
    client-queue-size = 512
    client-connect-timeout = 5000
    client-reconnect-delay = 1000
    client-tcp-no-delay = true
    client-send-buffer-size = 32768
  }
  nginx-access {
    type = "gelf"
    host = "10.1.1.43"
    port = 12201
    client-queue-size = 512
    client-connect-timeout = 5000
    client-reconnect-delay = 1000
    client-tcp-no-delay = true
    client-send-buffer-size = 32768
  }
  
  nginx-error {
    type = "gelf"
    host = "10.1.1.43"
    port = 12202
    client-queue-size = 512
    client-connect-timeout = 5000
    client-reconnect-delay = 1000
    client-tcp-no-delay = true
    client-send-buffer-size = 32768
  }
    
}

 

Starting graylog-collector:


  $ cd graylog-collector-0.4.1/
  $ bin/graylog-collector run -f collector.conf

 

V. Streams and alerting

A Graylog stream is a mechanism for routing messages into categories and processing them in real time. You can define custom rules that tell Graylog which messages to route into which stream.

First, define a custom field to tag a class of messages: open the Graylog web UI, go to System -> Inputs, and on an existing Input choose More actions -> Add static fields, then enter the field name and the tag value. Afterwards, on the Search page the new field appears under Fields in the lower left; once it is checked, every message that arrives after the field was created carries the tag.

 

After that you can create the stream and its matching rules; in most cases the alert condition used is a Message count condition.

 

Sender

graylog@wjoyxt.org

E-Mail Subject

${stream.title}: Log alert from the Graylog logging system

E-Mail Body(optional)

##########
Alert description: ${check_result.resultDescription}
Date: ${check_result.triggeredAt}
Stream ID: ${stream.id}
Stream title: ${stream.title}
Stream description: ${stream.description}

${if stream_url}Stream URL: ${stream_url}${end}

##########
${if backlog}Last messages accounting for this alert:
${foreach backlog message}${message}

${end}${else}The link above is only reachable from the Beijing office network
The data above is provided by the Graylog logging system in the XX machine room
${end}

 

##########

Alert title description:       ${stream.description}

Alert trigger description: ${check_result.resultDescription}

${if stream_url}Search link for the alerting log messages: ${stream_url}${end}

##########
${if backlog}Last messages accounting for this alert:
${foreach backlog message}${message}

${end}${else}
${end}

 

 

 

 

Configuring rsyslog to send logs to a remote host:

vim /etc/rsyslog.conf

*.* @10.1.1.33:5140

@@ forwards over TCP; a single @ uses UDP.
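As a fragment, the TCP variant of the same forwarding rule (same target as above; restart rsyslog after editing, e.g. service rsyslog restart):

```shell
# /etc/rsyslog.conf: forward all facilities/priorities over TCP
*.* @@10.1.1.33:5140
```

TCP avoids the silent message loss that UDP forwarding can suffer under load.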

 

FAQ:

If the times shown on the graphs after logging into the web UI are wrong: graylog-server uses the UTC timezone by default. The only workaround at present is to select TimeZone: Shanghai when creating a new user, and then use that user instead of admin.

graylog.conf

is_master = true
node_id_file = /etc/graylog/server/node-id
password_secret = iLxgl1vsC6iA4MJXJbMQ5mAKAPh5qIoCEdHeQdEVQFJ8wZz8XRznS7CVgSTGgS2nc0qwPr65gxop3GcfvajprKa3zqs44Hc8
root_password_sha2 = 72d7c50d4e1e267df628ec2ee9eabee0f31cb17a29f3eb41e0a04ede5134c37f
root_email = "wjoyxt@wjoyxt.com"
root_timezone = +08:00
plugin_dir = /usr/share/graylog-server/plugin
rest_listen_uri = http://0.0.0.0:9000/api/
web_listen_uri = http://0.0.0.0:9000/
rotation_strategy = count
elasticsearch_max_docs_per_index = 20000000
elasticsearch_max_number_of_indices = 20
retention_strategy = delete
elasticsearch_shards = 3
elasticsearch_replicas = 0
elasticsearch_index_prefix = graylog
allow_leading_wildcard_searches = false
allow_highlighting = false
elasticsearch_cluster_name = es-0nqimfw0
elasticsearch_hosts = http://192.168.0.250:9200, http://192.168.0.251:9200, http://192.168.0.252:9200
elasticsearch_network_host = 192.168.0.200
elasticsearch_network_bind_host = 192.168.0.200
elasticsearch_analyzer = standard
output_batch_size = 500
output_flush_interval = 1
output_fault_count_threshold = 5
output_fault_penalty_seconds = 30
processbuffer_processors = 5
outputbuffer_processors = 3
processor_wait_strategy = blocking
ring_size = 65536
inputbuffer_ring_size = 65536
inputbuffer_processors = 2
inputbuffer_wait_strategy = blocking
message_journal_enabled = true
message_journal_dir = /var/lib/graylog-server/journal
lb_recognition_period_seconds = 3
mongodb_uri = mongodb://localhost/graylog
mongodb_max_connections = 1000
mongodb_threads_allowed_to_block_multiplier = 5
transport_email_enabled = true
transport_email_hostname = smtp.126.com
transport_email_port = 465
transport_email_use_auth = true
transport_email_use_tls = true
transport_email_use_ssl = true
transport_email_auth_username = wjoyxt
transport_email_auth_password = wjoyxt888
transport_email_subject_prefix = [graylog2]
transport_email_from_email = wjoyxt@wjoyxt.com
transport_email_web_interface_url = http://graylog.wjoyxt.com:9000
content_packs_dir = /usr/share/graylog-server/contentpacks
content_packs_auto_load = grok-patterns.json
proxied_requests_thread_pool_size = 32
 

 

