ELK: the logstash-output-jdbc output plugin


Requirement:
Store the IIS logs shipped by Filebeat in both Elasticsearch and an Oracle database.
 
Steps:
1. Prepare the Logstash environment (the tar.gz distribution is already configured)
2. Install the logstash-output-jdbc plugin, either online or from an offline package
3. Install the Oracle JDBC driver
4. Configure Logstash
5. Start Logstash and test
 
【Prepare the Logstash environment】
Logstash handles collection and filtering. Version 6.6.0: https://www.elastic.co/cn/downloads/past-releases/logstash-6-6-0 (supports Java 8 only)
/home/elastic/.bash_profile
# .bash_profile
 
ELASTIC_HOME=/u01/elasticsearch-6.6.0; export ELASTIC_HOME
KIBANA_HOME=/u01/kibana-6.6.0; export KIBANA_HOME
LOGSTASH_HOME=/u01/logstash-6.6.0; export LOGSTASH_HOME
JAVA_HOME=/usr/local/jdk1.8.0_202; export JAVA_HOME
 
CLASSPATH=$JAVA_HOME/lib:$JAVA_HOME/jre:$JAVA_HOME/jlib; export CLASSPATH
 
PATH=$ELASTIC_HOME/bin:$KIBANA_HOME/bin:$LOGSTASH_HOME/bin:$JAVA_HOME/bin:$PATH:$HOME/bin
 
 
export PATH
 
Install the Ruby-related components.
【Install the logstash-output-jdbc plugin】
1. If the host running Logstash has internet access, install the JDBC plugin directly:
logstash-plugin install logstash-output-jdbc
 
2. Build an offline package on a host where the plugin is already installed

# If you are behind a proxy, set the proxy variables first; prepare-offline-pack connects to the internet
export http_proxy=http://x.x.x.x:xx/
export https_proxy=http://x.x.x.x:xx/

# Package the plugin
[elastic@t-12c-01 logstash-6.6.0]$ logstash-plugin prepare-offline-pack --overwrite --output logstash-output-jdbc.zip logstash-output-jdbc
WARNING: A maven settings file already exist at /home/elastic/.m2/settings.xml, please review the content to make sure it include your proxies configuration.
Offline package created at: logstash-output-jdbc.zip
You can install it with this command `bin/logstash-plugin install file:///u01/logstash-6.6.0/logstash-output-jdbc.zip`
 
# Extra: inspect the packaged plugin archive

The archive is really just the file ./vendor/bundle/jruby/2.3.0/cache/logstash-output-jdbc-5.4.0.gem plus one dependency gem, logstash-codec-plain-3.0.6.gem.
 
# After downloading these two gems, you can build the offline install package by hand

# Create the directory layout: mkdir -p logstash/dependencies

Put logstash-output-jdbc-5.4.0.gem in logstash/
Put logstash-codec-plain-3.0.6.gem in logstash/dependencies/
 
[elastic@t-12c-01 soft]$ find logstash
logstash
logstash/dependencies
logstash/dependencies/logstash-codec-plain-3.0.6.gem
logstash/logstash-output-jdbc-5.4.0.gem
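The layout above can be recreated with a few commands. A minimal sketch, where the touch lines are placeholders for the two gem files downloaded earlier (use cp with the real files; the version numbers are the ones from this walkthrough):

```shell
# Recreate the offline-pack directory layout by hand.
# The touch commands stand in for the two downloaded gems;
# in practice, cp the real .gem files into these locations.
mkdir -p logstash/dependencies
touch logstash/logstash-output-jdbc-5.4.0.gem
touch logstash/dependencies/logstash-codec-plain-3.0.6.gem
find logstash
```

Zipping this folder with zip -q -r then yields the installable offline package.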
 
# Zip the logstash folder. In testing, zip needed -q -r for the later install to succeed; without them it fails with "reason: The pack must contains at least one plugin" (the subdirectories have to be included in the archive)
 
zip -q -r logstash-output-jdbc.zip logstash
 
# Extra: zip options
Common Linux zip options:
       -a  convert files to ASCII format
       -F  attempt to fix a damaged archive
       -h  show help
       -m  delete the source files after compressing
       -n suffixes  do not compress files with the given suffixes
       -o  set the archive's modification time to that of its newest entry
       -q  quiet mode; do not print progress while compressing
       -r  recurse into directories, processing all subdirectories and files
       -S  include system and hidden files (capital S)
       -t date  only include files modified on or after the given date (mmddyyyy)
# Install on the host that has no internet access
[logstash@xxxxx logstash-6.6.0]$ bin/logstash-plugin install file:///data02/soft/logstash-output-jdbc.zip
Installing file: /data02/soft/logstash-output-jdbc.zip
Install successful
 
[logstash@xxxxx logstash-6.6.0]$ bin/logstash-plugin list|grep jdbc
logstash-filter-jdbc_static
logstash-filter-jdbc_streaming
logstash-input-jdbc
logstash-output-jdbc
 
[logstash@xxxxx logstash-6.6.0]$ cat Gemfile|grep jdbc
gem "logstash-filter-jdbc_static"
gem "logstash-filter-jdbc_streaming"
gem "logstash-input-jdbc"
gem "logstash-output-jdbc", "= 5.4.0"
3. If the host cannot reach the internet, you can also download the plugin's source package by hand. Note: this approach failed in testing; it only modifies the Gemfile and does not actually install the plugin.

[elastic@t-12c-01 logstash-6.6.0]$ logstash-plugin list|grep jdbc
logstash-filter-jdbc_static
logstash-filter-jdbc_streaming
logstash-input-jdbc

# Unpack the source package
unzip logstash-output-jdbc-master.zip

# Edit the Gemfile under the logstash directory and add a line:
# the plugin name, followed by the absolute path of the unpacked source
gem "logstash-output-jdbc", :path => "/u01/soft/logstash-output-jdbc-master"

# Install the plugin
[elastic@t-12c-01 logstash-6.6.0]$ bin/logstash-plugin install --no-verify
Installing...
Installation successful

# List the installed jdbc plugins
[elastic@t-12c-01 logstash-6.6.0]$ bin/logstash-plugin list|grep jdbc
logstash-filter-jdbc_static
logstash-filter-jdbc_streaming
logstash-input-jdbc
logstash-output-jdbc

【Install the JDBC driver】

Download the JDBC driver:
https://www.oracle.com/database/technologies/jdbcdriver-ucp-downloads.html

Create vendor/jar/jdbc under the Logstash directory (mkdir -p vendor/jar/jdbc) and place ojdbc6.jar in it:

ls -l /u01/logstash-6.6.0/vendor/jar/jdbc/ojdbc6.jar
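If the jar is not picked up automatically from vendor/jar/jdbc, the plugin also accepts explicit driver settings; the logstash-output-jdbc README documents driver_class and driver_jar_path options. A sketch, with the jar path assumed from the layout above:

```
output {
  jdbc {
    # Point at the Oracle driver explicitly instead of relying on
    # vendor/jar/jdbc auto-loading.
    driver_class => "oracle.jdbc.driver.OracleDriver"
    driver_jar_path => "/u01/logstash-6.6.0/vendor/jar/jdbc/ojdbc6.jar"
    connection_string => "jdbc:oracle:thin:dbuser/dbpassword@192.168.56.1:1521:SID"
    statement => [ "INSERT INTO TAB(s_ip) VALUES(?)", "s-ip" ]
  }
}
```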

【Configure Logstash】

A minimal jdbc output block: the first element of statement is the SQL with ? placeholders, and the remaining elements are the event fields bound to them in order.

jdbc {
        connection_string => "jdbc:oracle:thin:dbuser/dbpassword@192.168.56.1:1521:SID"
        statement => [ "INSERT INTO TAB(s_ip) VALUES(?)","s-ip"]
}
 [elastic@t-12c-01 logstash-6.6.0]$ cat  /u01/logstash-6.6.0/config/from_beat.conf
input {
  beats {
    port => 5044
  }
}
 
filter {
  grok {
    # check that fields match your IIS log settings
    match => ["message", "%{TIMESTAMP_ISO8601:log_timestamp} (%{IPORHOST:s-ip}|-) (%{WORD:cs-method}|-) %{NOTSPACE:cs-uri-stem} %{NOTSPACE:cs-uri-query} (%{NUMBER:s-port}|-) %{NOTSPACE:cs-username} (%{IPORHOST:c-ip}|-) %{NOTSPACE:cs-useragent} (%{NUMBER:sc-status}|-) (%{NUMBER:sc-substatus}|-) (%{NUMBER:sc-win32-status}|-) (%{NUMBER:time-taken}|-)"]
 
  }
    date {
    match => [ "log_timestamp", "YYYY-MM-dd HH:mm:ss" ]
      timezone => "+00:00"
  }
mutate {
    add_field => { "logstash_host" => "%{[host][name]}"
    "w3svc_path" => "%{[log][file][path]}"
     }
 
  }
  geoip {
        source => "c-ip"
        target => "geoip"
        database => "/u01/logstash-6.6.0/geolite2-city-mirror-master/GeoLite2-City_20191029/GeoLite2-City.mmdb"
        add_field => [ "[geoip][coordinates]", "%{[geoip][longitude]}" ]
        add_field => [ "[geoip][coordinates]", "%{[geoip][latitude]}" ]
        }
        mutate {
        convert => [ "[geoip][coordinates]", "float" ]
        }
}
 
output {
 
jdbc {
        connection_string => "jdbc:oracle:thin:dbuser/dbpassword@192.168.56.1:1521:SID"
        statement => [ "INSERT INTO TABE(SERVER_NAME,DATE_TIME,S_IP,CS_METHOD,CS_URI_STEM,CS_URI_QUERY,S_PORT,CS_USRNAME,C_IP,CS_USER_AGENT,SC_STATUS,SC_SUBSTATUS,SC_WIN32_STATUS,TIME_TAKEN,W3SVC_PATH)  VALUES(?,TO_DATE(?,'YYYY-MM-DD HH24:MI:SS'),?,?,?,?,?,?,?,?,?,?,?,?,?)","logstash_host","log_timestamp","s-ip","cs-method","cs-uri-stem","cs-uri-query","s-port","cs-username","c-ip","cs-useragent","sc-status","sc-substatus","sc-win32-status","time-taken","w3svc_path"]
}
 
elasticsearch {
    hosts => ["http://localhost:9200"]
    index => "logstash-new-iis_access-%{+YYYY.MM.dd}"
  }
 
 
}
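The jdbc output above assumes an Oracle table whose columns match the INSERT statement. A hypothetical DDL sketch (the table and column names come from the statement; the data types and lengths are guesses, adjust them to your data):

```sql
-- Hypothetical target table for the jdbc output above;
-- column types and lengths are illustrative only.
CREATE TABLE TABE (
  SERVER_NAME      VARCHAR2(100),
  DATE_TIME        DATE,
  S_IP             VARCHAR2(50),
  CS_METHOD        VARCHAR2(20),
  CS_URI_STEM      VARCHAR2(2000),
  CS_URI_QUERY     VARCHAR2(2000),
  S_PORT           NUMBER(5),
  CS_USRNAME       VARCHAR2(100),
  C_IP             VARCHAR2(50),
  CS_USER_AGENT    VARCHAR2(2000),
  SC_STATUS        NUMBER(3),
  SC_SUBSTATUS     NUMBER(3),
  SC_WIN32_STATUS  NUMBER(10),
  TIME_TAKEN       NUMBER(10),
  W3SVC_PATH       VARCHAR2(500)
);
```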
【Test Logstash】
You can validate the configuration first with logstash -f config/from_beat.conf --config.test_and_exit, then start Logstash in the background:
 nohup logstash -f config/from_beat.conf &
Sending Logstash logs to /u01/logstash-6.6.0/logs which is now configured via log4j2.properties
[2020-06-11T12:00:37,196][WARN ][logstash.config.source.multilocal] Ignoring the 'pipelines.yml' file because modules or command line options are specified
[2020-06-11T12:00:37,247][INFO ][logstash.runner          ] Starting Logstash {"logstash.version"=>"6.6.0"}
[2020-06-11T12:00:53,634][INFO ][logstash.pipeline        ] Starting pipeline {:pipeline_id=>"main", "pipeline.workers"=>1, "pipeline.batch.size"=>125, "pipeline.batch.delay"=>50}
[2020-06-11T12:00:53,726][INFO ][logstash.outputs.jdbc    ] JDBC - Starting up
[2020-06-11T12:00:53,840][INFO ][com.zaxxer.hikari.HikariDataSource] HikariPool-1 - Starting...
[2020-06-11T12:00:54,360][INFO ][com.zaxxer.hikari.pool.PoolBase] HikariPool-1 - Driver does not support get/set network timeout for connections. (oracle.jdbc.driver.T4CConnection.getNetworkTimeout()I)
[2020-06-11T12:00:54,378][INFO ][com.zaxxer.hikari.HikariDataSource] HikariPool-1 - Start completed.
[2020-06-11T12:00:55,579][INFO ][logstash.outputs.elasticsearch] Elasticsearch pool URLs updated {:changes=>{:removed=>[], :added=>[http://localhost:9200/]}}
[2020-06-11T12:00:56,076][WARN ][logstash.outputs.elasticsearch] Restored connection to ES instance {:url=>"http://localhost:9200/"}
[2020-06-11T12:00:56,215][INFO ][logstash.outputs.elasticsearch] ES Output version determined {:es_version=>6}
[2020-06-11T12:00:56,227][WARN ][logstash.outputs.elasticsearch] Detected a 6.x and above cluster: the `type` event field won't be used to determine the document _type {:es_version=>6}
[2020-06-11T12:00:56,309][INFO ][logstash.outputs.elasticsearch] New Elasticsearch output {:class=>"LogStash::Outputs::ElasticSearch", :hosts=>["http://localhost:9200"]}
[2020-06-11T12:00:56,333][INFO ][logstash.outputs.elasticsearch] Using mapping template from {:path=>nil}
[2020-06-11T12:00:56,422][INFO ][logstash.outputs.elasticsearch] Attempting to install template {:manage_template=>{"template"=>"logstash-*", "version"=>60001, "settings"=>{"index.refresh_interval"=>"5s"}, "mappings"=>{"_default_"=>{"dynamic_templates"=>[{"message_field"=>{"path_match"=>"message", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false}}}, {"string_fields"=>{"match"=>"*", "match_mapping_type"=>"string", "mapping"=>{"type"=>"text", "norms"=>false, "fields"=>{"keyword"=>{"type"=>"keyword", "ignore_above"=>256}}}}}], "properties"=>{"@timestamp"=>{"type"=>"date"}, "@version"=>{"type"=>"keyword"}, "geoip"=>{"dynamic"=>true, "properties"=>{"ip"=>{"type"=>"ip"}, "location"=>{"type"=>"geo_point"}, "latitude"=>{"type"=>"half_float"}, "longitude"=>{"type"=>"half_float"}}}}}}}}
[2020-06-11T12:00:56,903][INFO ][logstash.filters.geoip   ] Using geoip database {:path=>"/u01/logstash-6.6.0/geolite2-city-mirror-master/GeoLite2-City_20191029/GeoLite2-City.mmdb"}
[2020-06-11T12:00:57,660][INFO ][logstash.inputs.beats    ] Beats inputs: Starting input listener {:address=>"0.0.0.0:5044"}
[2020-06-11T12:00:57,708][INFO ][logstash.pipeline        ] Pipeline started successfully {:pipeline_id=>"main", :thread=>"#<Thread:0x17f9f072 run>"}
[2020-06-11T12:00:57,863][INFO ][logstash.agent           ] Pipelines running {:count=>1, :running_pipelines=>[:main], :non_running_pipelines=>[]}
[2020-06-11T12:00:57,970][INFO ][org.logstash.beats.Server] Starting server on port: 5044
[2020-06-11T12:00:58,612][INFO ][logstash.agent           ] Successfully started Logstash API endpoint {:port=>9600}