Kibana is built with Node.js.
1. Download and Install
0. The steps from the official site are as follows
1. Download
Kibana is also downloaded from the official website. For example, this is the package I downloaded: (Kibana is written in Node.js and has many dependencies, so extracting the archive is fairly slow.)
2. Extract and install
After extracting, edit the elasticsearch.hosts address in config/kibana.yml. The default is http://localhost:9200, so it can also be left unchanged.
3. Start
Run bin/kibana.bat. After startup the log looks like this:
log [14:32:25.598] [info][server][Kibana][http] http server running at http://localhost:5601
4. Visit the Kibana home page
Open http://localhost:5601/app/kibana. Kibana performs some default initialization work.
The commonly used features are:
Discover: view and search data
Visualize: build data visualizations
Dashboard: build dashboards
Devtools: developer tools
Management: configuration
5. Kibana configuration details
The main file is config/kibana.yml. The configuration items to note are:
server.host / server.port: the address and port used to access Kibana. If you need to allow access from outside the machine, change this address. The default is localhost:5601.
elasticsearch.hosts: ["http://localhost:9200"]: the address of the ES instance Kibana connects to; the default is port 9200 on the local machine.
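For example, a minimal config/kibana.yml that allows access from other machines might look like this (the 0.0.0.0 bind address is only an example value):
server.host: "0.0.0.0"
server.port: 5601
elasticsearch.hosts: ["http://localhost:9200"]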
Addendum: today, when starting Kibana on another machine, it reported the following error:
"warning","migrations","pid":6181,"message":"Another Kibana instance appears to be migrating the index. Waiting for that migration to complete. If no other Kibana instance is attempting migrations, you can get past this message by deleting index .kibana_index_1 and restarting Kibana.
The fix:
(1) Stop Kibana
(2) Check the Kibana-related indices
curl http://localhost:9200/.kibana*
(3) Delete the indices and check again
C:\Users\Administrator>curl -XDELETE http://localhost:9200/.kibana*
{"acknowledged":true}
C:\Users\Administrator>curl http://localhost:9200/.kibana*
{}
(4) Start Kibana again
Addendum: basic Kibana usage
0. First go to Kibana's Management settings and create an Index pattern
1. Table display: the Discover panel
After selecting the corresponding index pattern in this panel, the raw documents are shown by default. Click the desired fields on the left to display them as table columns, as shown below:
2. Building a pie chart: the Visualize panel
(1) Choose the pie chart, then choose the corresponding index pattern
(2) For example, view the number of documents per thread, which is equivalent to grouping by thread and counting the totals (once the parameters are set, click Save); an equivalent Devtools query is sketched after this list.
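The grouping the pie chart performs can also be expressed in Devtools as a terms aggregation (a sketch; the logs-* index pattern and the thread.keyword field name are assumptions based on the example above):
GET logs-*/_search
{
  "size": 0,
  "aggs": {
    "docs_per_thread": {
      "terms": {
        "field": "thread.keyword"
      }
    }
  }
}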
2. Elasticsearch Terminology and Kibana Basics
Elasticsearch exposes a RESTful API: you operate on Elasticsearch by sending HTTP requests.
Query: the request method should be GET.
Delete: the request method should be DELETE.
Add: the request method should be PUT/POST.
Modify: the request method should be PUT/POST.
The RESTful URL format is http://ip:port/<index>/<type>/<[id]>. The index and type must be provided; the id is optional, and if it is omitted ES generates one automatically (see the sketch below). index and type organize the data into layers, which makes it easier to manage.
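For instance (a sketch using the same illustrative accounts/person index that appears later in this post), POSTing without an id lets ES assign one automatically:
POST /accounts/person
{
  "name": "zhi",
  "lastName": "qiao",
  "job": "enginee"
}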
1. Elasticsearch terminology
Document: a document, i.e. one record stored in ES
Index: an index. It can be thought of as a database (DB) in MySQL. Every document is stored in a specific index.
Type: the data type under an index. It can be thought of as a table in MySQL. The ES default is _doc, and currently an index has a single type. (The ES used at my company appears to be a 5.x version, where one index can still have multiple types.)
Field: a field, i.e. a property of a document. It can be thought of as a column in a MySQL table.
Query DSL: the ES query syntax
2. CRUD in ES
Here we use Kibana's Devtools to run the requests.
1. Create a document
POST /accounts/person/1
{
  "name": "zhi",
  "lastName": "qiao",
  "job": "enginee"
}
The response is as follows:
#! Deprecation: [types removal] Specifying types in document index requests is deprecated, use the typeless endpoints instead (/{index}/_doc/{id}, /{index}/_doc, or /{index}/_create/{id}).
{
  "_index" : "accounts",
  "_type" : "person",
  "_id" : "1",
  "_version" : 1,
  "result" : "created",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 0,
  "_primary_term" : 1
}
accounts is the index, person is the type, the inserted document has id 1, and the version is 1.
2. Read the document
GET accounts/person/1
The response is as follows:
#! Deprecation: [types removal] Specifying types in document get requests is deprecated, use the /{index}/_doc/{id} endpoint instead.
{
  "_index" : "accounts",
  "_type" : "person",
  "_id" : "1",
  "_version" : 2,
  "_seq_no" : 1,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "name" : "zhi",
    "lastName" : "qiao",
    "job" : "enginee"
  }
}
3. Update the document (change the job field of the document created above):
POST /accounts/person/1/_update
{
  "doc": {
    "job": "software enginee"
  }
}
The response:
#! Deprecation: [types removal] Specifying types in document update requests is deprecated, use the endpoint /{index}/_update/{id} instead.
{
  "_index" : "accounts",
  "_type" : "person",
  "_id" : "1",
  "_version" : 3,
  "result" : "updated",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 2,
  "_primary_term" : 1
}
4. Read the document again:
GET accounts/person/1
Result:
#! Deprecation: [types removal] Specifying types in document get requests is deprecated, use the /{index}/_doc/{id} endpoint instead.
{
  "_index" : "accounts",
  "_type" : "person",
  "_id" : "1",
  "_version" : 3,
  "_seq_no" : 2,
  "_primary_term" : 1,
  "found" : true,
  "_source" : {
    "name" : "zhi",
    "lastName" : "qiao",
    "job" : "software enginee"
  }
}
5. Delete the document
DELETE accounts/person/1
Result:
#! Deprecation: [types removal] Specifying types in document index requests is deprecated, use the /{index}/_doc/{id} endpoint instead.
{
  "_index" : "accounts",
  "_type" : "person",
  "_id" : "1",
  "_version" : 4,
  "result" : "deleted",
  "_shards" : {
    "total" : 2,
    "successful" : 1,
    "failed" : 0
  },
  "_seq_no" : 3,
  "_primary_term" : 1
}
6. Read it again
GET accounts/person/1
Result:
#! Deprecation: [types removal] Specifying types in document get requests is deprecated, use the /{index}/_doc/{id} endpoint instead.
{
  "_index" : "accounts",
  "_type" : "person",
  "_id" : "1",
  "found" : false
}
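As the deprecation notices above suggest, the same CRUD operations can also be written against the typeless endpoints (a sketch with the same sample data):
PUT /accounts/_doc/1
{
  "name": "zhi",
  "lastName": "qiao",
  "job": "enginee"
}

GET /accounts/_doc/1

POST /accounts/_update/1
{
  "doc": {
    "job": "software enginee"
  }
}

DELETE /accounts/_doc/1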
3. Elastic Query
First prepare two documents:
POST /accounts/person/1
{
  "name": "zhi",
  "lastName": "qiao",
  "job": "enginee"
}

POST /accounts/person/2
{
  "name": "zhi2",
  "lastName": "qiao2",
  "job": "student"
}
1. Query string: search by keyword
GET accounts/person/_search?q=student
Result:
#! Deprecation: [types removal] Specifying types in search requests is deprecated.
{
  "took" : 1557,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 1.0925692,
    "hits" : [
      {
        "_index" : "accounts",
        "_type" : "person",
        "_id" : "2",
        "_score" : 1.0925692,
        "_source" : {
          "name" : "zhi2",
          "lastName" : "qiao2",
          "job" : "student"
        }
      }
    ]
  }
}
Search for a keyword that does not exist:
GET accounts/person/_search?q=teacher
The response:
#! Deprecation: [types removal] Specifying types in search requests is deprecated.
{
  "took" : 4,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 0,
      "relation" : "eq"
    },
    "max_score" : null,
    "hits" : [ ]
  }
}
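The q parameter also accepts Lucene query string syntax, so the search can be limited to a specific field (a sketch against the same data):
GET accounts/person/_search?q=job:student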
2. Query DSL: the query is written as JSON and sent in the HTTP request body
GET accounts/person/_search
{
  "query": {
    "term": {
      "job": {
        "value": "student"
      }
    }
  }
}
Result:
#! Deprecation: [types removal] Specifying types in search requests is deprecated.
{
  "took" : 6,
  "timed_out" : false,
  "_shards" : {
    "total" : 1,
    "successful" : 1,
    "skipped" : 0,
    "failed" : 0
  },
  "hits" : {
    "total" : {
      "value" : 1,
      "relation" : "eq"
    },
    "max_score" : 0.6931471,
    "hits" : [
      {
        "_index" : "accounts",
        "_type" : "person",
        "_id" : "2",
        "_score" : 0.6931471,
        "_source" : {
          "name" : "zhi2",
          "lastName" : "qiao2",
          "job" : "student"
        }
      }
    ]
  }
}
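For comparison (a sketch against the same two documents), a match query analyzes the query text before matching, whereas term looks for the exact token:
GET accounts/person/_search
{
  "query": {
    "match": {
      "job": "student"
    }
  }
}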
Where to learn Query DSL: https://www.elastic.co/guide/en/elasticsearch/reference/current/index.html
3. Integrating the IK Chinese Analyzer with ES
Elasticsearch ships with many built-in analyzers, including standard (the standard analyzer), english (English analysis), and chinese (Chinese analysis). standard splits Chinese text into single characters, so it applies broadly but is not very precise; english is smarter about English and can handle singular/plural forms, letter case, stopword filtering (e.g. the word "the"), and so on; chinese gives poor results.
1. For example, inspect how the default analyzer tokenizes text
POST /_analyze
{
  "text": "我是一個程序員, I am CXY"
}
Result:
{
  "tokens" : [
    { "token" : "我", "start_offset" : 0, "end_offset" : 1, "type" : "<IDEOGRAPHIC>", "position" : 0 },
    { "token" : "是", "start_offset" : 1, "end_offset" : 2, "type" : "<IDEOGRAPHIC>", "position" : 1 },
    { "token" : "一", "start_offset" : 2, "end_offset" : 3, "type" : "<IDEOGRAPHIC>", "position" : 2 },
    { "token" : "個", "start_offset" : 3, "end_offset" : 4, "type" : "<IDEOGRAPHIC>", "position" : 3 },
    { "token" : "程", "start_offset" : 4, "end_offset" : 5, "type" : "<IDEOGRAPHIC>", "position" : 4 },
    { "token" : "序", "start_offset" : 5, "end_offset" : 6, "type" : "<IDEOGRAPHIC>", "position" : 5 },
    { "token" : "員", "start_offset" : 6, "end_offset" : 7, "type" : "<IDEOGRAPHIC>", "position" : 6 },
    { "token" : "i", "start_offset" : 9, "end_offset" : 10, "type" : "<ALPHANUM>", "position" : 7 },
    { "token" : "am", "start_offset" : 11, "end_offset" : 13, "type" : "<ALPHANUM>", "position" : 8 },
    { "token" : "cxy", "start_offset" : 14, "end_offset" : 17, "type" : "<ALPHANUM>", "position" : 9 }
  ]
}
2. Integrate the IK Chinese analyzer
Chinese analyzer plugin: https://github.com/medcl/elasticsearch-analysis-ik
1. Download the plugin version that matches your ES version.
2. Extract the downloaded zip into ES_HOME/plugins/ik (create the ik directory yourself if it does not exist).
3. Test the analyzers
(1) The ik_smart analyzer
POST /_analyze
{
  "analyzer": "ik_smart",
  "text": "我是一個程序員"
}
Result:
{
  "tokens" : [
    { "token" : "我", "start_offset" : 0, "end_offset" : 1, "type" : "CN_CHAR", "position" : 0 },
    { "token" : "是", "start_offset" : 1, "end_offset" : 2, "type" : "CN_CHAR", "position" : 1 },
    { "token" : "一個", "start_offset" : 2, "end_offset" : 4, "type" : "CN_WORD", "position" : 2 },
    { "token" : "程序員", "start_offset" : 4, "end_offset" : 7, "type" : "CN_WORD", "position" : 3 }
  ]
}
(2) The ik_max_word analyzer
POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "我是一個程序員"
}
Result:
{
  "tokens" : [
    { "token" : "我", "start_offset" : 0, "end_offset" : 1, "type" : "CN_CHAR", "position" : 0 },
    { "token" : "是", "start_offset" : 1, "end_offset" : 2, "type" : "CN_CHAR", "position" : 1 },
    { "token" : "一個", "start_offset" : 2, "end_offset" : 4, "type" : "CN_WORD", "position" : 2 },
    { "token" : "一", "start_offset" : 2, "end_offset" : 3, "type" : "TYPE_CNUM", "position" : 3 },
    { "token" : "個", "start_offset" : 3, "end_offset" : 4, "type" : "COUNT", "position" : 4 },
    { "token" : "程序員", "start_offset" : 4, "end_offset" : 7, "type" : "CN_WORD", "position" : 5 },
    { "token" : "程序", "start_offset" : 4, "end_offset" : 6, "type" : "CN_WORD", "position" : 6 },
    { "token" : "員", "start_offset" : 6, "end_offset" : 7, "type" : "CN_CHAR", "position" : 7 }
  ]
}
Note:
ik_max_word splits the text at the finest granularity: for example, it splits "我是一個程序員" into "我, 是, 一個, 一, 個, 程序員, 程序, 員", exhausting every possible combination.
ik_smart splits the text at the coarsest granularity: for example, it splits "我是一個程序員" into "我, 是, 一個, 程序員".
ik_max_word is mostly used when building the index. At search time, however, we usually want more precise results for the query a user types in: when searching for "花果山", for instance, we want it treated as a single word rather than split into "花", "果", "山". For that reason ik_smart is more commonly used to analyze query input. A quick way to check how a query term will be split is sketched below.
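To check how a specific query term is split, run it through _analyze with each analyzer (a sketch; the actual tokens depend on the IK dictionary bundled with the plugin version):
POST /_analyze
{
  "analyzer": "ik_smart",
  "text": "花果山"
}

POST /_analyze
{
  "analyzer": "ik_max_word",
  "text": "花果山"
}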
4. When creating a mapping, configure the IK analyzers by setting analyzer and search_analyzer
PUT /news
{
  "settings": {
    "number_of_shards": 3,
    "number_of_replicas": 2
  },
  "mappings": {
    "properties": {
      "id": {
        "type": "long"
      },
      "title": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      },
      "content": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_smart"
      },
      "description": {
        "type": "double"
      }
    }
  }
}
Result:
{
  "acknowledged" : true,
  "shards_acknowledged" : true,
  "index" : "news"
}
View the field mappings:
GET /news/_mapping?pretty=true
Result:
{
  "news" : {
    "mappings" : {
      "properties" : {
        "content" : {
          "type" : "text",
          "analyzer" : "ik_max_word",
          "search_analyzer" : "ik_smart"
        },
        "description" : {
          "type" : "double"
        },
        "id" : {
          "type" : "long"
        },
        "title" : {
          "type" : "text",
          "analyzer" : "ik_max_word",
          "search_analyzer" : "ik_smart"
        }
      }
    }
  }
}
Addendum: elasticsearch-head, a web UI for ES
Reference: https://github.com/mobz/elasticsearch-head