Installing Elasticsearch on CentOS 7


The ES package and environment used in this walkthrough can be downloaded from Baidu Pan: https://pan.baidu.com/s/1txx_TxE-bTYwqQEBKxtMKQ (extraction code: xrrh)

Official download page: https://www.elastic.co/cn/downloads/elasticsearch

I. Prepare the environment

Elasticsearch needs a Java runtime, so install Java first.

1. Check whether a Java runtime is already installed: yum list installed | grep java

2. Remove the bundled Java (run su and enter the root password to switch to root first)

yum -y remove java-*

yum -y remove tzdata-java*

3. List the available Java packages: yum -y list java*

Install Java: yum -y install java-11-openjdk*

4. Locate the Java installation

which java

ls -lrt /usr/bin/java (the path returned by the previous command), then press Enter

ls -lrt /etc/alternatives/java (the path returned by the previous command), then press Enter

The symlink chain shows the installation lives under the jvm directory; run cd /usr/lib/jvm to go there

Run ls to list the files and directories there

5. Configure the Java environment variables

Edit the profile with vi /etc/profile

and add the following. Point JAVA_HOME at the directory found in the previous step; note that for the java-11-openjdk package installed above this will be a java-11-openjdk-* directory, not java-1.8.0, and the JRE_HOME and dt.jar/tools.jar CLASSPATH entries below only exist on JDK 8:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0
export JRE_HOME=$JAVA_HOME/jre
export PATH=$PATH:$JAVA_HOME/bin:$JRE_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar:$JRE_HOME/lib

Save and exit.

Run source /etc/profile to apply the changes immediately

6. Verify the installation: run java -version and press Enter

 

II. Directory structure

 

III. Startup

Elasticsearch refuses to run as root, so create a dedicated non-root user to start it.

Start it: [hunter@localhost elasticsearch-7.6.2]$ bin/elasticsearch
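The prompt above assumes a non-root user named hunter already exists. A minimal sketch for creating one; the install path /usr/local/elasticsearch-7.6.2 is an assumption, substitute your own:

```shell
# As root: create the user and hand the ES directory over to it
useradd hunter
chown -R hunter:hunter /usr/local/elasticsearch-7.6.2  # adjust to your install path

# Switch to the new user and start Elasticsearch in the foreground
su - hunter -c 'cd /usr/local/elasticsearch-7.6.2 && bin/elasticsearch'
```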

The annotated elasticsearch.yml below explains the main settings. Note that several of them (the index.* settings in the yml, discovery.zen.*, bootstrap.mlockall) are legacy options from pre-5.x releases and are rejected by Elasticsearch 7.x.

# ---------------------------------- Cluster -----------------------------------

# Use a descriptive name for your cluster:
  
# Cluster name; nodes sharing the same cluster.name form one cluster.
cluster.name: bigdata
  
# ------------------------------------ Node ------------------------------------
# Node name; must uniquely identify the node, no duplicates allowed.
node.name: server3
  
# 1. Three node topology roles are listed below (pick one):
# To make the node a pure data node, with no eligibility to be elected master:
node.master: false
node.data: true
  
# 2. To make the node master-eligible, storing no data and acting only as the cluster coordinator:
node.master: true
node.data: false
  
# 3. To make the node neither master nor data: a coordinating-only node that fetches data from other nodes and assembles search results:
node.master: false
node.data: false
  
# Limits how many ES storage instances can run on one machine; for multiple instances per host, raise this to 2 or more.
#node.max_local_storage_nodes: 1
  
# ----------------------------------- Index ------------------------------------
# Number of primary shards per index (default 5). number_of_shards is fixed when the index is created and cannot be changed afterwards.
index.number_of_shards: 5
  
# Number of replicas per index (default 1).
index.number_of_replicas: 1
  
# Index refresh interval, default 1s. Too small a value causes constant refreshes and slows down new writes (balance write throughput against search freshness). In ELK setups it is usually raised, e.g. to 60s; when an index _template is in use, the value must be set in the template to take effect.
index.refresh_interval: 120s
  
# ----------------------------------- Paths ------------------------------------
# Data storage path; multiple comma-separated paths can improve IO. # path.data: /home/path1,/home/path2
path.data: /home/elk/server3_data
  
# Log file path
path.logs: /var/log/elasticsearch
  
# Work/temp file path
path.work: /path/to/work
  
# ----------------------------------- Memory -------------------------------------
# Make sure the ES_MIN_MEM and ES_MAX_MEM environment variables are set to the same value and that the machine has enough memory for Elasticsearch.
# Note: more is not always better; on 64-bit machines, do not allocate more than about 32 GB of heap.
  
# Elasticsearch performs poorly once the JVM starts swapping; make sure it never swaps.
# Set this to true to lock the heap in memory. The elasticsearch process must also be allowed to lock memory; on Linux, run `ulimit -l unlimited` first.
  
bootstrap.mlockall: true
  
# Maximum memory a node uses for fielddata; once the threshold is reached,
# old fielddata is evicted. Accepts a percentage or an absolute value. The default is unbounded, so setting this (e.g. 10%) is strongly recommended.
indices.fielddata.cache.size: 50mb
  
# Never, ever set indices.fielddata.cache.expire!
  
indices.breaker.fielddata.limit defaults to 60% of the JVM heap. For the settings to work as intended, make sure indices.breaker.fielddata.limit is
greater than indices.fielddata.cache.size. Otherwise fielddata errors out as soon as it hits the limit threshold, never reaches the size threshold, and the eviction of old data is never triggered.
  
#------------------------------------ Network And HTTP -----------------------------
# Bind address, IPv4 or IPv6; default 0.0.0.0
network.bind_host: 192.168.0.1
  
# Address this node publishes for other nodes to reach it; auto-detected if unset, and must be a real IP address
network.publish_host: 192.168.0.1
  
# Sets both bind_host and publish_host above at once
network.host: 192.168.0.1
  
# TCP port for inter-node transport, default 9300
transport.tcp.port: 9300
  
# Whether to compress data on TCP transport, default false (no compression)
transport.tcp.compress: true
  
# HTTP port for client traffic, default 9200
http.port: 9200
  
# Maximum HTTP request body size, default 100mb
http.max_content_length: 100mb
  
# ------------------------------------ Translog -------------------------------------
# Number of translog operations accumulated before a flush is forced.
index.translog.flush_threshold_ops: 50000
  
# --------------------------------- Discovery --------------------------------------
# Minimum number of master-eligible nodes required to elect a master, default 1. Set it to N/2 + 1 (N = number of nodes in the cluster) to guard against split brain; e.g. with 3 nodes, set it to 2.
discovery.zen.minimum_master_nodes: 1
  
# Long GC pauses are common in Java, and with the default settings they make nodes drop out of the cluster frequently; the resulting re-replication of data can render the whole cluster practically unusable.

# The discovery settings control inter-node communication and their default timeouts are fairly small. Raising them keeps long GC pauses from being mistaken for node failures.
discovery.zen.ping.timeout: 200s
discovery.zen.fd.ping_timeout: 200s
discovery.zen.fd.ping.interval: 30s
discovery.zen.fd.ping.retries: 6
  
# Seed list for node discovery; nodes newly joining the cluster must be reachable through this list to be discovered.
discovery.zen.ping.unicast.hosts: ["10.10.1.244:9300"]
  
# Whether multicast auto-discovery of nodes is enabled, default true
discovery.zen.ping.multicast.enabled: false
 
indices.store.throttle.type: merge
indices.store.throttle.max_bytes_per_sec: 100mb

ES configuration

Modify elasticsearch.yml:

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true

Start the ES service in the background: ./elasticsearch -d
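Once the service is up, a quick way to check it is responding (with security enabled, pass the elastic credentials; the host and password are placeholders from this post):

```shell
curl -u elastic:your_passwd http://localhost:9200
# A healthy node answers with a small JSON document containing
# the node name, cluster name and version
```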

Set the built-in users' passwords
$ ./bin/elasticsearch-setup-passwords interactive

You will be prompted to enter passwords as the process progresses.
Please confirm that you would like to continue [y/N] y 
Enter password for [elastic]: 
Reenter password for [elastic]: 
Enter password for [apm_system]: 
Reenter password for [apm_system]: 
Enter password for [kibana]: 
Reenter password for [kibana]: 
Enter password for [logstash_system]: 
Reenter password for [logstash_system]: 
Enter password for [beats_system]: 
Reenter password for [beats_system]: 
Enter password for [remote_monitoring_user]: 
Reenter password for [remote_monitoring_user]: 

Change a password via the API (ES 6+ requires the Content-Type header): curl -XPUT -u elastic:changeme -H "Content-Type: application/json" 'http://localhost:9200/_xpack/security/user/elastic/_password' -d '{ "password" : "your_passwd" }'
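To confirm the new password works, a quick check against the cluster health endpoint (host and credentials assumed from the examples above):

```shell
curl -u elastic:your_passwd "http://localhost:9200/_cluster/health?pretty"
# A successful response includes the cluster status (green/yellow/red)
```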
The elasticsearch.yml used with password security enabled:
# ======================== Elasticsearch Configuration =========================
#
# NOTE: Elasticsearch comes with reasonable defaults for most settings.
#       Before you set out to tweak and tune the configuration, make sure you
#       understand what are you trying to accomplish and the consequences.
#
# The primary way of configuring a node is via this file. This template lists
# the most important settings you may want to configure for a production cluster.
#
# Please consult the documentation for further information on configuration options:
# https://www.elastic.co/guide/en/elasticsearch/reference/index.html
#
# ---------------------------------- Cluster -----------------------------------
#
# Use a descriptive name for your cluster:
#
cluster.name: demo
cluster.initial_master_nodes: ["node1"]
#
# ------------------------------------ Node ------------------------------------
#
# Use a descriptive name for the node:
#
node.name: node1
#
# Add custom attributes to the node:
#
#node.attr.rack: r1
#
# ----------------------------------- Paths ------------------------------------
#
# Path to directory where to store the data (separate multiple locations by comma):
#
#path.data: /path/to/data
#
# Path to log files:
#
#path.logs: /path/to/logs
#
# ----------------------------------- Memory -----------------------------------
#
# Lock the memory on startup:
#
#bootstrap.memory_lock: true
#
# Make sure that the heap size is set to about half the memory available
# on the system and that the owner of the process is allowed to use this
# limit.
#
# Elasticsearch performs poorly when the system is swapping the memory.
#
# ---------------------------------- Network -----------------------------------
#
# Set the bind address to a specific IP (IPv4 or IPv6):
#
network.host: 192.168.0.150
#
# Set a custom port for HTTP:
#
http.port: 9200
#
# For more information, consult the network module documentation.
#
# --------------------------------- Discovery ----------------------------------
#
# Pass an initial list of hosts to perform discovery when this node is started:
# The default list of hosts is ["127.0.0.1", "[::1]"]
#
#discovery.seed_hosts: ["host1", "host2"]
#
# Bootstrap the cluster using an initial set of master-eligible nodes:
#
#cluster.initial_master_nodes: ["node-1", "node-2"]
#
# For more information, consult the discovery and cluster formation module documentation.
#
# ---------------------------------- Gateway -----------------------------------
#
# Block initial recovery after a full cluster restart until N nodes are started:
#
#gateway.recover_after_nodes: 3
#
# For more information, consult the gateway module documentation.
#
# ---------------------------------- Various -----------------------------------
#
# Require explicit names when deleting indices:
#
#action.destructive_requires_name: true
#
#

http.cors.enabled: true
http.cors.allow-origin: "*"
http.cors.allow-headers: Authorization
xpack.security.enabled: true
xpack.security.transport.ssl.enabled: true
elasticsearch.yml

 

 

IV. Install plugins

 

List installed plugins: [hunter@localhost elasticsearch-7.6.2]$ bin/elasticsearch-plugin list

Install the ICU analysis plugin: bin/elasticsearch-plugin install analysis-icu
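After installing the plugin and restarting the node, you can sanity-check it by running the ICU analyzer on a sample string via the _analyze API (drop the -u flag if security is not enabled; the password is a placeholder):

```shell
curl -u elastic:your_passwd -H "Content-Type: application/json" \
  -X POST "http://localhost:9200/_analyze?pretty" \
  -d '{ "analyzer": "icu_analyzer", "text": "Elasticsearch 分詞測試" }'
# The response lists the tokens produced by the ICU analyzer
```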

Check installed plugins over HTTP: http://localhost:9200/_cat/plugins

 

V. Run multiple instances

bin/elasticsearch -E node.name=node0 -E cluster.name=zhang -E path.data=node0_date -d 
bin/elasticsearch -E node.name=node1 -E cluster.name=zhang -E path.data=node1_date -d 
bin/elasticsearch -E node.name=node2 -E cluster.name=zhang -E path.data=node2_date -d 
bin/elasticsearch -E node.name=node3 -E cluster.name=zhang -E path.data=node3_date -d
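To confirm that all four instances joined the same cluster, query the _cat/nodes API on any of them (9200 is the default HTTP port of the first instance):

```shell
curl "http://localhost:9200/_cat/nodes?v"
# One line per node; the elected master carries * in the master column
```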

To kill the instances:

ps -ef | grep elasticsearch

kill <pid>   (use the PIDs printed by the previous command)

 

VI. Troubleshooting installation failures

1. Exception in thread "main" java.nio.file.AccessDeniedException

Cause: the current user has no permissions on the installation directory.
Fix: chown <linux-user> <elasticsearch-install-dir> -R, e.g. chown hunter elasticsearch-7.6.2 -R

2. Exception in thread "main" org.elasticsearch.bootstrap.BootstrapException: java.nio.file.FileAlreadyExistsException

Fix: delete the leftover temp file: rm -rf elasticsearch.keystore.tmp

3. "hunter is not in the sudoers file. This incident will be reported."
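The usual fix is to grant the user sudo rights as root; a sketch using the hunter account from this post:

```shell
# As root, edit /etc/sudoers safely with visudo and add this line
# below "root ALL=(ALL) ALL":
#
#   hunter  ALL=(ALL)  ALL
visudo
```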

 

 

 

VII. Install Kibana

1. Download: https://www.elastic.co/cn/downloads/kibana

2. Start it: [hunter@localhost kibana-7.6.2-linux-x86_64]$ bin/kibana

Before starting, grant the hunter user permissions on the Kibana directory.

Edit the host settings:

 

 

3. Install plugins

 

4. Import test data

# Kibana is served by a back end server. This setting specifies the port to use.
server.port: 5601

# Specifies the address to which the Kibana server will bind. IP addresses and host names are both valid values.
# The default is 'localhost', which usually means remote machines will not be able to connect.
# To allow connections from remote users, set this parameter to a non-loopback address.
server.host: "192.168.0.150"

# Enables you to specify a path to mount Kibana at if you are running behind a proxy.
# Use the `server.rewriteBasePath` setting to tell Kibana if it should remove the basePath
# from requests it receives, and to prevent a deprecation warning at startup.
# This setting cannot end in a slash.
#server.basePath: ""

# Specifies whether Kibana should rewrite requests that are prefixed with
# `server.basePath` or require that they are rewritten by your reverse proxy.
# This setting was effectively always `false` before Kibana 6.3 and will
# default to `true` starting in Kibana 7.0.
#server.rewriteBasePath: false

# The maximum payload size in bytes for incoming server requests.
#server.maxPayloadBytes: 1048576

# The Kibana server's name.  This is used for display purposes.
#server.name: "your-hostname"

# The URLs of the Elasticsearch instances to use for all your queries.
elasticsearch.hosts: ["http://192.168.0.150:9200"]

# When this setting's value is true Kibana uses the hostname specified in the server.host
# setting. When the value of this setting is false, Kibana uses the hostname of the host
# that connects to this Kibana instance.
#elasticsearch.preserveHost: true

# Kibana uses an index in Elasticsearch to store saved searches, visualizations and
# dashboards. Kibana creates a new index if the index doesn't already exist.
#kibana.index: ".kibana"

# The default application to load.
#kibana.defaultAppId: "home"

# If your Elasticsearch is protected with basic authentication, these settings provide
# the username and password that the Kibana server uses to perform maintenance on the Kibana
# index at startup. Your Kibana users still need to authenticate with Elasticsearch, which
# is proxied through the Kibana server.
elasticsearch.username: "kibana"
elasticsearch.password: "123456"

# Enables SSL and paths to the PEM-format SSL certificate and SSL key files, respectively.
# These settings enable SSL for outgoing requests from the Kibana server to the browser.
#server.ssl.enabled: false
#server.ssl.certificate: /path/to/your/server.crt
#server.ssl.key: /path/to/your/server.key

# Optional settings that provide the paths to the PEM-format SSL certificate and key files.
# These files are used to verify the identity of Kibana to Elasticsearch and are required when
# xpack.security.http.ssl.client_authentication in Elasticsearch is set to required.
#elasticsearch.ssl.certificate: /path/to/your/client.crt
#elasticsearch.ssl.key: /path/to/your/client.key

# Optional setting that enables you to specify a path to the PEM file for the certificate
# authority for your Elasticsearch instance.
#elasticsearch.ssl.certificateAuthorities: [ "/path/to/your/CA.pem" ]

# To disregard the validity of SSL certificates, change this setting's value to 'none'.
#elasticsearch.ssl.verificationMode: full

# Time in milliseconds to wait for Elasticsearch to respond to pings. Defaults to the value of
# the elasticsearch.requestTimeout setting.
#elasticsearch.pingTimeout: 1500

# Time in milliseconds to wait for responses from the back end or Elasticsearch. This value
# must be a positive integer.
#elasticsearch.requestTimeout: 30000

# List of Kibana client-side headers to send to Elasticsearch. To send *no* client-side
# headers, set this value to [] (an empty list).
#elasticsearch.requestHeadersWhitelist: [ authorization ]

# Header names and values that are sent to Elasticsearch. Any custom headers cannot be overwritten
# by client-side headers, regardless of the elasticsearch.requestHeadersWhitelist configuration.
#elasticsearch.customHeaders: {}

# Time in milliseconds for Elasticsearch to wait for responses from shards. Set to 0 to disable.
#elasticsearch.shardTimeout: 30000

# Time in milliseconds to wait for Elasticsearch at Kibana startup before retrying.
#elasticsearch.startupTimeout: 5000

# Logs queries sent to Elasticsearch. Requires logging.verbose set to true.
#elasticsearch.logQueries: false

# Specifies the path where Kibana creates the process ID file.
#pid.file: /var/run/kibana.pid

# Enables you specify a file where Kibana stores log output.
#logging.dest: stdout

# Set the value of this setting to true to suppress all logging output.
#logging.silent: false

# Set the value of this setting to true to suppress all logging output other than error messages.
#logging.quiet: false

# Set the value of this setting to true to log all events, including system usage information
# and all requests.
#logging.verbose: false

# Set the interval in milliseconds to sample system and process performance
# metrics. Minimum is 100ms. Defaults to 5000.
#ops.interval: 5000

# Specifies locale to be used for all localizable strings, dates and number formats.
# Supported languages are the following: English - en , by default , Chinese - zh-CN .
#i18n.locale: "en"
kibana.yml

 

 

VIII. Install Logstash

1. Download: https://www.elastic.co/cn/downloads/logstash

2. Import data

① Config file: https://github.com/geektime-geekbang/geektime-ELK/tree/master/part-1/2.4-Logstash%E5%AE%89%E8%A3%85%E4%B8%8E%E5%AF%BC%E5%85%A5%E6%95%B0%E6%8D%AE/movielens

② Data files: https://github.com/geektime-geekbang/geektime-ELK/tree/master/part-1/2.4-Logstash%E5%AE%89%E8%A3%85%E4%B8%8E%E5%AF%BC%E5%85%A5%E6%95%B0%E6%8D%AE/movielens/ml-latest-small

3. Run: [hunter@localhost bin]$ sudo ./logstash -f logstash.conf
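Once the import finishes, you can verify that the data landed by listing the indices; the index names depend on the output section of logstash.conf, so movies below is an assumed example:

```shell
curl "http://localhost:9200/_cat/indices?v"
curl "http://localhost:9200/movies/_count?pretty"  # 'movies' is an assumed index name
```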

4. The Mutate filter plugin

convert: type conversion

gsub: string substitution

split / join / merge: split a string, join an array into a string, merge arrays

rename: rename a field

update / replace: update or replace a field's value

remove_field: delete a field

5. Periodic incremental sync of a MySQL table

input {
  jdbc {
    jdbc_driver_class => "com.mysql.jdbc.Driver"
    jdbc_connection_string => "jdbc:mysql://localhost:3306/db_example"
    jdbc_user => "root"
    jdbc_password => "ymruan123"
    # Enable tracking; when true, tracking_column must be set
    use_column_value => true
    # The column to track
    tracking_column => "last_updated"
    # Type of the tracked column: numeric or timestamp (default numeric)
    tracking_column_type => "numeric"
    # Persist the value seen on the last run
    record_last_run => true
    # Where the last-run value is stored
    last_run_metadata_path => "jdbc-position.txt"
    statement => "SELECT * FROM user where last_updated > :sql_last_value;"
    schedule => "* * * * * *"
  }
}
output {
  elasticsearch {
    document_id => "%{id}"
    document_type => "_doc"
    index => "users"
    hosts => ["http://localhost:9200"]
  }
  stdout{
    codec => rubydebug
  }
}
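A sketch of running the sync pipeline above and inspecting the saved tracking position (mysql-sync.conf is an assumed file name for the config; jdbc-position.txt comes from last_run_metadata_path):

```shell
bin/logstash -f mysql-sync.conf
# After at least one scheduled run, the largest last_updated value
# seen so far is stored in the metadata file:
cat jdbc-position.txt
```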

 

 

 

IX. Install cerebro

1. Download: https://github.com/lmenezes/cerebro/releases

2. Configure the ES address: [root@localhost ~]# vim /usr/cerebro-0.8.5/conf/application.conf

 

3. Start it: [root@localhost ~]# /usr/cerebro-0.8.5/bin/cerebro

 

