Official reference documentation:
https://www.cloudera.com/documentation/enterprise/5-15-x/topics/configure_cm_repo.html
Before installing CDH, if you previously installed another version, remember to delete the leftover files in the various directories, for example under /run.
If you accidentally deleted the log4j.properties file, its contents are here:
cmf.root.logger=INFO,CONSOLE
cmf.log.dir=.
cmf.log.file=cmf-server.log

# Define the root logger to the system property "cmf.root.logger".
log4j.rootLogger=${cmf.root.logger}

# Logging Threshold
log4j.threshhold=ALL

# Disable most JDBC tracing by default.
log4j.logger.org.jdbcdslog=FATAL

# Disable overly loud Avro IPC logging
log4j.logger.org.apache.avro.ipc.NettyTransceiver=FATAL

# Disable overly loud Flume config validation logging
log4j.logger.org.apache.flume.conf.FlumeConfiguration=ERROR

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.target=System.err
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %5p [%t:%c{2}@%L] %m%n

log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=10MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${cmf.log.dir}/${cmf.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{ISO8601} %5p [%t:%c{2}@%L] %m%n
1. Configure the repository
wget https://archive.cloudera.com/cm5/ubuntu/xenial/amd64/cm/archive.key
sudo apt-key add archive.key
cd /etc/apt/sources.list.d/
wget https://archive.cloudera.com/cm5/ubuntu/xenial/amd64/cm/cloudera.list
curl -s https://archive.cloudera.com/cm5/ubuntu/trusty/amd64/cm/archive.key | sudo apt-key add -
sudo apt-get update
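To sanity-check that the repository was picked up after the update, one optional command (the package name is the same one installed in step 4):
apt-cache policy cloudera-manager-server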
2. Install Java
3. Disable the firewall
Reference:
https://wenku.baidu.com/view/5462a2132f3f5727a5e9856a561252d380eb20e5.html
sudo service ufw stop
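Note that service ufw stop may not persist across reboots; a minimal alternative, assuming ufw is the only firewall in play, is:
sudo ufw disable   # keeps the firewall disabled after reboot
sudo ufw status    # should report: Status: inactive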
4. Install the CDH server
sudo apt-get install cloudera-manager-daemons cloudera-manager-server
Alternatively, install from the downloaded .deb packages:
sudo dpkg -i ./cloudera-manager-daemons_5.16.2-1.cm5162.p0.7~xenial-cm5_all.deb
sudo dpkg -i ./cloudera-manager-server_5.16.2-1.cm5162.p0.7~xenial-cm5_all.deb
sudo dpkg -i ./cloudera-manager-agent_5.16.2-1.cm5162.p0.7~xenial-cm5_amd64.deb
If dependencies are unmet during this step, run:
sudo apt-get install -f
5. Install the MySQL JDBC driver
sudo apt-get install libmysql-java
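This package normally drops the connector jar under /usr/share/java; a quick check (path assumed from the package's usual layout):
ls -l /usr/share/java/mysql-connector-java.jar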
6. Configure the databases and related accounts following the official tutorial
CREATE DATABASE scm DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE amon DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE rman DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE hue DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE metastore DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE sentry DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE nav DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE navms DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;
CREATE DATABASE oozie DEFAULT CHARACTER SET utf8 DEFAULT COLLATE utf8_general_ci;

CREATE USER 'scm';
CREATE USER 'amon';
CREATE USER 'rman';
CREATE USER 'hue';
CREATE USER 'hive';
CREATE USER 'sentry';
CREATE USER 'nav';
CREATE USER 'navms';
CREATE USER 'oozie';

GRANT ALL ON scm.* TO 'scm'@'%' IDENTIFIED BY 'scm';
GRANT ALL ON amon.* TO 'amon'@'%' IDENTIFIED BY 'amon';
GRANT ALL ON rman.* TO 'rman'@'%' IDENTIFIED BY 'rman';
GRANT ALL ON hue.* TO 'hue'@'%' IDENTIFIED BY 'hue';
GRANT ALL ON metastore.* TO 'hive'@'%' IDENTIFIED BY 'hive';
GRANT ALL ON sentry.* TO 'sentry'@'%' IDENTIFIED BY 'sentry';
GRANT ALL ON nav.* TO 'nav'@'%' IDENTIFIED BY 'nav';
GRANT ALL ON navms.* TO 'navms'@'%' IDENTIFIED BY 'navms';
GRANT ALL ON oozie.* TO 'oozie'@'%' IDENTIFIED BY 'oozie';
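These statements are meant to be run from a MySQL root session, for example:
mysql -u root -p
then paste the CREATE DATABASE / CREATE USER / GRANT statements above.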
7. Initialize the database; you will be prompted for the password when the script runs.
sudo /usr/share/cmf/schema/scm_prepare_database.sh mysql scm scm
8. Start the services
systemctl start cloudera-scm-server
systemctl start cloudera-scm-agent
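To confirm the server really came up, two optional checks:
systemctl status cloudera-scm-server
sudo tail -f /var/log/cloudera-scm-server/cloudera-scm-server.log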
1. If, at startup, /var/log/cloudera-scm-server/cloudera-scm-server.out contains:
Caused by: org.springframework.beans.factory.BeanCreationException: Error creating bean with name 'entityManagerFactoryBean': FactoryBean threw exception on object creation; nested exception is javax.persistence.PersistenceException: org.hibernate.exception.GenericJDBCException: Could not open connection
you can refer to
https://blog.csdn.net/qq_41623990/article/details/83008860
and run the following in /usr/share/cmf/schema:
bash scm_prepare_database.sh mysql -uroot -p --scm-host localhost scm scm scm
2. If, at startup, /var/log/cloudera-scm-server/cloudera-scm-server.out shows the error below, but neither cloudera-scm-server.log nor cmf-server-perf.log is created:
Caused by: org.springframework.beans.BeanInstantiationException: Could not instantiate bean class [com.cloudera.server.cmf.log.components.ServerLogFetcherImpl]: Constructor threw exception; nested exception is java.io.FileNotFoundException: Unable to locate the Cloudera Manager log file in the log4j settings
Caused by: java.io.FileNotFoundException: Unable to locate the Cloudera Manager log file in the log4j settings
check whether /etc/cloudera-scm-server/log4j.properties is empty.
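A quick way to check (a size of 0 means the file is empty):
ls -l /etc/cloudera-scm-server/log4j.properties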
If it is empty, add the content below and start cloudera-scm-server; both cloudera-scm-server.log and cmf-server-perf.log should then appear.
# Copyright (c) 2012 Cloudera, Inc. All rights reserved.
#
# !!!!! IMPORTANT !!!!!
# The Cloudera Manager server finds its log file by querying log4j. It
# assumes that the first file appender in this file is the server log.
# See LogUtil.getServerLogFile() for more details.
#
# Define some default values that can be overridden by system properties
cmf.root.logger=INFO,CONSOLE
cmf.log.dir=.
cmf.log.file=cmf-server.log
cmf.perf.log.file=cmf-server-perf.log

# Define the root logger to the system property "cmf.root.logger".
log4j.rootLogger=${cmf.root.logger}

# Logging Threshold
log4j.threshhold=ALL

# Disable most JDBC tracing by default.
log4j.logger.org.jdbcdslog=FATAL

# Disable overly loud Avro IPC logging
log4j.logger.org.apache.avro.ipc.NettyTransceiver=FATAL

# Disable overly loud Flume config validation logging
log4j.logger.org.apache.flume.conf.FlumeConfiguration=ERROR

# Disable overly loud CXF logging
log4j.logger.org.apache.cxf.phase.PhaseInterceptorChain=ERROR

# Disable "Mapped URL path" messages from Spring
log4j.logger.org.springframework.web.servlet.mvc.annotation.DefaultAnnotationHandlerMapping=WARN

log4j.appender.CONSOLE=org.apache.log4j.ConsoleAppender
log4j.appender.CONSOLE.target=System.err
log4j.appender.CONSOLE.layout=org.apache.log4j.PatternLayout
log4j.appender.CONSOLE.layout.ConversionPattern=%d{ISO8601} %p %t:%c: %m%n

log4j.appender.LOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.LOGFILE.MaxFileSize=10MB
log4j.appender.LOGFILE.MaxBackupIndex=10
log4j.appender.LOGFILE.File=${cmf.log.dir}/${cmf.log.file}
log4j.appender.LOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.LOGFILE.layout.ConversionPattern=%d{ISO8601} %p %t:%c: %m%n

log4j.additivity.com.cloudera.server.cmf.debug.components.PerfLogger=false
log4j.logger.com.cloudera.server.cmf.debug.components.PerfLogger=INFO,PERFLOGFILE

log4j.appender.PERFLOGFILE=org.apache.log4j.RollingFileAppender
log4j.appender.PERFLOGFILE.MaxFileSize=10MB
log4j.appender.PERFLOGFILE.MaxBackupIndex=10
log4j.appender.PERFLOGFILE.File=${cmf.log.dir}/${cmf.perf.log.file}
log4j.appender.PERFLOGFILE.layout=org.apache.log4j.PatternLayout
log4j.appender.PERFLOGFILE.layout.ConversionPattern=%d{ISO8601} %p %t:%c: %m%n
Then open http://localhost:7180/cmf/login
Add hosts to the cluster.
Fill in the master's host.
Click Continue.
Click Continue.
9. During installation, for the SSH login credentials use root and its password,
or use a private key; make sure the root user can SSH to the target hosts with that key.
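A quick pre-check, with the key path and host name below being placeholders for your own values:
ssh -i /root/.ssh/id_rsa root@<agent-host> hostname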
If the installation reports
sed: can't read /etc/cloudera-scm-agent/config.ini: No such file or directory
copy a correct config.ini over to restore the file's contents.
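One way to do that, assuming another node (good-node here is a placeholder) still has a valid copy:
sudo scp root@good-node:/etc/cloudera-scm-agent/config.ini /etc/cloudera-scm-agent/config.ini
# make sure server_host in the copied file points at this cluster's CM server
sudo systemctl restart cloudera-scm-agent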
If you hit "Installation failed. Failed to receive heartbeat from agent.",
comment out the security-related settings in /etc/cloudera-scm-agent/config.ini and set use_tls=0; the installation should then succeed.
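A minimal sketch of that edit (this assumes use_tls is present in the file; sed keeps a .bak backup):
sudo sed -i.bak 's/^use_tls=.*/use_tls=0/' /etc/cloudera-scm-agent/config.ini
sudo systemctl restart cloudera-scm-agent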
Install the parcels and simply wait.
Done.
Continue to install the components.
Assign roles.
Configure the cluster databases.
Click Continue.
Everything starts successfully.
10. Install the Cloudera Management Service, otherwise it will report that the Host Monitor cannot be reached.
At this step you need to enter the database account and password for the Reports Manager; if it cannot connect, the firewall may not have been disabled.
For the MySQL host, enter localhost.
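If the wizard keeps failing to connect, it can help to test the credentials by hand first, using the rman account and database created in step 6:
mysql -u rman -prman -h localhost rman -e 'SELECT 1;'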
11. Check the parcel download status. For an offline install, download the parcels, copy them into /opt/cloudera/parcel-repo, fix the file ownership, and restart both the server and the agent together (see the sketch after the listing below).
/opt/cloudera/parcel-repo$ ls -alh
total 2.4G
drwxr-xr-x 2 cloudera-scm cloudera-scm 4.0K Dec 15 11:31 .
drwxr-xr-x 6 cloudera-scm cloudera-scm 4.0K Dec 15 00:03 ..
-rwxr-xr-x 1 cloudera-scm cloudera-scm 1.9G Dec 15 11:12 CDH-5.16.2-1.cdh5.16.2.p0.8-xenial.parcel
-rwxr-xr-x 1 cloudera-scm cloudera-scm   41 Dec 15 11:12 CDH-5.16.2-1.cdh5.16.2.p0.8-xenial.parcel.sha
-rw-r----- 1 cloudera-scm cloudera-scm  75K Dec 15 11:31 CDH-5.16.2-1.cdh5.16.2.p0.8-xenial.parcel.torrent
-rw-r----- 1 cloudera-scm cloudera-scm  84M Dec 15 02:55 KAFKA-4.1.0-1.4.1.0.p0.4-xenial.parcel
-rw-r----- 1 cloudera-scm cloudera-scm   41 Dec 15 02:55 KAFKA-4.1.0-1.4.1.0.p0.4-xenial.parcel.sha
-rw-r----- 1 cloudera-scm cloudera-scm 3.5K Dec 15 02:55 KAFKA-4.1.0-1.4.1.0.p0.4-xenial.parcel.torrent
-rw-r----- 1 cloudera-scm cloudera-scm 453M Dec 15 04:29 KUDU-1.4.0-1.cdh5.12.2.p0.8-xenial.parcel
-rw-r----- 1 cloudera-scm cloudera-scm   41 Dec 15 04:29 KUDU-1.4.0-1.cdh5.12.2.p0.8-xenial.parcel.sha
-rw-r----- 1 cloudera-scm cloudera-scm  18K Dec 15 04:29 KUDU-1.4.0-1.cdh5.12.2.p0.8-xenial.parcel.torrent
-rw-r--r-- 1 cloudera-scm cloudera-scm  66K Jun 18 21:21 manifest.json
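A hedged sketch of those offline steps, using the CDH parcel name from the listing above (if the .sha file was not shipped with the parcel, it can be generated from the parcel's SHA-1 hash):
cd /opt/cloudera/parcel-repo
# generate the .sha file if it is missing
sha1sum CDH-5.16.2-1.cdh5.16.2.p0.8-xenial.parcel | awk '{print $1}' | sudo tee CDH-5.16.2-1.cdh5.16.2.p0.8-xenial.parcel.sha
# give the files to the cloudera-scm user, then restart server and agent
sudo chown cloudera-scm:cloudera-scm CDH-5.16.2-1.cdh5.16.2.p0.8-xenial.parcel*
sudo systemctl restart cloudera-scm-server cloudera-scm-agent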
Install Kafka: distribute and activate the parcel.
Then click Add Service and select Kafka to install it.
This fails; checking the stderr log
shows:
+ exec /opt/cloudera/parcels/KAFKA-4.1.0-1.4.1.0.p0.4/lib/kafka/bin/kafka-server-start.sh /run/cloudera-scm-agent/process/106-kafka-KAFKA_BROKER/kafka.properties
Mar 08, 2020 2:29:36 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider nl.techop.kafka.KafkaTopicsResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider nl.techop.kafka.KafkaTopicsResource will be ignored.
Mar 08, 2020 2:29:36 PM org.glassfish.jersey.internal.inject.Providers checkProviderRuntime
WARNING: A provider nl.techop.kafka.TopicMetricNameResource registered in SERVER runtime does not implement any provider interfaces applicable in the SERVER runtime. Due to constraint configuration problems the provider nl.techop.kafka.TopicMetricNameResource will be ignored.
Redaction rules file doesn't exist, not redacting logs. file: redaction-rules.json, directory: /run/cloudera-scm-agent/process/106-kafka-KAFKA_BROKER
So run the command manually:
/opt/cloudera/parcels/KAFKA-4.1.0-1.4.1.0.p0.4/lib/kafka/bin/kafka-server-start.sh /run/cloudera-scm-agent/process/106-kafka-KAFKA_BROKER/kafka.properties
It turns out the log4j.properties file is missing:
/opt/cloudera/parcels/KAFKA-4.1.0-1.4.1.0.p0.4/lib/kafka/bin/kafka-server-start.sh /run/cloudera-scm-agent/process/106-kafka-KAFKA_BROKER/kafka.properties
log4j:ERROR Could not read configuration file from URL [file:/opt/cloudera/parcels/KAFKA-4.1.0-1.4.1.0.p0.4/lib/kafka/bin/../config/log4j.properties].
java.io.FileNotFoundException: /opt/cloudera/parcels/KAFKA-4.1.0-1.4.1.0.p0.4/lib/kafka/bin/../config/log4j.properties (No such file or directory)
        at java.io.FileInputStream.open0(Native Method)
        at java.io.FileInputStream.open(FileInputStream.java:195)
        at java.io.FileInputStream.<init>(FileInputStream.java:138)
        at java.io.FileInputStream.<init>(FileInputStream.java:93)
        at sun.net.www.protocol.file.FileURLConnection.connect(FileURLConnection.java:90)
        at sun.net.www.protocol.file.FileURLConnection.getInputStream(FileURLConnection.java:188)
        at org.apache.log4j.PropertyConfigurator.doConfigure(PropertyConfigurator.java:557)
        at org.apache.log4j.helpers.OptionConverter.selectAndConfigure(OptionConverter.java:526)
        at org.apache.log4j.LogManager.<clinit>(LogManager.java:127)
        at org.slf4j.impl.Log4jLoggerFactory.getLogger(Log4jLoggerFactory.java:66)
        at org.slf4j.LoggerFactory.getLogger(LoggerFactory.java:358)
        at com.typesafe.scalalogging.Logger$.apply(Logger.scala:48)
        at kafka.utils.Log4jControllerRegistration$.<init>(Logging.scala:25)
        at kafka.utils.Log4jControllerRegistration$.<clinit>(Logging.scala)
        at kafka.utils.Logging$class.$init$(Logging.scala:47)
        at com.cloudera.kafka.wrap.Kafka$.<init>(Kafka.scala:30)
        at com.cloudera.kafka.wrap.Kafka$.<clinit>(Kafka.scala)
        at com.cloudera.kafka.wrap.Kafka.main(Kafka.scala)
log4j:ERROR Ignoring configuration file [file:/opt/cloudera/parcels/KAFKA-4.1.0-1.4.1.0.p0.4/lib/kafka/bin/../config/log4j.properties].
After adding that file and running again, you get:
kafka.common.InconsistentBrokerIdException: Configured broker.id 72 doesn't match stored broker.id 94 in meta.properties. If you moved your data, make sure your configured broker.id matches. If you intend to create a new broker, you should remove all data in your data directories (log.dirs).
Go into the Kafka configuration and change broker.id from 72 to 94.
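To double-check which id is actually stored versus configured, look at meta.properties in the broker's data directory and at the generated kafka.properties; the data directory below is only a guess at a typical log.dirs location for the CDH Kafka parcel and may differ on your cluster:
cat /var/local/kafka/data/meta.properties
grep broker.id /run/cloudera-scm-agent/process/106-kafka-KAFKA_BROKER/kafka.properties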
It starts successfully.