1. Hadoop is assumed to be installed and able to start normally (only HDFS is required).
2. Install the following RPM packages (root privileges required; note the installation order):
bigtop-utils-0.7.0+cdh5.8.2+0-1.cdh5.8.2.p0.5.el6.noarch.rpm
impala-kudu-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
impala-kudu-catalog-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
impala-kudu-state-store-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
impala-kudu-server-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
impala-kudu-shell-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
impala-kudu-udf-devel-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
The install commands are as follows:
rpm -ivh ./bigtop-utils-0.7.0+cdh5.8.2+0-1.cdh5.8.2.p0.5.el6.noarch.rpm
rpm -ivh ./impala-kudu-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm --nodeps  # dependency checking must be skipped for this package, otherwise the install fails
rpm -ivh ./impala-kudu-catalog-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh ./impala-kudu-state-store-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh ./impala-kudu-server-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh ./impala-kudu-shell-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
rpm -ivh ./impala-kudu-udf-devel-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
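The seven rpm -ivh commands above can be wrapped in a small script (a sketch: the RUN dry-run toggle is an addition of this example, not part of the original procedure):

```shell
#!/bin/sh
# Sketch: install the impala-kudu RPMs in the order given above (run as root).
# RUN=echo makes this a dry run that only prints the commands;
# set RUN= (empty) to actually execute them.
RUN=echo
set -e
$RUN rpm -ivh ./bigtop-utils-0.7.0+cdh5.8.2+0-1.cdh5.8.2.p0.5.el6.noarch.rpm
# --nodeps: dependency checking must be skipped for the base package,
# otherwise the install fails.
$RUN rpm -ivh ./impala-kudu-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm --nodeps
$RUN rpm -ivh ./impala-kudu-catalog-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
$RUN rpm -ivh ./impala-kudu-state-store-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
$RUN rpm -ivh ./impala-kudu-server-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
$RUN rpm -ivh ./impala-kudu-shell-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
$RUN rpm -ivh ./impala-kudu-udf-devel-2.7.0+cdh5.9.0+0-1.cdh5.9.0.p0.11.el6.x86_64.rpm
```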
catalog and state-store must each run on a single (master) node (they may be on different hosts); server and shell can be installed on multiple hosts (not necessarily the same hosts as catalog and state-store).
3. Configure the environment
1. Edit the /etc/default/bigtop-utils file:
export JAVA_HOME=/usr/java/jdk1.8.0_65  # set JAVA_HOME
2. Edit the /etc/default/impala file:
IMPALA_CATALOG_SERVICE_HOST=172.16.104.120  # IP of the catalog host (a hostname also works; make sure /etc/hosts resolves it)
IMPALA_STATE_STORE_HOST=172.16.104.120  # IP of the state-store host
IMPALA_LOG_DIR=/var/log/impala  # log directory; defaults to /var/log/impala
3. In the /etc/impala/conf.dist directory, add core-site.xml and hdfs-site.xml files (recommended: copy them from the Hadoop configuration, then add the properties below).
Add the following to core-site.xml:
<!-- impala -->
<property>
  <name>dfs.client.read.shortcircuit</name>
  <value>true</value>
</property>
<property>
  <name>dfs.client.read.shortcircuit.skip.checksum</name>
  <value>false</value>
</property>
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
Add the following to hdfs-site.xml:
<!-- impala -->
<property>
  <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
  <value>true</value>
</property>
<property>
  <name>dfs.block.local-path-access.user</name>
  <value>impala</value>
</property>
<property>
  <name>dfs.client.file-block-storage-locations.timeout.millis</name>
  <value>60000</value>
</property>
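For orientation, the property elements above must sit inside the top-level <configuration> element of each file. A minimal hdfs-site.xml sketch is shown below; the dfs.domain.socket.path entry is an assumption added here because CDH 5 short-circuit reads are generally documented to require a DataNode domain socket, and its value must match your DataNode configuration:

```xml
<?xml version="1.0"?>
<configuration>
  <!-- impala -->
  <property>
    <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>
    <value>true</value>
  </property>
  <property>
    <name>dfs.block.local-path-access.user</name>
    <value>impala</value>
  </property>
  <property>
    <name>dfs.client.file-block-storage-locations.timeout.millis</name>
    <value>60000</value>
  </property>
  <!-- Assumption: domain socket for short-circuit reads; mirrors the common
       Cloudera default and must match the DataNode's setting. -->
  <property>
    <name>dfs.domain.socket.path</name>
    <value>/var/run/hdfs-sockets/dn</value>
  </property>
</configuration>
```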
4. Start the services
service impala-catalog start
service impala-state-store start
service impala-server start
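On el6 the services can also be enabled at boot with SysV chkconfig (a sketch; the RUN dry-run toggle is an addition of this example):

```shell
#!/bin/sh
# Sketch: enable the Impala services at boot (el6 / SysV init, run as root).
# RUN=echo prints the commands instead of executing them; set RUN= (empty)
# to actually apply.
RUN=echo
for svc in impala-state-store impala-catalog impala-server; do
  $RUN chkconfig "$svc" on
done
```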
5. Verify
Method 1:
ps aux | grep impala-catalog
ps aux | grep impala-state
ps aux | grep impalad
Method 2:
impala-shell  # connects to the server on the local host by default
impala-shell -i 172.16.104.120  # connect to the server at the given IP; if impala-shell starts in a "no connect" state, type connect 172.16.104.120; to connect
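A non-interactive smoke test is also possible with impala-shell's -q option (a sketch: the host value and the SHOW DATABASES statement are only examples):

```shell
#!/bin/sh
# Build a one-off impala-shell invocation: -i selects the server to connect
# to, -q runs a single statement non-interactively.
IMPALA_HOST=172.16.104.120   # any node running impala-server
CMD="impala-shell -i $IMPALA_HOST -q 'SHOW DATABASES;'"
# Printed here for illustration; run the command on a node where
# impala-kudu-shell is installed.
echo "$CMD"
```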
Method 3 (web UI):
http://172.16.104.120:25000
http://172.16.104.120:25010
http://172.16.104.120:25020
6. Other (default port reference)
Impala Daemon (frontend port) 21000 >> used by impala-shell, Beeswax, and the Cloudera ODBC 1.2 driver to pass commands and receive results
Impala Daemon (frontend port) 21050 >> used by applications such as BI tools, via JDBC or the Cloudera ODBC 2.0+ drivers, to pass commands and receive results
Impala Daemon (backend port) 22000 >> used by Impala daemons to communicate with each other
Impala Daemon (StateStore subscriber port) 23000 >> each Impala daemon listens on this port for updates from the state store
StateStore Daemon (StateStore service port) 24000 >> the state store listens on this port for registration/unregistration requests
Catalog Daemon (Catalog service port) 26000 >> used by the catalog service to communicate with the Impala daemons
Impala Daemon (HTTP server port) 25000 >> Impala web interface, used by administrators for monitoring and troubleshooting
StateStore Daemon (HTTP server port) 25010 >> StateStore web interface, used by administrators for monitoring and troubleshooting
Catalog Daemon (HTTP server port) 25020 >> catalog service web interface, used by administrators for monitoring and troubleshooting; available since Impala 1.2
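If a firewall is active on the el6 hosts, the ports listed above need to be opened between the cluster nodes. A hypothetical /etc/sysconfig/iptables fragment (a sketch only, not a full policy; adjust sources and interfaces to your environment):

```
# Impala ports (fragment for /etc/sysconfig/iptables)
-A INPUT -p tcp -m tcp --dport 21000 -j ACCEPT   # impala-shell / Beeswax / ODBC 1.2
-A INPUT -p tcp -m tcp --dport 21050 -j ACCEPT   # JDBC / ODBC 2.0+
-A INPUT -p tcp -m tcp --dport 22000 -j ACCEPT   # impalad backend
-A INPUT -p tcp -m tcp --dport 23000 -j ACCEPT   # state store subscriber
-A INPUT -p tcp -m tcp --dport 24000 -j ACCEPT   # state store service
-A INPUT -p tcp -m tcp --dport 25000 -j ACCEPT   # impalad web UI
-A INPUT -p tcp -m tcp --dport 25010 -j ACCEPT   # state store web UI
-A INPUT -p tcp -m tcp --dport 25020 -j ACCEPT   # catalog web UI
-A INPUT -p tcp -m tcp --dport 26000 -j ACCEPT   # catalog service
```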
