I. Introduction to Impala
Cloudera Impala provides direct, interactive SQL queries on your data stored in HDFS and HBase in Apache Hadoop. In addition to using the same unified storage platform as Hive, Impala uses the same metadata, SQL syntax (Hive SQL), ODBC driver, and user interface (Hue Beeswax). This makes Impala a familiar, unified platform for both batch-oriented and real-time queries.
Hive: complex batch queries and data transformation
Impala: real-time data query and analysis
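To make the contrast concrete: the same kind of statement that Hive compiles into a MapReduce job comes back interactively from Impala. A minimal sketch, assuming an impalad is already running on mr6 and that a table named page_views exists (both the table and the query are hypothetical, not part of this walkthrough):
> impala-shell
> connect mr6;
> SELECT url, COUNT(*) AS hits FROM page_views GROUP BY url ORDER BY hits DESC LIMIT 10;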
II. Installing Impala
1. Installation requirements: Hive must be installed before Impala.
(1) Software requirements
- Red Hat Enterprise Linux (RHEL)/CentOS 6.2 (64-bit)
- CDH 4.1.0 or later
- Hive
- MySQL
(2) Hardware requirements: join queries load the participating data sets into memory for computation, so the machines running impalad need a comparatively large amount of RAM (a quick check is shown below).
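A simple way to see how much physical memory a candidate impalad node has (what counts as "enough" depends entirely on your data and queries, so treat this only as a starting point):
> grep MemTotal /proc/meminfo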
2. Installation preparation
(1) Check the operating system version
> more /etc/issue
CentOS release 6.2 (Final)
Kernel \r on an \m
(2) Machines
10.28.169.112 mr5
10.28.169.113 mr6
10.28.169.114 mr7
10.28.169.115 mr8
Roles installed on each machine:
mr5: NameNode, ResourceManager, SecondaryNameNode, Hive, impala-state-store
mr6, mr7, mr8: DataNode, NodeManager, impalad
(3) Users
Create a user named hadoop on each machine and set up passwordless SSH between all of them (a sketch follows).
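A minimal sketch of that setup; the RSA key type, the empty passphrase, and the use of ssh-copy-id are assumptions of this sketch, not requirements from the original:
# as root, on every machine: create the hadoop user
useradd hadoop
passwd hadoop
# as hadoop, on mr5: generate a key pair and push it to every node (including mr5 itself)
ssh-keygen -t rsa -P '' -f ~/.ssh/id_rsa
for h in mr5 mr6 mr7 mr8; do ssh-copy-id hadoop@$h; done
Afterwards, ssh hadoop@mr6 (and so on) from mr5 should log in without prompting for a password.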
(4) Software
Download from the Cloudera site:
Hadoop: hadoop-2.0.0-cdh4.1.2.tar.gz
Hive: hive-0.9.0-cdh4.1.2.tar.gz
Impala:
impala-0.3-1.p0.366.el6.x86_64.rpm
impala-debuginfo-0.3-1.p0.366.el6.x86_64.rpm
impala-server-0.3-1.p0.366.el6.x86_64.rpm
impala-shell-0.3-1.p0.366.el6.x86_64.rpm
Impala dependency package: bigtop-utils-0.4 (http://beta.cloudera.com/impala/redhat/6/x86_64/impala/0/RPMS/noarch/)
Other dependency packages: http://mirror.bit.edu.cn/centos/6.3/os/x86_64/Packages/
4. Installing hadoop-2.0.0-cdh4.1.2
(1) Prepare the package
Log in to mr5 as the hadoop user, upload hadoop-2.0.0-cdh4.1.2.tar.gz to /home/hadoop/, and unpack it:
tar zxvf hadoop-2.0.0-cdh4.1.2.tar.gz
(2) Configure environment variables
Edit .bash_profile in the hadoop user's home directory /home/hadoop/ on mr5 and add:
export JAVA_HOME=/usr/jdk1.6.0_30
export JAVA_BIN=${JAVA_HOME}/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar
export JAVA_OPTS="-Djava.library.path=/usr/local/lib -server -Xms1024m -Xmx2048m -XX:MaxPermSize=256m -Djava.awt.headless=true -Dsun.net.client.defaultReadTimeout=60000 -Djmagick.systemclassloader=no -Dnetworkaddress.cache.ttl=300 -Dsun.net.inetaddr.ttl=300"
export HADOOP_HOME=/home/hadoop/hadoop-2.0.0-cdh4.1.2
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
export JAVA_HOME JAVA_BIN PATH CLASSPATH JAVA_OPTS
export HADOOP_LIB=${HADOOP_HOME}/lib
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
(3) Edit the configuration files
Log in to mr5 as the hadoop user and edit the Hadoop configuration files (configuration directory: hadoop-2.0.0-cdh4.1.2/etc/hadoop).
(1) slaves:
Add the nodes mr6, mr7, and mr8; the resulting file is shown below.
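The slaves file lists one worker hostname per line, so after this step it reads:
mr6
mr7
mr8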
(2) hadoop-env.sh:
Add the following environment variables:
export JAVA_HOME=/usr/jdk1.6.0_30
export HADOOP_HOME=/home/hadoop/hadoop-2.0.0-cdh4.1.2
export HADOOP_PREFIX=${HADOOP_HOME}
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
export JAVA_HOME JAVA_BIN PATH CLASSPATH JAVA_OPTS
export HADOOP_LIB=${HADOOP_HOME}/lib
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
(3) core-site.xml:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://mr5:9000</value>
    <description>The name of the default file system. Either the literal string "local" or a host:port for NDFS.</description>
    <final>true</final>
  </property>
  <property>
    <name>io.native.lib.available</name>
    <value>true</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
(4) hdfs-site.xml:
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>file:/home/hadoop/dfsdata/name</value>
    <description>Determines where on the local filesystem the DFS name node should store the name table. If this is a comma-delimited list of directories, then the name table is replicated in all of the directories, for redundancy.</description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>file:/home/hadoop/dfsdata/data</value>
    <description>Determines where on the local filesystem a DFS data node should store its blocks. If this is a comma-delimited list of directories, then data will be stored in all named directories, typically on different devices. Directories that do not exist are ignored.</description>
    <final>true</final>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>3</value>
  </property>
  <property>
    <name>dfs.permissions</name>
    <value>false</value>
  </property>
</configuration>
(5) mapred-site.xml:
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.job.tracker</name>
    <value>hdfs://mr5:9001</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.task.io.sort.mb</name>
    <value>512</value>
  </property>
  <property>
    <name>mapreduce.task.io.sort.factor</name>
    <value>100</value>
  </property>
  <property>
    <name>mapreduce.reduce.shuffle.parallelcopies</name>
    <value>50</value>
  </property>
  <property>
    <name>mapreduce.cluster.temp.dir</name>
    <value>file:/home/hadoop/mapreddata/system</value>
    <final>true</final>
  </property>
  <property>
    <name>mapreduce.cluster.local.dir</name>
    <value>file:/home/hadoop/mapreddata/local</value>
    <final>true</final>
  </property>
</configuration>
(6) yarn-env.sh:
Add the following environment variables:
export JAVA_HOME=/usr/jdk1.6.0_30
export HADOOP_HOME=/home/hadoop/hadoop-2.0.0-cdh4.1.2
export HADOOP_PREFIX=${HADOOP_HOME}
export HADOOP_MAPRED_HOME=${HADOOP_HOME}
export HADOOP_COMMON_HOME=${HADOOP_HOME}
export HADOOP_HDFS_HOME=${HADOOP_HOME}
export HADOOP_YARN_HOME=${HADOOP_HOME}
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
export JAVA_HOME JAVA_BIN PATH CLASSPATH JAVA_OPTS
export HADOOP_LIB=${HADOOP_HOME}/lib
export HADOOP_CONF_DIR=${HADOOP_HOME}/etc/hadoop
(7) yarn-site.xml:
<configuration>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>mr5:8080</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>mr5:8081</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>mr5:8082</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce.shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.nodemanager.local-dirs</name>
    <value>file:/home/hadoop/nmdata/local</value>
    <description>the local directories used by the nodemanager</description>
  </property>
  <property>
    <name>yarn.nodemanager.log-dirs</name>
    <value>file:/home/hadoop/nmdata/log</value>
    <description>the directories used by Nodemanagers as log directories</description>
  </property>
</configuration>
(4) Copy to the other nodes
(1) After completing steps 2 and 3 on mr5, re-pack hadoop-2.0.0-cdh4.1.2:
rm hadoop-2.0.0-cdh4.1.2.tar.gz
tar zcvf hadoop-2.0.0-cdh4.1.2.tar.gz hadoop-2.0.0-cdh4.1.2
Then copy hadoop-2.0.0-cdh4.1.2.tar.gz to mr6, mr7, and mr8:
scp /home/hadoop/hadoop-2.0.0-cdh4.1.2.tar.gz hadoop@mr6:/home/hadoop/
scp /home/hadoop/hadoop-2.0.0-cdh4.1.2.tar.gz hadoop@mr7:/home/hadoop/
scp /home/hadoop/hadoop-2.0.0-cdh4.1.2.tar.gz hadoop@mr8:/home/hadoop/
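(The original does not spell it out, but the archive presumably also has to be unpacked on each of those nodes before the daemons can start, e.g. tar zxvf hadoop-2.0.0-cdh4.1.2.tar.gz in /home/hadoop/ on mr6, mr7, and mr8.)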
(2) Copy the hadoop user's environment file .bash_profile from mr5 to mr6, mr7, and mr8:
scp /home/hadoop/.bash_profile hadoop@mr6:/home/hadoop/
scp /home/hadoop/.bash_profile hadoop@mr7:/home/hadoop/
scp /home/hadoop/.bash_profile hadoop@mr8:/home/hadoop/
Once copied, run the following in /home/hadoop/ on mr5, mr6, mr7, and mr8 so the environment variables take effect:
source .bash_profile
(5) Start HDFS and YARN
With all the steps above completed, log in to mr5 as the hadoop user and run, in order:
hdfs namenode -format
start-dfs.sh
start-yarn.sh
Check with the jps command (example output below) that:
mr5 has started the NameNode, ResourceManager, and SecondaryNameNode processes;
mr6, mr7, and mr8 have started the DataNode and NodeManager processes.
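On mr5, for instance, the output should resemble the following (the PIDs are illustrative only):
> jps
12001 NameNode
12150 SecondaryNameNode
12398 ResourceManager
12764 Jps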
(6) Verify the installation
Check node health and job execution as follows:
Browser access (your local machine needs hosts entries for the cluster; see below):
http://mr5:50070/dfshealth.jsp
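For a workstation outside the cluster to resolve mr5, add the cluster addresses from the machine list above to /etc/hosts (on Windows: C:\Windows\System32\drivers\etc\hosts):
10.28.169.112 mr5
10.28.169.113 mr6
10.28.169.114 mr7
10.28.169.115 mr8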
5. Installing hive-0.9.0-cdh4.1.2
(1) Prepare the package
As the hadoop user, upload hive-0.9.0-cdh4.1.2.tar.gz to /home/hadoop/ on mr5 and unpack it:
tar zxvf hive-0.9.0-cdh4.1.2.tar.gz
(2) Configure environment variables
Add the following to .bash_profile:
export HIVE_HOME=/home/hadoop/hive-0.9.0-cdh4.1.2
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin:${HIVE_HOME}/bin
export HIVE_CONF_DIR=$HIVE_HOME/conf
export HIVE_LIB=$HIVE_HOME/lib
Then run the following so the variables take effect:
. .bash_profile
(3) Edit the configuration files
Edit the Hive configuration (configuration directory: hive-0.9.0-cdh4.1.2/conf/).
Create a new hive-site.xml file under hive-0.9.0-cdh4.1.2/conf/ with the following settings:
<configuration>
  <property>
    <name>hive.metastore.local</name>
    <value>true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionURL</name>
    <value>jdbc:mysql://10.28.169.61:3306/hive_impala?createDatabaseIfNotExist=true</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionDriverName</name>
    <value>com.mysql.jdbc.Driver</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionUserName</name>
    <value>hadoop</value>
  </property>
  <property>
    <name>javax.jdo.option.ConnectionPassword</name>
    <value>123456</value>
  </property>
  <property>
    <name>hive.security.authorization.enabled</name>
    <value>false</value>
  </property>
  <property>
    <name>hive.security.authorization.createtable.owner.grants</name>
    <value>ALL</value>
  </property>
  <property>
    <name>hive.querylog.location</name>
    <value>${user.home}/hive-logs/querylog</value>
  </property>
</configuration>
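The connection settings above assume the MySQL server at 10.28.169.61 accepts the hadoop/123456 credentials and lets them create the hive_impala database (createDatabaseIfNotExist=true). A minimal sketch of preparing that account; the '%' host wildcard is an assumption here and should be narrowed in production:
mysql> CREATE USER 'hadoop'@'%' IDENTIFIED BY '123456';
mysql> GRANT ALL PRIVILEGES ON hive_impala.* TO 'hadoop'@'%';
mysql> FLUSH PRIVILEGES;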
(4) Verify the installation
After the steps above, confirm that Hive installed successfully:
Run hive on the mr5 command line and enter "show tables;". The following output indicates a working installation:
>hive
hive>show tables;
OK
Time taken: 18.952 seconds
hive>
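To also confirm that the MySQL metastore is being written, you can create and drop a throwaway table (the name t1 is arbitrary):
hive> CREATE TABLE t1 (id INT);
hive> SHOW TABLES;
hive> DROP TABLE t1;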
6. Installing Impala
Notes:
(1) Steps 1 through 4 below are executed as root on each of mr5, mr6, mr7, and mr8.
(2) Step 5 below is executed as the hadoop user.
(1) Install the dependency packages:
Install mysql-connector-java:
yum install mysql-connector-java
Install bigtop:
rpm -ivh bigtop-utils-0.4+300-1.cdh4.0.1.p0.1.el6.noarch.rpm
Install libevent:
rpm -ivh libevent-1.4.13-4.el6.x86_64.rpm
Any other required dependency packages can be downloaded from:
http://mirror.bit.edu.cn/centos/6.3/os/x86_64/Packages/
(2) Install the Impala RPMs, running in turn:
rpm -ivh impala-0.3-1.p0.366.el6.x86_64.rpm
rpm -ivh impala-server-0.3-1.p0.366.el6.x86_64.rpm
rpm -ivh impala-debuginfo-0.3-1.p0.366.el6.x86_64.rpm
rpm -ivh impala-shell-0.3-1.p0.366.el6.x86_64.rpm
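To confirm that all four packages registered, query the RPM database:
rpm -qa | grep impala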
(3) Locate the Impala installation directories
After completing steps 1 and 2, run:
find / -name impala
Output:
/usr/lib/debug/usr/lib/impala
/usr/lib/impala
/var/run/impala
/var/log/impala
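From here the usual next steps would be to configure the daemons and start them: the state store on mr5 and an impalad on each of mr6, mr7, and mr8. A sketch, assuming these RPMs ship the standard Cloudera init scripts (not verified against this 0.3 build):
# on mr5
service impala-state-store start
# on mr6, mr7, mr8
service impala-server start
# then connect from any node with impala-shell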
Unfinished: because the memory requirements on the servers are fairly strict, I did not go on to install and test the daemons; for now, the goal is just to understand how Impala works.