Impala Build and Deployment - 5. Standalone Deployment - Part 1


1.1  Copying the Files

Copy the executable files produced and collected during the build to a directory on this machine, e.g. /root/impala2.

1.2  Operating System

1.2.1 Installing the JDK

Install the JDK the same way as in the build section.

1.2.2 Environment Variables

Add the following to ~/.bashrc:

export JAVA_HOME=/usr/lib/jvm/java-1.8.0-openjdk-1.8.0.65-3.b17.el7.x86_64
export PATH=$PATH:$JAVA_HOME/bin
export CLASSPATH=.:$JAVA_HOME/lib/dt.jar:$JAVA_HOME/lib/tools.jar

In ~/.bash_profile:

export IMPALA_HOME=/root/impala2
source /etc/default/impala
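These entries can also be appended non-interactively. A minimal sketch, assuming the paths used in this guide; the grep guard is an extra safeguard (not in the original steps) so re-running the script does not append duplicates:

```shell
# Append the Impala entries to the current user's ~/.bash_profile.
# IMPALA_HOME=/root/impala2 is the directory chosen in section 1.1;
# adjust it if you copied the files elsewhere.
profile="$HOME/.bash_profile"
if ! grep -qs 'IMPALA_HOME=/root/impala2' "$profile"; then
    cat >> "$profile" <<'EOF'
export IMPALA_HOME=/root/impala2
source /etc/default/impala
EOF
fi
```

The same pattern works for the ~/.bashrc entries above. Note that /etc/default/impala must exist (section 1.3.1) before a new login shell sources it.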

1.3  Configuration

1.3.1 Impala Configuration File

Put the configuration needed to run Impala into /etc/default/impala, roughly as follows:

 

IMPALA_STATE_STORE_HOST=127.0.0.1
IMPALA_STATE_STORE_PORT=24000
IMPALA_BACKEND_PORT=22000
IMPALA_LOG_DIR=/var/log/impala
IMPALA_CATALOG_SERVICE_HOST=127.0.0.1

export IMPALA_STATE_STORE_ARGS=${IMPALA_STATE_STORE_ARGS:- \
-log_dir=${IMPALA_LOG_DIR} -state_store_port=${IMPALA_STATE_STORE_PORT}}

export IMPALA_SERVER_ARGS=" \
-log_dir=${IMPALA_LOG_DIR} \
-catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
-state_store_port=${IMPALA_STATE_STORE_PORT} \
-use_statestore \
-state_store_host=${IMPALA_STATE_STORE_HOST} \
-be_port=${IMPALA_BACKEND_PORT}"

export ENABLE_CORE_DUMPS=${ENABLE_CORE_DUMPS:-false}

export IMPALA_CATALOG_ARGS=" \
-catalog_service_host=${IMPALA_CATALOG_SERVICE_HOST} \
-catalog_service_port=26000"

export HADOOP_HOME="${IMPALA_HOME}/hadoop/"
export PATH=$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin

export LD_LIBRARY_PATH=$LD_LIBRARY_PATH:$HADOOP_HOME/lib/native
export LD_LIBRARY_PATH=$IMPALA_HOME/lib64/:$LD_LIBRARY_PATH

for f in $IMPALA_HOME/dependency/*.jar; do
    export CLASSPATH=$CLASSPATH:$f
done
export CLASSPATH=$CLASSPATH:/etc/hadoop/

export CATALOGCMD="${IMPALA_HOME}/be/catalog/catalogd ${IMPALA_CATALOG_ARGS}"
export STATESTORECMD="${IMPALA_HOME}/be/statestore/statestored ${IMPALA_STATE_STORE_ARGS}"
export IMPALADCMD="${IMPALA_HOME}/be/service/impalad ${IMPALA_SERVER_ARGS}"
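Two shell idioms in this file are worth spelling out: `${VAR:-default}` only applies the default when the variable is unset or empty, so a value exported before sourcing the file wins; and the for loop joins every jar under the dependency directory into CLASSPATH, colon-separated. A self-contained demonstration (using a temporary directory with dummy jars instead of $IMPALA_HOME/dependency):

```shell
# ${VAR:-default}: a value set before this point wins; otherwise the
# default after ":-" is used.
unset ENABLE_CORE_DUMPS
coredumps_default=${ENABLE_CORE_DUMPS:-false}   # unset -> "false"
ENABLE_CORE_DUMPS=true
coredumps_preset=${ENABLE_CORE_DUMPS:-false}    # preset -> "true"

# The jar loop: each *.jar match is appended with a leading colon,
# exactly as in the config file above, just against a throwaway dir.
dep_dir=$(mktemp -d)
touch "$dep_dir/a.jar" "$dep_dir/b.jar"
CP=""
for f in "$dep_dir"/*.jar; do
    CP=$CP:$f
done
echo "$CP"    # :<dep_dir>/a.jar:<dep_dir>/b.jar
```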

 

 

1.3.2 Hadoop Configuration Files

Move the Hadoop configuration directory under ${IMPALA_HOME} into /etc:

mv ${IMPALA_HOME}/etc/hadoop /etc

 

1.3.2.1 core-site.xml

<configuration>

<property>

        <name>fs.default.name</name>

        <value>hdfs://localhost:9000</value>

</property>

<property>

        <name>hadoop.tmp.dir</name>

        <value>/usr/local/hadoop/tmp</value>

</property>

</configuration>

 

1.3.2.2 hdfs-site.xml

<configuration>

<property>

           <name>dfs.replication</name>

           <value>1</value>

</property>

<property>

           <name>dfs.name.dir</name>

           <value>/usr/local/hadoop/hdfs/name</value>

</property>

<property>

           <name>dfs.data.dir</name>

           <value>/usr/local/hadoop/hdfs/data</value>

</property>

<property>

               <name>dfs.client.read.shortcircuit</name>

               <value>true</value>

</property>

 

<property>

               <name>dfs.domain.socket.path</name>

               <value>/var/run/hdfs-sockets/dn</value>

</property>

 

<property>

               <name>dfs.client.file-block-storage-locations.timeout.millis</name>

               <value>10000</value>

</property>

 

<property>

               <name>dfs.datanode.hdfs-blocks-metadata.enabled</name>

               <value>true</value>

</property>

</configuration>
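The directories referenced by these configuration files do not create themselves; before formatting HDFS or starting the daemons, the tmp, name, data, log, and domain-socket directories must exist. A sketch under the paths used in this guide (the $base prefix is my addition so the sketch can be tried without touching the live filesystem; in a real deployment run it as root with base set to the empty string):

```shell
# Create the directories named in core-site.xml, hdfs-site.xml, and
# /etc/default/impala. ${base-...} (no colon) uses the temp default only
# when base is unset, so base="" selects the real filesystem root.
base=${base-$(mktemp -d)}
mkdir -p "$base/usr/local/hadoop/tmp"           # hadoop.tmp.dir
mkdir -p "$base/usr/local/hadoop/hdfs/name"     # dfs.name.dir
mkdir -p "$base/usr/local/hadoop/hdfs/data"     # dfs.data.dir
mkdir -p "$base/var/run/hdfs-sockets"           # parent of dfs.domain.socket.path
mkdir -p "$base/var/log/impala"                 # IMPALA_LOG_DIR
chmod 755 "$base/var/run/hdfs-sockets"
```

The socket file `dn` itself is created by the DataNode at startup; depending on which user runs the daemons, the directories may also need their ownership adjusted.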

 

