1. Installing JDK 1.8
- Add the PPA:
sudo add-apt-repository ppa:webupd8team/java
sudo apt-get update
- Install oracle-java8-installer:
sudo apt-get install oracle-java8-installer
The following command pre-accepts the Oracle license terms so the install can run non-interactively:
echo oracle-java8-installer shared/accepted-oracle-license-v1-1 select true | sudo /usr/bin/debconf-set-selections
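The preseed line piped to debconf-set-selections has a fixed shape; a small sketch of its four whitespace-separated fields:

```shell
# The debconf preseed format is "package question type value".
line="oracle-java8-installer shared/accepted-oracle-license-v1-1 select true"
set -- $line   # split on whitespace into $1..$4
echo "package=$1 question=$2 type=$3 value=$4"
```

The same format works for preseeding any package question, not just the Oracle license prompt.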
- Set the system default JDK (switching from JDK 7 to JDK 8):
sudo update-java-alternatives -s java-8-oracle
- Verify that the JDK installed successfully:
java -version
javac -version
- Alternatively, install from the downloaded tarball:
Download:
wget http://download.oracle.com/otn-pub/java/jdk/8u151-b12/e758a0de34e24606bca991d704f6dcbf/jdk-8u151-linux-x64.tar.gz
Create the target directory:
sudo mkdir /usr/lib/jvm
Extract into it:
sudo tar -zxvf jdk-8u151-linux-x64.tar.gz -C /usr/lib/jvm
Edit the environment variables:
sudo vim ~/.bashrc
Append the following at the end of the file:
#set oracle jdk environment
export JAVA_HOME=/usr/lib/jvm/jdk1.8.0_151  ## note: change this to the directory you extracted the JDK into
export JRE_HOME=${JAVA_HOME}/jre
export CLASSPATH=.:${JAVA_HOME}/lib:${JRE_HOME}/lib
export PATH=${JAVA_HOME}/bin:$PATH
Make the variables take effect immediately:
source ~/.bashrc
Set the system default JDK version:
sudo update-alternatives --install /usr/bin/java java /usr/lib/jvm/jdk1.8.0_151/bin/java 300
sudo update-alternatives --install /usr/bin/javac javac /usr/lib/jvm/jdk1.8.0_151/bin/javac 300
sudo update-alternatives --install /usr/bin/jar jar /usr/lib/jvm/jdk1.8.0_151/bin/jar 300
sudo update-alternatives --install /usr/bin/javah javah /usr/lib/jvm/jdk1.8.0_151/bin/javah 300
sudo update-alternatives --install /usr/bin/javap javap /usr/lib/jvm/jdk1.8.0_151/bin/javap 300
sudo update-alternatives --config java
java -version
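The ~/.bashrc edit above can be rehearsed against a scratch file before touching the real profile; a sketch (JDK_DIR is the extraction path used in this guide; adjust it to your own):

```shell
# Dry-run sketch: build the JDK environment block the guide appends to
# ~/.bashrc. PROFILE points at a scratch file here so nothing real is
# modified; swap in "$HOME/.bashrc" for the real run.
JDK_DIR=/usr/lib/jvm/jdk1.8.0_151   # assumption: your extracted JDK path
PROFILE=$(mktemp)
cat >> "$PROFILE" <<EOF
#set oracle jdk environment
export JAVA_HOME=$JDK_DIR
export JRE_HOME=\${JAVA_HOME}/jre
export CLASSPATH=.:\${JAVA_HOME}/lib:\${JRE_HOME}/lib
export PATH=\${JAVA_HOME}/bin:\$PATH
EOF
grep -c '^export' "$PROFILE"   # four export lines written
```

Inspect the scratch file, then append the same block to ~/.bashrc and `source` it.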
2. Downloading, Installing, and Configuring Hadoop 3.0
- Download Hadoop:
wget http://mirror.bit.edu.cn/apache/hadoop/common/hadoop-3.0.0/hadoop-3.0.0.tar.gz
- Extract it into /usr/local/hadoop3
- Configure environment variables:
vi /etc/profile
Append at the end of the file:
#Hadoop 3.0
export HADOOP_HOME=/usr/local/hadoop3
export PATH=$PATH:$HADOOP_HOME/bin:$HADOOP_HOME/sbin
export HADOOP_COMMON_HOME=$HADOOP_HOME
export HADOOP_HDFS_HOME=$HADOOP_HOME
export HADOOP_MAPRED_HOME=$HADOOP_HOME
export HADOOP_YARN_HOME=$HADOOP_HOME
export HADOOP_INSTALL=$HADOOP_HOME
export HADOOP_COMMON_LIB_NATIVE_DIR=$HADOOP_HOME/lib/native
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_LIBEXEC_DIR=$HADOOP_HOME/libexec
export JAVA_LIBRARY_PATH=$HADOOP_HOME/lib/native:$JAVA_LIBRARY_PATH
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
export HDFS_DATANODE_USER=root
export HDFS_DATANODE_SECURE_USER=root
export HDFS_SECONDARYNAMENODE_USER=root
export HDFS_NAMENODE_USER=root
Reload the profile so the variables take effect:
source /etc/profile
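An easy mistake in the block above is pointing HADOOP_CONF_DIR at the install prefix itself instead of its etc/hadoop subdirectory. A sketch that evaluates the key variables in a throwaway shell before committing them to /etc/profile:

```shell
# Sketch: the same Hadoop variables pointed at the install prefix used
# in this guide, plus a sanity check on the config directory.
export HADOOP_HOME=/usr/local/hadoop3
export HADOOP_PREFIX=$HADOOP_HOME
export HADOOP_CONF_DIR=$HADOOP_PREFIX/etc/hadoop
case "$HADOOP_CONF_DIR" in
  "$HADOOP_HOME"/*) echo "config dir ok: $HADOOP_CONF_DIR" ;;
  *) echo "config dir misconfigured: $HADOOP_CONF_DIR" ;;
esac
```

Once the check prints "ok", the block is safe to paste into /etc/profile.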
- Configuration files
Edit /usr/local/hadoop3/etc/hadoop/core-site.xml to set the HDFS address and port and the temporary-file directory:
<configuration>
  <property>
    <name>fs.default.name</name>
    <value>hdfs://ha01:9000</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/home/hadoop3/hadoop/tmp</value>
  </property>
</configuration>
In hdfs://ha01:9000, ha01 is the hostname. To change the hostname permanently:
1. Edit the network file:
# vi /etc/sysconfig/network
NETWORKING=yes
HOSTNAME=ha01   # set the hostname here
NISDOMAIN=eng-cn.platform.com
2. Update the name in /etc/hosts:
# vi /etc/hosts
127.0.0.1 localhost.localdomain localhost
172.17.33.169 ha01   # set the hostname here
Edit hdfs-site.xml to set the replication factor and the data storage paths:
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>2</value>
  </property>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/home/hadoop3/hadoop/hdfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/home/hadoop3/hadoop/hdfs/data</value>
  </property>
</configuration>
Edit mapred-site.xml to make MapReduce jobs run on the YARN framework. Compared with earlier versions, the last two settings are new; without mapreduce.application.classpath, MapReduce jobs fail with:
Error: Could not find or load main class org.apache.hadoop.mapreduce.v2.app.MRAppMaster
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.application.classpath</name>
    <value>
      /usr/local/hadoop3/etc/hadoop,
      /usr/local/hadoop3/share/hadoop/common/*,
      /usr/local/hadoop3/share/hadoop/common/lib/*,
      /usr/local/hadoop3/share/hadoop/hdfs/*,
      /usr/local/hadoop3/share/hadoop/hdfs/lib/*,
      /usr/local/hadoop3/share/hadoop/mapreduce/*,
      /usr/local/hadoop3/share/hadoop/mapreduce/lib/*,
      /usr/local/hadoop3/share/hadoop/yarn/*,
      /usr/local/hadoop3/share/hadoop/yarn/lib/*
    </value>
  </property>
</configuration>
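The nine classpath entries above follow one pattern, so the comma-separated value can be generated rather than typed by hand; a sketch (on a live node, `hadoop classpath` prints a colon-separated equivalent you can adapt instead):

```shell
# Sketch: join the Hadoop jar directories into the comma-separated
# value that mapreduce.application.classpath expects.
H=/usr/local/hadoop3   # install prefix used in this guide
dirs="$H/etc/hadoop $H/share/hadoop/common/* $H/share/hadoop/common/lib/* \
$H/share/hadoop/hdfs/* $H/share/hadoop/hdfs/lib/* \
$H/share/hadoop/mapreduce/* $H/share/hadoop/mapreduce/lib/* \
$H/share/hadoop/yarn/* $H/share/hadoop/yarn/lib/*"
cp_value=$(echo $dirs | tr ' ' ',')
echo "$cp_value"
```

Paste the printed value into the `<value>` element of mapred-site.xml.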
Edit yarn-site.xml:
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.resourcemanager.hostname</name>
    <value>ha01</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
</configuration>
Add the worker hostnames to the workers file:
ha02
ha03
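The workers edit can be sketched as a one-liner; a scratch file stands in for the real /usr/local/hadoop3/etc/hadoop/workers here:

```shell
# Sketch: write the workers file -- one DataNode hostname per line.
# ha02/ha03 follow the node naming used in this guide.
WORKERS=$(mktemp)   # stand-in for /usr/local/hadoop3/etc/hadoop/workers
printf '%s\n' ha02 ha03 > "$WORKERS"
cat "$WORKERS"
```

On the real cluster, write to the workers file under $HADOOP_CONF_DIR instead of the temp path.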
- Hadoop setup is complete; now make it distributed.
Build the other nodes by cloning the Linux machine or by copying the hadoop3 directory:
scp -r /usr/local/hadoop3 root@ha02:/usr/local
scp -r /usr/local/hadoop3 root@ha03:/usr/local
If ha02 cannot be resolved during the copy, declare the hosts in the system hosts file:
192.168.160.101 ha01
192.168.160.102 ha02
192.168.160.103 ha03
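With more than two workers the copies are easier to manage as a loop; a dry-run sketch that prints each command instead of executing it (remove the leading `echo` to perform the real copy, which requires the hosts entries and SSH access):

```shell
# Dry-run sketch: generate the copy command for each worker node so the
# loop can be inspected before anything is transferred.
for node in ha02 ha03; do
  echo scp -r /usr/local/hadoop3 "root@$node:/usr/local"
done
```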
- The Hadoop nodes need passwordless SSH login.
ssh-keygen -t rsa   # generates the private key id_rsa and the public key id_rsa.pub
Copy the contents of the public key into the ~/.ssh/authorized_keys file of each machine you want to ssh into without a password. For example, generate the key pair on machine A, then copy the public key's contents into machine B's authorized_keys file; A can then ssh into B without a password.
1. SSH Passwordless Login
1.1 Check whether passwordless login already works:
[root@master ~]# ssh localhost
The authenticity of host 'localhost (::1)' can't be established.
1.2 CentOS does not enable passwordless SSH login by default. Uncomment the following two lines in /etc/ssh/sshd_config; this must be done on every server:
RSAAuthentication yes
PubkeyAuthentication yes
1.3 Generate the key pair: run ssh-keygen -t rsa and press Enter at every prompt.
1.4 Copy the public key into the authorized-keys file:
cp /root/.ssh/id_rsa.pub /root/.ssh/authorized_keys
1.5 Log in again; no password is required:
[root@master ~]# ssh localhost
Last login: Thu Oct 20 15:47:22 2016 from 192.168.0.100
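Steps 1.3 and 1.4 can be rehearsed against a scratch directory so nothing under /root/.ssh is touched; a sketch (point SSHDIR at ~/.ssh on a real node; appending with `cat >>` rather than `cp` preserves keys already authorized, which matters once several nodes exchange keys):

```shell
# Sketch: non-interactive key generation plus public-key install,
# against a throwaway directory instead of the real ~/.ssh.
SSHDIR=$(mktemp -d)
ssh-keygen -q -t rsa -N "" -f "$SSHDIR/id_rsa"   # no passphrase, no prompts
cat "$SSHDIR/id_rsa.pub" >> "$SSHDIR/authorized_keys"
chmod 600 "$SSHDIR/authorized_keys"   # sshd rejects group/world-writable key files
echo "keys in authorized_keys: $(wc -l < "$SSHDIR/authorized_keys")"
```

For node-to-node access, append each node's id_rsa.pub to every other node's authorized_keys the same way.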