Configuring a Hadoop Environment on Ubuntu Kylin
Check the JDK directory
cd /usr/lib/jvm/java-8-openjdk-amd64
Check the Hadoop directory
cd /usr/local/hadoop
Check the IP address
ifconfig
Enable the ssh service (if it is not already running)
The ssh service must be enabled on the Linux system, or later connections to HDFS will fail.
1. Problem:
The following error appears when connecting over ssh:
$ ssh root@192.168.199.22
root@192.168.199.22's password:
Permission denied, please try again.
2. Cause:
By default, the system forbids the root user from logging in over ssh.
3. Solution:
(1) Edit the /etc/ssh/sshd_config file
vim /etc/ssh/sshd_config
Change
PermitRootLogin without-password
to
PermitRootLogin yes
(2) Restart the ssh service
sudo service ssh restart
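Before reconnecting, it can help to confirm what setting sshd will actually use. A minimal sketch; the `check_root_login` helper is hypothetical, and the demo runs against a throwaway file rather than the real /etc/ssh/sshd_config:

```shell
#!/bin/sh
# Hypothetical helper: report the last (i.e. effective) PermitRootLogin
# setting found in an sshd_config-style file.
check_root_login() {
  grep -E '^[[:space:]]*PermitRootLogin' "$1" | tail -n 1 | awk '{print $2}'
}

# Demo against a temporary file standing in for /etc/ssh/sshd_config.
cfg=$(mktemp)
printf 'Port 22\nPermitRootLogin yes\n' > "$cfg"
check_root_login "$cfg"   # prints: yes
```

On a real machine you would point the helper at /etc/ssh/sshd_config after editing it.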
Hadoop Configuration
Go to Hadoop's etc directory
cd /usr/local/hadoop/etc/hadoop/
Edit the hadoop-env.sh file under $HADOOP_HOME/etc/hadoop
vim hadoop-env.sh
# The java implementation to use.
#export JAVA_HOME=${JAVA_HOME}
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export HADOOP_CONF_DIR=/usr/local/hadoop/etc/hadoop
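A wrong JAVA_HOME is the most common reason this step fails, so a quick sanity check can be scripted. This is a sketch with a hypothetical `valid_java_home` helper; the demo uses a throwaway directory instead of a real JDK install:

```shell
#!/bin/sh
# Hypothetical helper: a JAVA_HOME is usable only if it contains an
# executable bin/java.
valid_java_home() {
  if [ -x "$1/bin/java" ]; then echo ok; else echo bad; fi
}

# Demo with a fake JDK layout; on a real machine pass
# /usr/lib/jvm/java-8-openjdk-amd64 instead.
fake=$(mktemp -d)
mkdir -p "$fake/bin"
touch "$fake/bin/java" && chmod +x "$fake/bin/java"
valid_java_home "$fake"        # prints: ok
valid_java_home "$fake/nope"   # prints: bad
```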
Edit the core-site.xml file under $HADOOP_HOME/etc/hadoop
vim core-site.xml
Note: adjust the IP address and paths such as <value>/home/cai/simple/soft/hadoop-2.7.1/tmp</value> to match your own machine.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <!-- HDFS file path -->
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.248:9000</value>
  </property>
  <property>
    <name>io.file.buffer.size</name>
    <value>131072</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/usr/local/hadoop/tmp</value>
    <description>A base for other temporary directories.</description>
  </property>
</configuration>
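After editing, the NameNode URI can be read back out of the file to catch typos. A sketch using sed; the `get_default_fs` helper is made up for illustration, and the demo parses a heredoc copy so it runs without Hadoop installed:

```shell
#!/bin/sh
# Hypothetical helper: print the <value> on the line following the
# fs.defaultFS <name> line of a core-site.xml-style file.
get_default_fs() {
  sed -n '/<name>fs.defaultFS<\/name>/{n;s/.*<value>\(.*\)<\/value>.*/\1/p;}' "$1"
}

site=$(mktemp)
cat > "$site" <<'EOF'
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://192.168.1.248:9000</value>
  </property>
</configuration>
EOF
get_default_fs "$site"   # prints: hdfs://192.168.1.248:9000
```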
Edit the hdfs-site.xml file under $HADOOP_HOME/etc/hadoop
vim hdfs-site.xml
Note: if the /hdfs directory cannot be found, change the paths and look for the name and data directories under /tmp/dfs instead.
A path such as <value>/home/cai/simple/soft/hadoop-2.7.1/hdfs/name</value> should likewise be adjusted to your own machine.
<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>dfs.namenode.name.dir</name>
    <value>/usr/local/hadoop/tmp/dfs/name</value>
  </property>
  <property>
    <name>dfs.datanode.data.dir</name>
    <value>/usr/local/hadoop/tmp/dfs/data</value>
  </property>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.webhdfs.enabled</name>
    <value>true</value>
  </property>
</configuration>
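The name and data directories do not strictly have to exist before the first format, but pre-creating them avoids permission surprises at startup. A sketch, using a temporary root in place of /usr/local/hadoop/tmp:

```shell
#!/bin/sh
# Demo root; on a real node use /usr/local/hadoop/tmp (and make sure the
# user running Hadoop owns it).
root=$(mktemp -d)
mkdir -p "$root/dfs/name" "$root/dfs/data"
ls "$root/dfs"   # the two directories, data and name, now exist
```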
Edit the mapred-site.xml file under $HADOOP_HOME/etc/hadoop
vim mapred-site.xml
If mapred-site.xml does not exist, create it by copying the template:
cp mapred-site.xml.template mapred-site.xml
<?xml version="1.0"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!-- Put site-specific property overrides in this file. -->
<configuration>
  <property>
    <name>mapreduce.framework.name</name>
    <value>yarn</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.address</name>
    <value>192.168.1.248:10020</value>
  </property>
  <property>
    <name>mapreduce.jobhistory.webapp.address</name>
    <value>192.168.1.248:19888</value>
  </property>
</configuration>
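The copy step above can be made idempotent, so rerunning the setup never clobbers a file you have already edited. A sketch in a temporary directory (the real files live in $HADOOP_HOME/etc/hadoop):

```shell
#!/bin/sh
dir=$(mktemp -d)
touch "$dir/mapred-site.xml.template"

# Only create mapred-site.xml when it does not exist yet.
[ -f "$dir/mapred-site.xml" ] || cp "$dir/mapred-site.xml.template" "$dir/mapred-site.xml"
[ -f "$dir/mapred-site.xml" ] && echo "mapred-site.xml present"
```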
Edit the yarn-site.xml file under $HADOOP_HOME/etc/hadoop
vim yarn-site.xml
<?xml version="1.0"?>
<configuration>
  <!-- Site specific YARN configuration properties -->
  <property>
    <name>yarn.nodemanager.aux-services</name>
    <value>mapreduce_shuffle</value>
  </property>
  <property>
    <name>yarn.nodemanager.aux-services.mapreduce.shuffle.class</name>
    <value>org.apache.hadoop.mapred.ShuffleHandler</value>
  </property>
  <property>
    <name>yarn.resourcemanager.address</name>
    <value>192.168.1.248:8032</value>
  </property>
  <property>
    <name>yarn.resourcemanager.scheduler.address</name>
    <value>192.168.1.248:8030</value>
  </property>
  <property>
    <name>yarn.resourcemanager.resource-tracker.address</name>
    <value>192.168.1.248:8035</value>
  </property>
  <property>
    <name>yarn.resourcemanager.admin.address</name>
    <value>192.168.1.248:8033</value>
  </property>
  <property>
    <name>yarn.resourcemanager.webapp.address</name>
    <value>192.168.1.248:8088</value>
  </property>
</configuration>
Edit the /etc/profile file
vim /etc/profile
# /etc/profile: system-wide .profile file for the Bourne shell (sh(1))
# and Bourne compatible shells (bash(1), ksh(1), ash(1), ...).

if [ "$PS1" ]; then
  if [ "$BASH" ] && [ "$BASH" != "/bin/sh" ]; then
    # The file bash.bashrc already sets the default PS1.
    # PS1='\h:\w\$ '
    if [ -f /etc/bash.bashrc ]; then
      . /etc/bash.bashrc
    fi
  else
    if [ "`id -u`" -eq 0 ]; then
      PS1='# '
    else
      PS1='$ '
    fi
  fi
fi

if [ -d /etc/profile.d ]; then
  for i in /etc/profile.d/*.sh; do
    if [ -r $i ]; then
      . $i
    fi
  done
  unset i
fi

# java environment
export JAVA_HOME=/usr/lib/jvm/java-8-openjdk-amd64
export CLASSPATH=.:${JAVA_HOME}/jre/lib/rt.jar:${JAVA_HOME}/lib/dt.jar:${JAVA_HOME}/lib/tools.jar

# hadoop environment
export HADOOP_HOME=/usr/local/hadoop
export PATH=$PATH:${JAVA_HOME}/bin:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin
Reload the configuration
To make the changes take effect, run source /etc/profile:
source /etc/profile
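Whether the new PATH actually took effect can be checked directly. A sketch that sets the variables locally (so it runs anywhere) and then tests PATH membership:

```shell
#!/bin/sh
# Set locally for the demo; on the real machine 'source /etc/profile'
# performs these exports.
export HADOOP_HOME=/usr/local/hadoop
export PATH="$PATH:${HADOOP_HOME}/bin:${HADOOP_HOME}/sbin"

case ":$PATH:" in
  *":${HADOOP_HOME}/bin:"*) echo "hadoop bin on PATH" ;;
  *)                        echo "hadoop bin missing" ;;
esac
```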
Format the NameNode
Format the NameNode by running hdfs namenode -format (or hadoop namenode -format) from any directory.
hdfs namenode -format
or
hadoop namenode -format
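One caution worth scripting: formatting a NameNode whose metadata directory already holds data destroys the existing HDFS namespace. A hedged guard sketch, where a temp dir stands in for the real dfs.namenode.name.dir:

```shell
#!/bin/sh
name_dir=$(mktemp -d)   # stand-in for /usr/local/hadoop/tmp/dfs/name
if [ -n "$(ls -A "$name_dir" 2>/dev/null)" ]; then
  echo "name dir not empty - refusing to format"
else
  echo "name dir empty - safe to format"
  # a real script would now run: hdfs namenode -format
fi
```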
Start the Hadoop Cluster
To start the Hadoop daemons, first run start-dfs.sh to bring up HDFS.
start-dfs.sh
Start the YARN cluster
start-yarn.sh
Check the running daemons with jps
jps
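On a single-node setup, jps should list five Hadoop daemons besides Jps itself: NameNode, DataNode, SecondaryNameNode, ResourceManager, and NodeManager. A sketch that checks a captured listing; the sample output below is illustrative, not from a real run:

```shell
#!/bin/sh
expected="NameNode DataNode SecondaryNameNode ResourceManager NodeManager"

# Illustrative jps output; on a real node capture it with: sample=$(jps)
sample="2101 NameNode
2230 DataNode
2418 SecondaryNameNode
2580 ResourceManager
2700 NodeManager
2850 Jps"

for d in $expected; do
  if echo "$sample" | grep -q "$d"; then
    echo "$d up"
  else
    echo "$d MISSING"
  fi
done
```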
Web UI Test
Test HDFS and YARN in a browser (Firefox is recommended). The browser can be launched either from the command line or by double-clicking its icon.
firefox
Ports: 8088 and 50070
First, open http://172.16.12.37:50070/ in the browser (the HDFS management UI). This IP is the example virtual machine's address; replace it with your own, since the ports are fixed but every machine's IP differs.
Then open http://172.16.12.37:8088/ (the MapReduce/YARN management UI). Again, substitute your own machine's IP; the port stays the same.
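To keep the two addresses consistent, both URLs can be derived from a single IP variable. A small sketch; 172.16.12.37 is just the example address from above:

```shell
#!/bin/sh
ip=172.16.12.37   # replace with your VM's address from ifconfig
hdfs_ui="http://${ip}:50070/"
yarn_ui="http://${ip}:8088/"
echo "HDFS UI: $hdfs_ui"
echo "YARN UI: $yarn_ui"
```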