Deploying Hadoop on Docker this way is for experimental purposes only: each service (namenode, datanode, journalnode, and so on) is deployed by hand. If you want to manage a cluster flexibly without the official automated deployment scripts, this article may offer some useful ideas.
Prepare the base image
Prepare the JDK image
Note: with OpenJDK, the JVM crashes when the datanode starts, so Oracle JDK is used instead.
The base image is built on Alpine with a JDK installed on top. The Dockerfiles are as follows.
1. OpenJDK 1.8
FROM alpine:latest
MAINTAINER rabbix@qq.com
RUN echo -e "https://mirrors.aliyun.com/alpine/v3.7/main\nhttps://mirrors.aliyun.com/alpine/v3.7/community" > /etc/apk/repositories && \
apk --no-cache --update add openjdk8-jre-base bash && \
rm -rf /var/cache/apk/*
ENV JAVA_HOME=/usr/lib/jvm/default-jvm
ENV PATH=$PATH:$JAVA_HOME/bin
docker build . -t alpine-jdk8:v1.0
2. Oracle JDK 1.8
The following approach requires downloading glibc manually.
Download address: https://github.com/sgerrand/alpine-pkg-glibc/releases/
The download address for sgerrand.rsa.pub is given in the project's README.
FROM alpine:latest
MAINTAINER rabbix@qq.com
ADD sgerrand.rsa.pub /etc/apk/keys/
COPY glibc-2.27-r0.apk /opt/
RUN echo -e "https://mirrors.aliyun.com/alpine/v3.7/main\nhttps://mirrors.aliyun.com/alpine/v3.7/community" > /etc/apk/repositories && \
apk add /opt/glibc-2.27-r0.apk && rm -rf /opt/glibc-2.27-r0.apk && \
apk --no-cache --update add bash && \
rm -rf /var/cache/apk/*
ADD jdk-8u172-linux-x64.tar.gz /opt/
ENV JAVA_HOME=/opt/jdk1.8.0_172
ENV PATH=$PATH:$JAVA_HOME/bin
Alternatively, download glibc automatically during the build:
FROM alpine:latest
MAINTAINER rabbix@qq.com
RUN echo -e "https://mirrors.aliyun.com/alpine/v3.7/main\nhttps://mirrors.aliyun.com/alpine/v3.7/community" > /etc/apk/repositories && \
wget -q -O /etc/apk/keys/sgerrand.rsa.pub https://raw.githubusercontent.com/sgerrand/alpine-pkg-glibc/master/sgerrand.rsa.pub && \
wget https://github.com/sgerrand/alpine-pkg-glibc/releases/download/2.27-r0/glibc-2.27-r0.apk && \
apk add glibc-2.27-r0.apk && rm -rf glibc-2.27-r0.apk && \
apk --no-cache --update add bash && \
rm -rf /var/cache/apk/*
ADD jdk-8u172-linux-x64.tar.gz /opt/
ENV JAVA_HOME=/opt/jdk1.8.0_172
ENV PATH=$PATH:$JAVA_HOME/bin
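Neither Oracle JDK Dockerfile above includes a build command. Assuming the image is tagged alpine-jdk1.8:v1.0 (the tag the Hadoop Dockerfile below references in its FROM line), the build would be:
docker build . -t alpine-jdk1.8:v1.0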
Prepare the Hadoop image
Hadoop's scripts start the daemons in the background with nohup, which would cause the container to exit immediately, so the startup script is modified to keep the daemon in the foreground. The current stable version 2.9.1 is used here.
Script location: hadoop-2.9.1/sbin/hadoop-daemon.sh
Before modification:
case $command in
  namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc|portmap|nfs3|dfsrouter)
    if [ -z "$HADOOP_HDFS_HOME" ]; then
      hdfsScript="$HADOOP_PREFIX"/bin/hdfs
    else
      hdfsScript="$HADOOP_HDFS_HOME"/bin/hdfs
    fi
    nohup nice -n $HADOOP_NICENESS $hdfsScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
    ;;
  (*)
    nohup nice -n $HADOOP_NICENESS $hadoopScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1 < /dev/null &
    ;;
esac
echo $! > $pid
sleep 1
After modification:
case $command in
  namenode|secondarynamenode|datanode|journalnode|dfs|dfsadmin|fsck|balancer|zkfc|portmap|nfs3|dfsrouter)
    if [ -z "$HADOOP_HDFS_HOME" ]; then
      hdfsScript="$HADOOP_PREFIX"/bin/hdfs
    else
      hdfsScript="$HADOOP_HDFS_HOME"/bin/hdfs
    fi
    $hdfsScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1
    ;;
  (*)
    $hadoopScript --config $HADOOP_CONF_DIR $command "$@" > "$log" 2>&1
    ;;
esac
echo $! > $pid
sleep 1
After modifying the script, repack the directory into a tar.gz archive.
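For example, assuming hadoop-2.9.1/ was extracted into the current directory, repacking could look like this:
tar -czf hadoop-2.9.1.tar.gz hadoop-2.9.1/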
The Dockerfile is as follows:
FROM alpine-jdk1.8:v1.0
MAINTAINER rabbix@qq.com
ADD ./hadoop-2.9.1.tar.gz /opt/
docker build . -t hadoop2.9.1:v1.0
Configure the Docker containers' IPs
docker network create --subnet=172.16.0.0/16 dn0
Prepare the configuration files
Create the directories (under /root/hadoop, which the volume mounts below assume):
mkdir -p {nn,snn,dn1,dn2,dn3}/{logs,data,etc}
Edit the configuration files
Copy all files under hadoop-2.9.1/etc/hadoop/ to nn/etc/.
Copy /etc/hosts to nn/etc/.
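Assuming the extracted hadoop-2.9.1/ directory sits next to the nn/ directory created above, the copy could be done like this:
cp -r hadoop-2.9.1/etc/hadoop/* nn/etc/
cp /etc/hosts nn/etc/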
core-site.xml
<configuration>
  <property>
    <name>fs.defaultFS</name>
    <value>hdfs://master:9001</value>
  </property>
  <property>
    <name>hadoop.tmp.dir</name>
    <value>/tmp/hdfs-root/</value>
  </property>
</configuration>
hdfs-site.xml
<configuration>
  <property>
    <name>dfs.replication</name>
    <value>1</value>
  </property>
  <property>
    <name>dfs.namenode.http-address</name>
    <value>master:50071</value>
  </property>
  <property>
    <name>dfs.datanode.http.address</name>
    <value>slave1:50076</value>
  </property>
  <property>
    <name>dfs.namenode.secondary.http-address</name>
    <value>secondary:50091</value>
  </property>
  <property>
    <name>dfs.datanode.address</name>
    <value>slave1:50011</value>
  </property>
  <property>
    <name>dfs.datanode.ipc.address</name>
    <value>slave1:50021</value>
  </property>
</configuration>
etc/hosts
127.0.0.1 localhost localhost.localdomain localhost4 localhost4.localdomain4
::1 localhost localhost.localdomain localhost6 localhost6.localdomain6
172.16.0.2 master
172.16.0.3 secondary
172.16.0.4 slave1
Note: underscores are not allowed in hostnames.
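The datanode containers started below also use 172.16.0.5 (slave2) and 172.16.0.6 (slave3), so those entries should be added to every node's hosts file as well:
172.16.0.5 slave2
172.16.0.6 slave3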
Startup commands
namenode
The namenode must be formatted before its first start.
Run the format command first:
docker run -d --rm --net dn0 --ip 172.16.0.2 -h master \
--name namenode -p 9001:9001 -p 50071:50071 \
-v /root/hadoop/nn/data/:/tmp/hdfs-root \
-v /root/hadoop/nn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/nn/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/nn/etc/hosts:/etc/hosts hadoop2.9.1:v1.0 \
/opt/hadoop-2.9.1/bin/hdfs namenode -format
Then start the namenode:
docker run -d --rm --net dn0 --ip 172.16.0.2 -h master \
--name namenode -p 9001:9001 -p 50071:50071 \
-v /root/hadoop/nn/data/:/tmp/hdfs-root \
-v /root/hadoop/nn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/nn/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/nn/etc/hosts:/etc/hosts hadoop2.9.1:v1.0 \
/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start namenode
Or, combined into a single command (the image tag oraclejdk1.8-hadoop2.9.1:latest used here and below refers to the Hadoop image built above; adjust it to whatever tag you actually used):
docker run -d --rm --net dn0 --ip 172.16.0.2 -h master \
--name namenode -p 9001:9001 -p 50071:50071 \
-v /root/hadoop/nn/data/:/tmp/hdfs-root \
-v /root/hadoop/nn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/nn/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/nn/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
sh -c "/opt/hadoop-2.9.1/bin/hdfs namenode -format && /opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start namenode"
For Hadoop 3.1.1, the startup commands change to:
bin/hdfs --config .... namenode
bin/hdfs --config .... datanode
secondarynamenode
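A command for the secondary namenode, following the same pattern as the namenode above (assuming the snn/ directories, the secondary:50091 address from hdfs-site.xml, and the 172.16.0.3 hosts entry), might look like this:
docker run -d --rm --net dn0 --ip 172.16.0.3 -h secondary \
--name secondarynamenode -p 50091:50091 \
-v /root/hadoop/snn/data/:/tmp/hdfs-root \
-v /root/hadoop/snn/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/snn/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/snn/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start secondarynamenode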
datanode
A newly added datanode must have an empty data directory that does not conflict with any other datanode's.
The hostnames in its configuration files should also be its own hostname or domain name. For example:
<property>
  <name>dfs.datanode.address</name>
  <value>slave2:50011</value>
</property>
Start the datanodes:
docker run -d --rm --net dn0 --ip 172.16.0.4 -h slave1 \
--name datanode1 -p 50011:50011 -p 50021:50021 -p 50076:50076 \
-v /root/hadoop/dn1/data/:/tmp/hdfs-root \
-v /root/hadoop/dn1/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/dn1/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/dn1/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode
docker run -d --rm --net dn0 --ip 172.16.0.5 -h slave2 \
--name datanode2 -p 50012:50012 -p 50022:50022 -p 50077:50077 \
-v /root/hadoop/dn2/data/:/tmp/hdfs-root \
-v /root/hadoop/dn2/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/dn2/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/dn2/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode
docker run -d --rm --net dn0 --ip 172.16.0.6 -h slave3 \
--name datanode3 -p 50013:50013 -p 50023:50023 -p 50078:50078 \
-v /root/hadoop/dn3/data/:/tmp/hdfs-root \
-v /root/hadoop/dn3/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/dn3/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/dn3/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode
Upload files
./hdfs dfs -fs hdfs://172.16.0.2:9001 -mkdir /user
./hdfs dfs -fs hdfs://172.16.0.2:9001 -mkdir /user/root
./hdfs dfs -fs hdfs://172.16.0.2:9001 -put hadoop input
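To verify the upload, a listing against the same namenode could look like this:
./hdfs dfs -fs hdfs://172.16.0.2:9001 -ls /user/root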
Open http://master:50071 in a browser to see the namenode web UI (or use the Docker host's address, since port 50071 is published).
Adding another datanode
Copy the configuration files from an existing datanode and adjust them as needed.
Add the new node to its own hosts file and update the hosts files of the other nodes.
Start the new node (see the sketch below).
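As an illustration, assuming a fourth datanode named slave4 at 172.16.0.7 with its own dn4/ directories, hosts entries, and matching ports in its hdfs-site.xml, the startup command would follow the same pattern:
docker run -d --rm --net dn0 --ip 172.16.0.7 -h slave4 \
--name datanode4 -p 50014:50014 -p 50024:50024 -p 50079:50079 \
-v /root/hadoop/dn4/data/:/tmp/hdfs-root \
-v /root/hadoop/dn4/etc/:/opt/hadoop-2.9.1/etc/hadoop \
-v /root/hadoop/dn4/logs:/opt/hadoop-2.9.1/logs \
-v /root/hadoop/dn4/etc/hosts:/etc/hosts oraclejdk1.8-hadoop2.9.1:latest \
/opt/hadoop-2.9.1/sbin/hadoop-daemon.sh --config /opt/hadoop-2.9.1/etc/hadoop --script hdfs start datanode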
View file block information
./hdfs fsck -conf /root/hadoop/nn/etc/hdfs-site.xml -fs hdfs://172.16.0.2:9001 /user/root/hadoop-2.9.1.tar.gz -blocks