Previous posts:
Ambari and Cloudera Manager: the two most mainstream cluster-management tools in big data (fixing the earlier "Manger" typo)
Ambari architecture and principles
Ambari installation (1): preparation before installing Ambari (CentOS 6.5)
Ambari installation (2): deploying a local repository (mirror server)
Deploying a single-node HDP cluster
(1) Deploy the Agent
1) Register and install the agent
http://192.168.80.144:8080/views/ADMIN_VIEW/2.2.1.0/INSTANCE/#/
2) Give the cluster a name; I simply use hdpCluster here, then click Next.
http://192.168.80.144:8080/#/installer/step0
3) Select the HDP 2.4 stack and choose the advanced repository options; make sure the version matches the one you downloaded and installed.
http://192.168.80.144:8080/#/installer/step1
The repository base URL needs to be changed to the following address:
http://192.168.80.144/hdp2.4/centos6/
4) Configure the local repository address, then click Next.
5) Add the target hostnames and paste in the ambari-server private key so that the other nodes can communicate with ambari-server without a password. Select the corresponding hadoop user (since we configured passwordless SSH login under the hadoop user), then click Register.
http://192.168.80.144:8080/#/installer/step2
[hadoop@ambari01 ~]$ pwd
/home/hadoop
[hadoop@ambari01 ~]$ cd .ssh
[hadoop@ambari01 .ssh]$ pwd
/home/hadoop/.ssh
[hadoop@ambari01 .ssh]$ ll
total 16
-rw-------. 1 hadoop hadoop 1588 Mar 30 17:00 authorized_keys
-rw-------. 1 hadoop hadoop 1675 Mar 30 16:15 id_rsa
-rw-------. 1 hadoop hadoop  397 Mar 30 16:15 id_rsa.pub
-rw-r--r--. 1 hadoop hadoop 1620 Mar 30 17:49 known_hosts
[hadoop@ambari01 .ssh]$ cat id_rsa
-----BEGIN RSA PRIVATE KEY-----
MIIEogIBAAKCAQEAvLAEPDRhnQUq4+6IRYTF6YKmMfvfGKKbkgEX5RrZ89BQbiOm
jqWrcb8yAi6zFY/uHCM6cBG/BzdmHPlTYZwAmt8qI4hs/5NvkjLUmlwFe2+fYofZ
6kRfuJh4eEyysiLhZBEkgb4UYtDQgvB12eFBgieHSkl2+nUVorgvbnIbcqoAz/fN
4d9iU5oa5pShjQkAL1NKUmLZAh1PcFSq1OGGGEtsWFp7ggt8ufahejyZeqstbWl0
vAxohuvYdW5YjIHJhLP7ld7arsv9f40RMNEdPuWOTWegM6p94oFRAIln9Wtcc271
jQoF3xjhqUpV17PU3ErZ6+wsVukZ3iMtP/PqPQIBIwKCAQB2moZRuoZ/9J6d5mRI
9F8lEEs1XH2adNbQzXy75P4G9/gKt1LAEF0i7TVgdSAcLpWrSVfurBGsw7yHPaIg
GOpv+f066An/u8J5J0POvX/J7mQvThPyVt0U8h/Wlpw1dQKz7YSpUug+HNrV9jtz
Ap40jeACzxeWHbXT/r66c5w5cRciB4eFQ14xO3FZyfCcD5AjAWYNyze7mI5i8396
VscwVCd2qUsMQnjR6RXQd/vK3KJ62S0rxKQ0UC5+H5OxVny9m9q+8Qy53iEMtX/n
GzDph2OGTGHBrR/+kOjdwx9kXy5FknL5Q1EITeERI0NcFmwN1UlEyaAGkDNf88ye
hzjDAoGBAO7yyfNTcQpy0ZdAhVDWDb+ohKt83ucrkiW87dXHPPo/QEJOZCl2SsVt
bB4p4gEUcpxy5rgkgB0JAuvnAv4JZ49I+NOASOnVpuhty0qGzRmvk1soGQn6TyfK
HwybRLXTHUiQfx0UQFTrbNdpubx0CKT0fBKBBviejyfSOE59pM//AoGBAMonCpo6
a+TvjNr0TgwbyzhPHdmRBnZDXkctQIo/YE704l+eoywbKGty9MlWJ1lGZTFlnZej
Xxe2Uhb0UGPo+VyCccBxc4slz1TaoQbRnpLV+s7+Mik/atG9kwB41Bd2/HjRWFAa
x1LyGN5ee2hocD4u5C/x0vrzulp+5wH0poXDAoGBAIG2/+p9wQWsC2C8oCSRdS2H
XfaxgFGbT1ZQnl4bs2NG6F6CU6F6uuA0Fh8AyyUoW3mANBrR/GeIjI6wmzly0dFw
wZdi5cDEcIzN42L4uHuodJCSHDid0zLbb/DmkwOefZxrsrgDreT01K9z6Hw+/WDc
fd4oyUUi3/+sojk85HDpAoGBALjTPOTHsxp0ngoD75YKyG3/MTvyTw0KZNNckseK
Zq6WwFdsd+3Pr+015x56p6IUecbDTkF/bOJ6zrXmr+ZRWQQfffHG0AoxMpa5QsRn
4XBOnCr3CUpInC16IABueMT/Erea1GZ+4h/zSe/hWuMdqHNeEnT6Wn8KuQJII6oE
QHpLAoGAYNNuiUgLrqRq8Klb4Fj0pbwWzrvNkON+j01mIEzPeNNto01GbLXKQwhe
mbWMSnLHarmFWJ7Yamagzx1I/ifRjUUFLchcxLH0VDv0e1ZYaD1FV2IQNJNS4gWE
m8Xbq7v4bjOmZvAfVoorH+gnvh0SMNTyFGq+rSB9wCsII3nLGPo=
-----END RSA PRIVATE KEY-----
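The passwordless-SSH setup that this step relies on can be sketched as follows. This is a recap under assumptions, not part of the wizard itself: ambari02 stands in for any agent host, and the commands run as the hadoop user.

```shell
# Generate a key pair for the hadoop user if one does not already exist:
ssh-keygen -t rsa -N "" -f ~/.ssh/id_rsa

# Append the public key to authorized_keys on each target host:
ssh-copy-id hadoop@ambari02

# sshd rejects keys when these permissions are looser than this:
chmod 700 ~/.ssh
chmod 600 ~/.ssh/authorized_keys

# Verify passwordless login before pasting the private key into the wizard:
ssh hadoop@ambari02 hostname
```

What the wizard wants is the contents of id_rsa (the private key) from the ambari-server host, exactly as dumped by cat above.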
After waiting a while, some hosts may fail to register. The reason is shown below:
6) During registration you may run into problems, such as an outdated openssl version. In that case, simply update openssl on the affected node and re-register.
[hadoop@ambari02 .ssh]$ sudo rpm -qa | grep openssl
openssl-1.0.1e-15.el6.x86_64
[hadoop@ambari01 .ssh]$ sudo yum install openssl
Loaded plugins: fastestmirror, refresh-packagekit, security
Setting up Install Process
Loading mirror speeds from cached hostfile
 * base: mirrors.zju.edu.cn
 * extras: mirrors.zju.edu.cn
 * updates: mirrors.zju.edu.cn
Resolving Dependencies
--> Running transaction check
---> Package openssl.x86_64 0:1.0.1e-15.el6 will be updated
---> Package openssl.x86_64 0:1.0.1e-48.el6_8.4 will be an update
--> Finished Dependency Resolution

Dependencies Resolved

================================================================================
 Package        Arch          Version                   Repository        Size
================================================================================
Updating:
 openssl        x86_64        1.0.1e-48.el6_8.4         updates          1.5 M

Transaction Summary
================================================================================
Upgrade       1 Package(s)

Total download size: 1.5 M
Is this ok [y/N]: y
Downloading Packages:
openssl-1.0.1e-48.el6_8.4.x86_64.rpm                     | 1.5 MB     00:01
Running rpm_check_debug
Running Transaction Test
Transaction Test Succeeded
Running Transaction
  Updating   : openssl-1.0.1e-48.el6_8.4.x86_64                           1/2
  Cleanup    : openssl-1.0.1e-15.el6.x86_64                               2/2
  Verifying  : openssl-1.0.1e-48.el6_8.4.x86_64                           1/2
  Verifying  : openssl-1.0.1e-15.el6.x86_64                               2/2

Updated:
  openssl.x86_64 0:1.0.1e-48.el6_8.4

Complete!
[hadoop@ambari02 .ssh]$
Then re-register.
7) After registration succeeds, we also need to look at the warning messages; every warning must be cleared before deploying any Hadoop components.
Accordingly, the warnings shown here call for the fixes below.
8) For example, the clock-synchronization warning can be fixed as follows:
[hadoop@ambari02 ~]$ sudo service ntpd status
ntpd is stopped
[hadoop@ambari02 ~]$ sudo service ntpd start
Starting ntpd:                                             [  OK  ]
[hadoop@ambari02 ~]$
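Starting ntpd by hand clears the warning only until the next reboot. A small follow-up, assuming the stock CentOS 6 SysV init tools, makes it permanent:

```shell
# Enable ntpd in the default runlevels so it starts on every boot:
sudo chkconfig ntpd on
sudo chkconfig --list ntpd

# Confirm the daemon is running right now:
sudo service ntpd status
```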
9) How to resolve the following warning: The following hosts have Transparent Huge Pages (THP) enabled. THP should be disabled to avoid potential Hadoop performance issues.
To disable Transparent Huge Pages, run the following as the root user on Linux:
echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/enabled
echo never > /sys/kernel/mm/transparent_hugepage/defrag
[hadoop@ambari02 ~]$ su root
Password:
[root@ambari02 hadoop]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/defrag
[root@ambari02 hadoop]# echo never > /sys/kernel/mm/redhat_transparent_hugepage/enabled
[root@ambari02 hadoop]# echo never > /sys/kernel/mm/transparent_hugepage/enabled
[root@ambari02 hadoop]# echo never > /sys/kernel/mm/transparent_hugepage/defrag
[root@ambari02 hadoop]#
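Note that the four echo commands take effect immediately but are lost on reboot. One common way to persist them on CentOS 6, as a sketch, is to re-apply them from /etc/rc.local as root; the loop guards against whichever sysfs path a particular kernel does not expose:

```shell
# Re-apply the THP settings at every boot via rc.local (CentOS 6):
cat >> /etc/rc.local <<'EOF'
for f in /sys/kernel/mm/redhat_transparent_hugepage/enabled \
         /sys/kernel/mm/redhat_transparent_hugepage/defrag \
         /sys/kernel/mm/transparent_hugepage/enabled \
         /sys/kernel/mm/transparent_hugepage/defrag; do
  # Only write to the paths that actually exist on this kernel:
  [ -e "$f" ] && echo never > "$f"
done
EOF
```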
10) Then re-run the host checks; once no warnings remain, click Next.
(2) Deploy HDFS
1) Select the components we want to install, then click Next.
http://192.168.80.144:8080/#/installer/step4
To demonstrate later how to add a new service, I select only HDFS + ZooKeeper + Ambari Metrics here.
2) If there is nothing else to change, keep clicking Next and Ambari will proceed with the automated installation and deployment.
http://192.168.80.144:8080/#/installer/step6
Take a look at the progress information as it goes.
Of course, advanced tuning and other custom settings can always be changed later, after the cluster has been set up.
Wait a while.
3) Some problems may appear along the way; just fix each one as it comes up, for example the one below.
Here the wizard detects the problem automatically. Click Next for now; we will handle it manually afterwards, on the ambari02 machine.
Traceback (most recent call last):
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/service_check.py", line 165, in <module>
    AMSServiceCheck().execute()
  File "/usr/lib/python2.6/site-packages/resource_management/libraries/script/script.py", line 219, in execute
    method(env)
  File "/usr/lib/python2.6/site-packages/ambari_commons/os_family_impl.py", line 89, in thunk
    return fn(*args, **kwargs)
  File "/var/lib/ambari-agent/cache/common-services/AMBARI_METRICS/0.1.0/package/scripts/service_check.py", line 160, in service_check
    raise Fail("Values %s and %s were not found in the response." % (random_value1, current_time))
resource_management.core.exceptions.Fail: Values 0.32072445008 and 1490951249000 were not found in the response.
In practice, the errors at this step differ from install to install; search for whatever specific error you get. See also:
Problems with Ambari and Hadoop encountered while installing Ambari
OK, at this point every component we just installed can start successfully.
(3) Deploy MapReduce and YARN
1) Next, let's demonstrate how to add new services.
2) Select the services to add and follow the prompts, clicking Next; Ambari will install and start them automatically.
Python script has been killed due to timeout after waiting 300 secs
For details, see:
Installing Ambari and deploying the cluster
Python script has been killed due to timeout after waiting 1800 secs
vim /etc/ambari-server/conf/ambari.properties (this error means the ambari-server SSH connection to the ambari-agent timed out during installation)
Change agent.package.install.task.timeout=1800 to 9600 (the value can be adjusted further based on network conditions).
Put simply, it comes down to your network speed.
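The same edit can be made non-interactively. This is a sketch that assumes the default ambari.properties location from the error above:

```shell
PROPS=/etc/ambari-server/conf/ambari.properties

# Raise the agent package-install timeout from 1800 to 9600 seconds:
sudo sed -i 's/^agent.package.install.task.timeout=.*/agent.package.install.task.timeout=9600/' "$PROPS"

# Confirm the new value, then restart ambari-server so it takes effect:
grep '^agent.package.install.task.timeout' "$PROPS"
sudo ambari-server restart
```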
Alternatively:
3) After a while, all of the services are up and running.
Ambari deployment issue: Ambari Metrics fails to start
How to delete a specified service in Ambari (illustrated guide)
Ambari installation: Metrics Collector and Metrics Monitor stuck at Install Pending ...
With that, the problem is solved, as shown below.
(4) Run a MapReduce program
In fact, during the MapReduce service check the system has already run a MapReduce job as a test.
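You can also run a job yourself to confirm. A sketch as the hadoop user, where the examples-jar path is an assumption that varies with the installed HDP version:

```shell
# Estimate pi with 2 map tasks of 10 samples each; any example job will do.
# The jar path below is typical for HDP 2.x, but check your own installation:
hadoop jar /usr/hdp/current/hadoop-mapreduce-client/hadoop-mapreduce-examples.jar pi 2 10
```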
OK, at this point our single-node cluster has been deployed successfully.