Securing a Hadoop Cluster with Kerberos



                                   Author: 尹正傑

Copyright notice: this is an original work; reposting is declined. Violations will be pursued under the law.

 

 

 

  In the previous post we configured Kerberos and prepared to use it to authenticate Hadoop users. To secure the cluster with Kerberos, the Kerberos information must be added to the relevant Hadoop configuration files, which is what wires Kerberos into Hadoop.

 

一.Mapping service principals

  Kerberos principals are mapped to local operating-system user names using the rules listed in core-site.xml under the "hadoop.security.auth_to_local" parameter. The default rule (named "DEFAULT") translates service principals to the first component of their name.

  Remember that a principal name consists of either two components, as in "jason@YINZHENGJIE.COM", or three, as in "hdfs/hadoop101.yinzhengjie.com@YINZHENGJIE.COM". Hadoop maps each Kerberos principal name to a local user name.

  If the first component of the service principal name (hdfs in this example) is identical to the OS user name, no additional rule is needed; the DEFAULT rule suffices. Note that principals can also be mapped to user names by configuring the "auth_to_local" parameter in the krb5.conf file.

  If the service principal's name differs from its OS user name, the "hadoop.security.auth_to_local" parameter must be configured with rules that translate the principal name into the OS user name. As mentioned earlier, the default value of this parameter is DEFAULT.

  A rule for translating a service principal name consists of the following three parts:
    Base:
      The base specifies how many components the service principal name has, followed by a colon and a pattern that builds the user name from the principal name. In the pattern, "$0" stands for the realm, "$1" for the first component of the principal name, and "$2" for the second.
      The format is "[<number>:<string>]"; applying it to the principal name yields the translated name, also called the initial local name. A few examples:
        With the base [1:$1.$0], the UPN "jason@YINZHENGJIE.COM" yields the initial local name "jason.YINZHENGJIE.COM".
        With the base [2:$1@$0], the SPN "hdfs/hadoop101.yinzhengjie.com@YINZHENGJIE.COM" yields the initial local name "hdfs@YINZHENGJIE.COM".
    Filter:
      The filter (or acceptance filter) is a regular expression, in parentheses, that the generated string must match for the rule to apply. For example, the filter (.*YINZHENGJIE\.COM) matches every string ending in "YINZHENGJIE.COM", such as "jason.YINZHENGJIE.COM" and "hdfs.YINZHENGJIE.COM".
    Substitution:
      This is a sed-like substitution command that replaces the matched regular expression with a fixed string. The full specification of a rule is: "[<number>:<string>](<regex>)s/<pattern>/<replacement>/".
      Part of the regular expression may be wrapped in parentheses and referenced in the replacement string by a number such as "\1". As with the ordinary Linux substitution command "s/<pattern>/<replacement>/g", a trailing g requests global replacement.
      Here is how the substitution command translates the initial local name "jason.YINZHENGJIE.COM" under various rules:
        "s/(.*)\.YINZHENGJIE.COM/\1/" produces "jason";
        "s/.YINZHENGJIE.COM//" produces "jason";
      Multiple rules may be supplied; once a principal matches a rule, the remaining rules are skipped. A DEFAULT rule can be placed at the end and is applied when no earlier rule matched.
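Since the substitution step behaves like sed, the two example rules above can be checked directly from a shell (a sketch; the realm and names are the examples used in this post):

```shell
# Rule "s/(.*)\.YINZHENGJIE\.COM/\1/" applied to the initial local name:
echo "jason.YINZHENGJIE.COM" | sed -E 's/(.*)\.YINZHENGJIE\.COM/\1/'   # prints: jason

# Rule "s/\.YINZHENGJIE\.COM//" simply deletes the realm suffix:
echo "jason.YINZHENGJIE.COM" | sed 's/\.YINZHENGJIE\.COM//'            # prints: jason
```

Both rules reduce the initial local name to the plain user "jason", which is exactly what the mapping is meant to produce.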

  Users can be mapped to groups through the "hadoop.security.group.mapping" parameter. The default mapping implementation looks up user-to-group memberships with a local shell command.

  Because this default implementation uses the Linux facilities to establish group membership, the groups must be configured on every server that runs an authenticated service, such as the NameNode, ResourceManager and DataNodes. Group information must be consistent across the cluster.
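The default shell-based mapping resolves a user's groups roughly the way `id` does (a sketch; the exact command Hadoop shells out to may differ by version, and the user below is simply whoever runs the snippet):

```shell
# Roughly what the default group-mapping implementation asks the OS for a user:
id -Gn "$(id -un)"
```

If this list differs between nodes for the same user, HDFS authorization decisions will differ too, which is why the text insists on consistent group information cluster-wide.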

  If you need groups that are defined on an LDAP server (such as Active Directory) rather than on the Hadoop cluster itself, use the "LdapGroupsMapping" implementation.

  Recommended reading:
    https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-common/core-default.xml
    https://web.mit.edu/kerberos/krb5-latest/doc/admin/conf_files/krb5_conf.html

1>.Creating the service principals

[root@kdc.yinzhengjie.com ~]# kadmin.local 
Authenticating as principal root/admin@YINZHENGJIE.COM with password.
kadmin.local: 
kadmin.local:  listprincs
HTTP/kdc.yinzhengjie.com@YINZHENGJIE.COM
K/M@YINZHENGJIE.COM
hdfs/kdc.yinzhengjie.com@YINZHENGJIE.COM
kadmin/admin@YINZHENGJIE.COM
kadmin/changepw@YINZHENGJIE.COM
kadmin/kdc.yinzhengjie.com@YINZHENGJIE.COM
kiprop/kdc.yinzhengjie.com@YINZHENGJIE.COM
krbtgt/YINZHENGJIE.COM@YINZHENGJIE.COM
root/admin@YINZHENGJIE.COM
kadmin.local:  
kadmin.local:  addprinc -randkey nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  addprinc -randkey web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
WARNING: no policy specified for web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM; defaulting to no policy
Principal "web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM" created.
kadmin.local:  
kadmin.local:  listprincs
HTTP/kdc.yinzhengjie.com@YINZHENGJIE.COM
K/M@YINZHENGJIE.COM
dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
hdfs/kdc.yinzhengjie.com@YINZHENGJIE.COM
kadmin/admin@YINZHENGJIE.COM
kadmin/changepw@YINZHENGJIE.COM
kadmin/kdc.yinzhengjie.com@YINZHENGJIE.COM
kiprop/kdc.yinzhengjie.com@YINZHENGJIE.COM
krbtgt/YINZHENGJIE.COM@YINZHENGJIE.COM
nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
root/admin@YINZHENGJIE.COM
snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
kadmin.local:  
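Creating one principal per host by hand quickly becomes repetitive. The addprinc commands can be generated in a loop and fed to kadmin.local in one batch (a sketch; the host list, domain and output file name are assumptions matching this cluster):

```shell
# Generate one "addprinc -randkey" command per DataNode host.
REALM="YINZHENGJIE.COM"
DOMAIN="yinzhengjie.com"
for host in hadoop101 hadoop102 hadoop103 hadoop104 hadoop105; do
  printf 'addprinc -randkey dn/%s.%s@%s\n' "$host" "$DOMAIN" "$REALM"
done > create_dn_principals.txt

cat create_dn_principals.txt
# Review the file, then run:  kadmin.local < create_dn_principals.txt
```

The same loop works for the nn/snn/web principals by swapping the prefix and host list.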

2>.Creating the keytab file (I am cutting a corner here by bundling all service principals into a single keytab; for production, splitting them into several keytab files is recommended)

[root@kdc.yinzhengjie.com ~]# kadmin.local 
Authenticating as principal root/admin@YINZHENGJIE.COM with password.
kadmin.local: 
kadmin.local:  listprincs
HTTP/kdc.yinzhengjie.com@YINZHENGJIE.COM
K/M@YINZHENGJIE.COM
dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM
dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
hdfs/kdc.yinzhengjie.com@YINZHENGJIE.COM
kadmin/admin@YINZHENGJIE.COM
kadmin/changepw@YINZHENGJIE.COM
kadmin/kdc.yinzhengjie.com@YINZHENGJIE.COM
kiprop/kdc.yinzhengjie.com@YINZHENGJIE.COM
krbtgt/YINZHENGJIE.COM@YINZHENGJIE.COM
nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
root/admin@YINZHENGJIE.COM
snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
kadmin.local:   
kadmin.local:  xst -k hdfs.keytab dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM
Entry for principal dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop102.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop103.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop104.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal dn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal snn/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop101.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes256-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type aes128-cts-hmac-sha1-96 added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type des3-cbc-sha1 added to keytab WRFILE:hdfs.keytab.
Entry for principal web/hadoop105.yinzhengjie.com@YINZHENGJIE.COM with kvno 2, encryption type arcfour-hmac added to keytab WRFILE:hdfs.keytab.
kadmin.local:  
kadmin.local:  quit
[root@kdc.yinzhengjie.com ~]# 
[root@kdc.yinzhengjie.com ~]# ll
total 4
-rw------- 1 root root 3362 Oct  6 12:49 hdfs.keytab
[root@kdc.yinzhengjie.com ~]# 
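As noted above, a single keytab holding every key is a shortcut. A production-leaning alternative is one keytab per host that contains only that host's principals; the xst commands for such a layout can be generated the same way (a sketch; host names and realm follow this post's examples):

```shell
# One xst command per host, so each node later receives only its own keys.
REALM="YINZHENGJIE.COM"
DOMAIN="yinzhengjie.com"
for host in hadoop101 hadoop102 hadoop103 hadoop104 hadoop105; do
  echo "xst -k ${host}.keytab dn/${host}.${DOMAIN}@${REALM}"
done
# hadoop101 and hadoop105 would additionally need their nn/snn/web
# principals appended to their respective keytabs.
```

This limits the blast radius if a single node's keytab leaks: only that host's principals need to be re-keyed.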

3>.Verifying that the keytab file is usable and distributing it to the Hadoop cluster

[root@kdc.yinzhengjie.com ~]# ll
total 4
-rw------- 1 root root 3362 Oct  6 12:49 hdfs.keytab
[root@kdc.yinzhengjie.com ~]# 
[root@kdc.yinzhengjie.com ~]# scp hdfs.keytab hadoop101.yinzhengjie.com:~
root@hadoop101.yinzhengjie.com's password: 
hdfs.keytab                                                                                                                                                100% 3362     2.5MB/s   00:00    
[root@kdc.yinzhengjie.com ~]# 
[root@kdc.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ll
total 4
-rw------- 1 root root 3362 Oct  6 18:33 hdfs.keytab
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ansible all -m copy -a 'src=~/hdfs.keytab dest=/yinzhengjie/softwares/hadoop/etc/hadoop/conf'
hadoop105.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "84e8689784161efa5c1e59c60efbd826be8e482c", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "a84537a38eedd6db31d4359444d05a5a", 
    "mode": "0644", 
    "owner": "root", 
    "size": 3362, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601980761.27-8932-29340116574563/source", 
    "state": "file", 
    "uid": 0
}
hadoop101.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "84e8689784161efa5c1e59c60efbd826be8e482c", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "a84537a38eedd6db31d4359444d05a5a", 
    "mode": "0644", 
    "owner": "root", 
    "size": 3362, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601980761.28-8930-183396353685288/source", 
    "state": "file", 
    "uid": 0
}
hadoop103.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "84e8689784161efa5c1e59c60efbd826be8e482c", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "a84537a38eedd6db31d4359444d05a5a", 
    "mode": "0644", 
    "owner": "root", 
    "size": 3362, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601980761.3-8928-257802276412188/source", 
    "state": "file", 
    "uid": 0
}
hadoop102.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "84e8689784161efa5c1e59c60efbd826be8e482c", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "a84537a38eedd6db31d4359444d05a5a", 
    "mode": "0644", 
    "owner": "root", 
    "size": 3362, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601980761.24-8926-157496120508863/source", 
    "state": "file", 
    "uid": 0
}
hadoop104.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "84e8689784161efa5c1e59c60efbd826be8e482c", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "a84537a38eedd6db31d4359444d05a5a", 
    "mode": "0644", 
    "owner": "root", 
    "size": 3362, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601980761.3-8929-242677670963812/source", 
    "state": "file", 
    "uid": 0
}
[root@hadoop101.yinzhengjie.com ~]# 
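Note in the output above that the distributed keytab ends up with mode 0644, i.e. world-readable; anyone who can read a keytab can impersonate its principals. In production it should be readable only by the account that runs the HDFS daemons. A minimal sketch of the tightening step (the ansible line mirrors the paths used above; the local demo file is purely illustrative):

```shell
# Tighten the keytab on all nodes (same path as the copy task above):
#   ansible all -m file -a 'path=/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab mode=0400'
# Local illustration of the target permission bits:
touch demo.keytab
chmod 400 demo.keytab
ls -l demo.keytab   # permissions now read -r--------
rm -f demo.keytab
```

If the daemons run as a dedicated user such as hdfs, the owner should be changed accordingly as well.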

 

 

二.Adding the Kerberos configuration to the Hadoop configuration files

  To enable Kerberos authentication in the Hadoop cluster, Kerberos-related information must be added to the following configuration files:
    core-site.xml
    hdfs-site.xml
    yarn-site.xml

  These files configure HDFS and YARN to work with Kerberos. This post only enables Kerberos for the HDFS cluster; the procedure for a YARN cluster is similar.

1>.Editing the Hadoop core configuration file (core-site.xml) and distributing it to the cluster nodes

[root@hadoop101.yinzhengjie.com ~]# vim ${HADOOP_HOME}/etc/hadoop/core-site.xml
  ......

    <!-- The following parameters configure Kerberos -->
    <property>
        <name>hadoop.security.authentication</name>
        <value>kerberos</value>
        <description>Sets the cluster's authentication type; the default is "simple". Set this to "kerberos" to authenticate with Kerberos.</description>
    </property>

    <property>
        <name>hadoop.security.authorization</name>
        <value>true</value>
        <description>Determines whether authorization is enabled; the default is "false". It must be set to "true" to enforce secure operation.</description>
    </property>

    <property>
        <name>hadoop.security.auth_to_local</name>
        <value>
        RULE:[2:$1@$0](nn/.*@.*YINZHENGJIE.COM)s/.*/hdfs/
        RULE:[2:$1@$0](jn/.*@.*YINZHENGJIE.COM)s/.*/hdfs/
        RULE:[2:$1@$0](dn/.*@.*YINZHENGJIE.COM)s/.*/hdfs/
        RULE:[2:$1@$0](nm/.*@.*YINZHENGJIE.COM)s/.*/yarn/
        RULE:[2:$1@$0](rm/.*@.*YINZHENGJIE.COM)s/.*/yarn/
        RULE:[2:$1@$0](jhs/.*@.*YINZHENGJIE.COM)s/.*/mapred/
        DEFAULT
        </value>
        <description>Specifies the mapping rules that translate Kerberos principal names into OS user names.</description>
    </property>

    <property>
        <name>hadoop.rpc.protection</name>
        <value>privacy</value>
        <description>Specifies the RPC protection level. Three values are possible: "authentication" (the default; client and server merely authenticate each other), "integrity" (authentication plus data integrity), and "privacy" (authentication and integrity, plus encryption of the data transferred between client and server).</description>
    </property>

  ......
</configuration>
[root@hadoop101.yinzhengjie.com ~]# 
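The base patterns in the rules above can be traced by hand. For "nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM", the base [2:$1@$0] builds the initial local name from the first component plus the realm, and a matching rule's substitution then rewrites it to the OS user. A shell sketch of those two steps:

```shell
principal="nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM"
name="${principal%%/*}"      # $1 -> nn
realm="${principal##*@}"     # $0 -> YINZHENGJIE.COM
initial="${name}@${realm}"   # [2:$1@$0] -> nn@YINZHENGJIE.COM
echo "$initial"

# The rule's substitution s/.*/hdfs/ then maps it to the OS user:
echo "$initial" | sed 's/.*/hdfs/'   # prints: hdfs
```

This is why the NameNode, JournalNode and DataNode principals above all collapse onto the single hdfs account.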
[root@hadoop101.yinzhengjie.com ~]# ansible all -m copy -a "src=${HADOOP_HOME}/etc/hadoop/core-site.xml dest=${HADOOP_HOME}/etc/hadoop/"
hadoop105.yinzhengjie.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false, 
    "checksum": "61a71ecb08f9abcc8470b5b19eab1e738282b950", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "gid": 0, 
    "group": "root", 
    "mode": "0644", 
    "owner": "root", 
    "path": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "size": 5765, 
    "state": "file", 
    "uid": 0
}
hadoop101.yinzhengjie.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false, 
    "checksum": "61a71ecb08f9abcc8470b5b19eab1e738282b950", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "gid": 190, 
    "group": "systemd-journal", 
    "mode": "0644", 
    "owner": "12334", 
    "path": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "size": 5765, 
    "state": "file", 
    "uid": 12334
}
hadoop104.yinzhengjie.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false, 
    "checksum": "61a71ecb08f9abcc8470b5b19eab1e738282b950", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "gid": 0, 
    "group": "root", 
    "mode": "0644", 
    "owner": "root", 
    "path": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "size": 5765, 
    "state": "file", 
    "uid": 0
}
hadoop103.yinzhengjie.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false, 
    "checksum": "61a71ecb08f9abcc8470b5b19eab1e738282b950", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "gid": 0, 
    "group": "root", 
    "mode": "0644", 
    "owner": "root", 
    "path": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "size": 5765, 
    "state": "file", 
    "uid": 0
}
hadoop102.yinzhengjie.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false, 
    "checksum": "61a71ecb08f9abcc8470b5b19eab1e738282b950", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "gid": 0, 
    "group": "root", 
    "mode": "0644", 
    "owner": "root", 
    "path": "/yinzhengjie/softwares/hadoop/etc/hadoop/core-site.xml", 
    "size": 5765, 
    "state": "file", 
    "uid": 0
}
[root@hadoop101.yinzhengjie.com ~]# 

2>.Editing the HDFS configuration file (hdfs-site.xml) and distributing it to the cluster nodes

  The keytab file locations and principal names of the daemons must be configured in hdfs-site.xml. There is no need to configure a large number of DataNodes by hand: Hadoop provides a variable named "_HOST" that configures them dynamically, so each HDFS daemon does not have to be configured separately on every node of the cluster.

  When a user or service connects to the cluster, the "_HOST" variable resolves to the server's FQDN. Remember that ZooKeeper and Hive do not support the "_HOST" notation.
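Conceptually, each daemon substitutes its own FQDN (what `hostname -f` returns on that node) for _HOST at start-up; a sketch of the substitution as it would play out on hadoop101:

```shell
# On hadoop101, hostname -f would return hadoop101.yinzhengjie.com, so:
echo "nn/_HOST@YINZHENGJIE.COM" | sed 's/_HOST/hadoop101.yinzhengjie.com/'
# prints: nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM
```

This is why forward and reverse DNS must be correct on every node: a wrong FQDN yields a principal that does not exist in the KDC.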
[root@hadoop101.yinzhengjie.com ~]# vim ${HADOOP_HOME}/etc/hadoop/hdfs-site.xml
    ......

    <!-- The following parameters configure the Kerberos service principals -->
    <property>
        <name>dfs.namenode.kerberos.principal</name>
        <value>nn/_HOST@YINZHENGJIE.COM</value>
        <description>Specifies the NameNode's Kerberos service principal name. It is usually set to nn/_HOST@REALM.TLD. Each NameNode replaces _HOST with its own fully qualified host name at start-up. The _HOST placeholder allows the same configuration to be used on both NameNodes in an HA setup.</description>
    </property>

    <property>
        <name>dfs.secondary.namenode.kerberos.principal</name>
        <value>snn/_HOST@YINZHENGJIE.COM</value>
        <description>Specifies the Kerberos principal name of the Secondary NameNode.</description>
    </property>

    <property>
        <name>dfs.web.authentication.kerberos.principal</name>
        <value>web/_HOST@YINZHENGJIE.COM</value>
        <description>The server principal the NameNode uses for WebHDFS SPNEGO authentication. Required when WebHDFS and security are enabled.</description>
    </property>
   
    <property>
        <name>dfs.namenode.kerberos.internal.spnego.principal</name>
        <value>web/_HOST@YINZHENGJIE.COM</value>
        <description>The server principal the NameNode uses for Web UI SPNEGO authentication when Kerberos security is enabled. If unset, it defaults to "${dfs.web.authentication.kerberos.principal}".</description>
    </property>
   
    <property>
        <name>dfs.secondary.namenode.kerberos.internal.spnego.principal</name>
        <value>web/_HOST@YINZHENGJIE.COM</value>
        <description>The server principal the Secondary NameNode uses for Web UI SPNEGO authentication when Kerberos security is enabled. Like all other Secondary NameNode settings, it is ignored in an HA setup. Defaults to "${dfs.web.authentication.kerberos.principal}".</description>
    </property>

    <property>
        <name>dfs.datanode.kerberos.principal</name>
        <value>dn/_HOST@YINZHENGJIE.COM</value>
        <description>Specifies the DataNode service principal. It is usually set to dn/_HOST@REALM.TLD. Each DataNode replaces _HOST with its own fully qualified host name at start-up. The _HOST placeholder allows the same configuration to be used on all DataNodes.</description>
    </property>

    <property>
        <name>dfs.block.access.token.enable</name>
        <value>true</value>
        <description>If "true", access tokens are used as capabilities for accessing DataNodes. If "false", no access token is checked when accessing DataNodes. The default is "false".</description>
    </property>

    <!-- The following parameters specify the keytab files -->
    <property>
        <name>dfs.web.authentication.kerberos.keytab</name>
        <value>/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab</value>
        <description>The location of the keytab file for the HTTP service principal, i.e. the keytab holding the key for "dfs.web.authentication.kerberos.principal".</description>
    </property>

    <property>
        <name>dfs.namenode.keytab.file</name>
        <value>/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab</value>
        <description>The keytab file each NameNode daemon uses to log in as its service principal. The principal name is configured with "dfs.namenode.kerberos.principal".</description>
    </property>

    <property>
        <name>dfs.datanode.keytab.file</name>
        <value>/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab</value>
        <description>The keytab file each DataNode daemon uses to log in as its service principal. The principal name is configured with "dfs.datanode.kerberos.principal".</description>
    </property>

    <property>
        <name>dfs.secondary.namenode.keytab.file</name>
        <value>/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab</value>
        <description>The keytab file each Secondary NameNode daemon uses to log in as its service principal. The principal name is configured with "dfs.secondary.namenode.kerberos.principal".</description>
    </property>


    <!-- DataNode SASL configuration; without it the DataNodes may fail to start -->
    <property>
        <name>dfs.data.transfer.protection</name>
        <value>integrity</value>
        <description>A comma-separated list of SASL protection values used for secured connections to the DataNode when reading or writing block data. Possible values are "authentication" (authentication only, no integrity or privacy), "integrity" (authentication and integrity are enabled) and "privacy" (authentication, integrity and privacy are all enabled). If dfs.encrypt.data.transfer is set to true, it supersedes this setting and forces all connections to use a specialized encrypted SASL handshake. This property is ignored for connections to a DataNode listening on a privileged port; in that case, the use of the privileged port is assumed to establish sufficient trust.</description>
    </property>

    <property>
        <name>dfs.http.policy</name>
        <value>HTTP_AND_HTTPS</value>
        <description>Determines whether HDFS supports HTTPS (SSL). Possible values are "HTTP_ONLY" (the default; serve over HTTP only), "HTTPS_ONLY" (serve over HTTPS only; set on the DataNodes) and "HTTP_AND_HTTPS" (serve over both HTTP and HTTPS; set on the NameNode and Secondary NameNode).</description>
    </property>

   ......
</configuration>
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# 
[root@hadoop101.yinzhengjie.com ~]# ansible all -m copy -a "src=${HADOOP_HOME}/etc/hadoop/hdfs-site.xml dest=${HADOOP_HOME}/etc/hadoop/"
hadoop102.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "b342d14e02a6897590ce45681db0ca2ac692beb3", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/hdfs-site.xml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "64f6435d1a3370e743be63165a1f8428", 
    "mode": "0644", 
    "owner": "root", 
    "size": 11508, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601984867.8-10949-176752949394383/source", 
    "state": "file", 
    "uid": 0
}
hadoop101.yinzhengjie.com | SUCCESS => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": false, 
    "checksum": "b342d14e02a6897590ce45681db0ca2ac692beb3", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/hdfs-site.xml", 
    "gid": 190, 
    "group": "systemd-journal", 
    "mode": "0644", 
    "owner": "12334", 
    "path": "/yinzhengjie/softwares/hadoop/etc/hadoop/hdfs-site.xml", 
    "size": 11508, 
    "state": "file", 
    "uid": 12334
}
hadoop103.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "b342d14e02a6897590ce45681db0ca2ac692beb3", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/hdfs-site.xml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "64f6435d1a3370e743be63165a1f8428", 
    "mode": "0644", 
    "owner": "root", 
    "size": 11508, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601984867.84-10951-232652152197493/source", 
    "state": "file", 
    "uid": 0
}
hadoop105.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "b342d14e02a6897590ce45681db0ca2ac692beb3", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/hdfs-site.xml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "64f6435d1a3370e743be63165a1f8428", 
    "mode": "0644", 
    "owner": "root", 
    "size": 11508, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601984867.86-10955-87808746957801/source", 
    "state": "file", 
    "uid": 0
}
hadoop104.yinzhengjie.com | CHANGED => {
    "ansible_facts": {
        "discovered_interpreter_python": "/usr/bin/python"
    }, 
    "changed": true, 
    "checksum": "b342d14e02a6897590ce45681db0ca2ac692beb3", 
    "dest": "/yinzhengjie/softwares/hadoop/etc/hadoop/hdfs-site.xml", 
    "gid": 0, 
    "group": "root", 
    "md5sum": "64f6435d1a3370e743be63165a1f8428", 
    "mode": "0644", 
    "owner": "root", 
    "size": 11508, 
    "src": "/root/.ansible/tmp/ansible-tmp-1601984867.82-10952-228427124500370/source", 
    "state": "file", 
    "uid": 0
}
[root@hadoop101.yinzhengjie.com ~]# 

 

三.Verifying that the Kerberos configuration works

1>.After Kerberos is enabled, neither the command line nor the NameNode Web UI can access the HDFS cluster without authentication

  As shown in the figure below, without Kerberos authentication the HDFS client cannot access the HDFS cluster.

2>.After authenticating on the command line, the HDFS cluster can be accessed
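A hedged sketch of the authentication step (kinit, klist and kdestroy come from the Kerberos client packages; the keytab path and principal are the ones created above). The snippet only assembles and prints the command; run the printed command on a cluster node, then retry the HDFS access:

```shell
KEYTAB=/yinzhengjie/softwares/hadoop/etc/hadoop/conf/hdfs.keytab
PRINC="nn/hadoop101.yinzhengjie.com@YINZHENGJIE.COM"
echo "kinit -kt $KEYTAB $PRINC"
# After running the printed command:
#   klist            # confirms the ticket cache now holds a TGT
#   hdfs dfs -ls /   # should succeed once authenticated
#   kdestroy         # drops the ticket when done
```

Any principal present in the keytab can be used here; per the auth_to_local rules above, the nn/dn/snn principals all map to the hdfs superuser.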


 

