1. Configure the KDC Service
Since the machines are on an internal network, install from rpm packages. Required packages:
Server: krb5-server, krb5-workstation, krb5-libs, libkadm5
Client: krb5-workstation, krb5-libs, libkadm5
Download: http://mirror.centos.org/centos/7/updates/x86_64/Packages
Installation may fail with a missing dependency on the "words" rpm; install it first: rpm -ivh words-3.0-22.el7.noarch.rpm
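A minimal install sketch (the rpm file names below are illustrative; substitute the exact versions downloaded from the mirror, installing the libraries before the tools that depend on them):

# All nodes: libraries first, then the client tools
rpm -ivh krb5-libs-*.el7.x86_64.rpm libkadm5-*.el7.x86_64.rpm
rpm -ivh krb5-workstation-*.el7.x86_64.rpm
# KDC server node only
rpm -ivh krb5-server-*.el7.x86_64.rpm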
- KDC configuration (server side)
vim /var/kerberos/krb5kdc/kdc.conf
# Only the settings below need changing: set the realm to HADOOP.COM, the ticket lifetime to 1 day, and the renewable lifetime to 7 days
[realms]
HADOOP.COM = {
#master_key_type = aes256-cts
max_life = 1d
max_renewable_life = 7d
acl_file = /var/kerberos/krb5kdc/kadm5.acl
dict_file = /usr/share/dict/words
admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
}
- krb5.conf configuration (client side)
vim /etc/krb5.conf
[libdefaults]
    pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
    default_realm = HADOOP.COM
    udp_preference_limit = 1
    # default_ccache_name = ...

[realms]
    HADOOP.COM = {
        kdc = hadoop001          # hostname of the KDC server
        admin_server = hadoop001 # hostname of the admin server
    }

# Copy the krb5 config file to every other node
scp /etc/krb5.conf root@hadoop00x:/etc/
- Initialize the KDC database
# Create the KDC database; you will be prompted to set a master password (here: 123456)
kdb5_util create -s
# Create the administrator principal
kadmin.local -q "addprinc admin/admin@HADOOP.COM"
# Grant the Kerberos admin full privileges
vim /var/kerberos/krb5kdc/kadm5.acl
*/admin@HADOOP.COM *
# Start the services and enable them at boot
systemctl enable krb5kdc
systemctl enable kadmin
systemctl start krb5kdc
systemctl start kadmin
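Before moving on, a quick end-to-end check (a sketch, using the admin principal and password created above):

# Obtain a ticket as the admin principal (enter the password set with addprinc)
kinit admin/admin@HADOOP.COM
# A krbtgt/HADOOP.COM@HADOOP.COM ticket should now be listed
klist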
2. Enable Kerberos Authentication on CDH
1) Create the CM admin principal (remember the password; it is needed later)
kadmin.local -q "addprinc cloudera-scm/admin"
2) Open the CM web UI and start the Enable Kerberos wizard
3) Confirm that all of the listed prerequisites are already in place
4) KDC type: MIT KDC; Kerberos encryption types: aes128-cts, des3-hmac-sha1, arcfour-hmac; enter the host on which the KDC service runs
5) Leave "Manage krb5.conf through Cloudera Manager" unchecked
6) Enter the CM Kerberos admin account, then click Continue until the setup finishes
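Once the wizard finishes, CM will have generated per-service principals automatically (see the next section); as a quick sketch, they can be confirmed on the KDC host with:

# List all principals, including the ones CM just generated
kadmin.local -q "listprincs"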



3. Common Kerberos Commands
- Create a user and a keytab file
# Create the Linux user
useradd -m baron
echo "123456" | passwd baron --stdin
# Create the Kerberos principal
kadmin.local -q "addprinc -pw 123456 baron"
# Generate the keytab file
kadmin.local
ktadd -k /home/baron/baron.keytab -norandkey baron
# Inspect the keytab file
klist -kt /home/baron/baron.keytab
- While enabling Kerberos, CM automatically creates the required principals; access to any cluster resource must be authenticated with a matching account and password, otherwise it cannot pass Kerberos authentication
# List all existing principals
kadmin.local -q "list_principals"
# Create an hdfs superuser; usually one principal per service, and the corresponding Linux user must be created on every node
kadmin.local -q "addprinc hdfs"
kadmin.local
ktadd -k /home/hdfs/hdfs.keytab -norandkey hdfs
# Copy the keytab to every node
scp -r /home/hdfs/hdfs.keytab root@hadoop00x:/home/hdfs/
# Run kinit on every node
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
# or interactively: kinit hdfs, then enter the password
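After kinit, the ticket cache can be inspected or cleared with the standard tools:

# Show the cached ticket; hdfs@HADOOP.COM should appear as the default principal
klist
# Discard the cached ticket (useful when testing re-authentication)
kdestroy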
4. Configuring Kerberos in DataX and kinit in Shell Scripts
- Usage in a shell script
#!/bin/bash
# Log in with kinit first
kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM
if ! klist -s
then
    echo "kerberos not initialized ----"
    exit 1
else
    # run the actual job here
    echo "success"
fi
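Since max_life above is one day, long-running hosts usually refresh the ticket from the keytab on a schedule; a crontab sketch (the 6-hour interval is an arbitrary choice):

# Re-obtain the ticket from the keytab every 6 hours
0 */6 * * * kinit -kt /home/hdfs/hdfs.keytab hdfs@HADOOP.COM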
- DataX configuration
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [
      {
        "reader": {
          "name": "hdfsreader",
          "parameter": {
            "path": "/workspace/*",
            "defaultFS": "hdfs://hadoop001:8020",
            "column": [
              { "index": 0, "type": "long" },
              { "index": 1, "type": "string" },
              { "index": 2, "type": "double" }
            ],
            "fileType": "text",
            "encoding": "UTF-8",
            "fieldDelimiter": ",",
            "haveKerberos": true,
            "kerberosKeytabFilePath": "/home/hdfs/hdfs.keytab",
            "kerberosPrincipal": "hdfs@HADOOP.COM"
          }
        },
        "writer": {
          "name": "streamwriter",
          "parameter": { "print": true }
        }
      }
    ]
  }
}
{
  "job": {
    "setting": { "speed": { "channel": 1 } },
    "content": [
      {
        "reader": {
          "name": "mysqlreader",
          "parameter": {
            "username": "root",
            "password": "root",
            "column": [ "uid", "event_type", "time" ],
            "splitPk": "uid",
            "connection": [
              {
                "table": [ "action" ],
                "jdbcUrl": [ "jdbc:mysql://node:3306/aucc" ]
              }
            ]
          }
        },
        "writer": {
          "name": "hdfswriter",
          "parameter": {
            "defaultFS": "hdfs://hadoop001:8020",
            "fileType": "text",
            "path": "/workspace",
            "fileName": "u",
            "column": [
              { "name": "uid", "type": "string" },
              { "name": "event_type", "type": "string" },
              { "name": "time", "type": "string" }
            ],
            "writeMode": "append",
            "fieldDelimiter": "\t",
            "compress": "bzip2",
            "haveKerberos": true,
            "kerberosKeytabFilePath": "/home/hdfs/hdfs.keytab",
            "kerberosPrincipal": "hdfs@HADOOP.COM"
          }
        }
      }
    ]
  }
}
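Either job file is launched through DataX's standard entry script; the job file name and DATAX_HOME below are placeholders for your own paths:

# Run the job (requires a valid Kerberos ticket or the keytab settings above)
python ${DATAX_HOME}/bin/datax.py mysql2hdfs.json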
5. Disable Kerberos
1. Stop all cluster services
2. Zookeeper:
1) Set Zookeeper's enableSecurity to false (uncheck it)
2) Set Zookeeper's Enable Kerberos Authentication to false (uncheck it)
3. Update the HDFS configuration (the equivalent XML entries are sketched after this list)
1) Set hadoop.security.authentication to simple
2) Set hadoop.security.authorization to false (uncheck it)
3) Change dfs.datanode.data.dir.perm (data directory permissions) back to 755
4) Change the DataNode ports back: dfs.datanode.address from 9866 (Kerberos) to 50010 (default); dfs.datanode.http.address from 1006 (Kerberos) to 9864 (default)
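For reference, the same changes expressed as plain Hadoop XML (in CM these are UI fields; the first two properties live in core-site.xml, the rest in hdfs-site.xml):

<!-- core-site.xml -->
<property><name>hadoop.security.authentication</name><value>simple</value></property>
<property><name>hadoop.security.authorization</name><value>false</value></property>
<!-- hdfs-site.xml -->
<property><name>dfs.datanode.data.dir.perm</name><value>755</value></property>
<property><name>dfs.datanode.address</name><value>0.0.0.0:50010</value></property>
<property><name>dfs.datanode.http.address</name><value>0.0.0.0:9864</value></property>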
4. Update the HBase configuration
1) Set hbase.security.authentication to simple
2) Set hbase.security.authorization to false (uncheck it)
3) Set hbase.thrift.security.qop to none
5. HBase may still fail to start; if so, relax the Zookeeper znode permissions by skipping the ACL check
In the Zookeeper configuration, search for "Java Configuration Options for Zookeeper Server" and add -Dzookeeper.skipACL=yes
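With the flag active, the Zookeeper directory permissions mentioned above can be reset so the flag can be removed later (a sketch; /hbase is HBase's default znode and hadoop001:2181 is this guide's Zookeeper address):

# Open an interactive Zookeeper shell
zkCli.sh -server hadoop001:2181
# ...then inside the shell, grant everyone full access to the HBase znode:
setAcl /hbase world:anyone:cdrwa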