Work in progress: enabling Kerberos on Hadoop 2.6 (full-process learning notes)


Contents:

  1. A brief intro to the components involved

  2. Installation steps

  3. Checking error logs

  

1. What is Kerberos?

As Baidu explains:

  Kerberos is a network authentication protocol designed to provide strong authentication for client/server applications by means of secret-key cryptography. The authentication does not rely on the host operating system's own authentication, requires no trust based on host addresses, does not assume the physical security of every host on the network, and assumes that packets traveling on the network can be read, modified, and injected at will. Under these conditions Kerberos, acting as a trusted third-party authentication service, performs authentication using conventional cryptographic techniques (i.e., shared secret keys).

  The authentication exchange works as follows: the client sends a request to the Authentication Server (AS) asking for credentials for a given server, and the AS responds with those credentials encrypted with the client's key. The credentials consist of (1) a "ticket" for the server and (2) a temporary encryption key (the session key). The client transmits the ticket, which contains the client's identity and a copy of the session key encrypted with the server's key, to the server. The session key, now shared by client and server, can be used to authenticate the client or the server, to encrypt subsequent communication between the two parties, or to exchange a separate sub-session key for further encryption of the conversation.

  The exchange above needs only read access to the Kerberos database. Sometimes, however, records in the database must be modified, for example when adding new principals or changing a principal's key. Such changes are made through a protocol between the client and a third Kerberos server, the Kerberos administration server (KADM); that administration protocol is not covered here. There is also a protocol for maintaining multiple copies of the Kerberos database, which can be regarded as an implementation detail that keeps evolving to fit different database technologies.

  Hadoop offers two security modes: simple and kerberos.

  simple mode controls access by username and group only, and is trivial to impersonate.

  kerberos mode relies on key files generated by the Kerberos service and shared across machines; only a node holding a key file is trusted. This article focuses on the kerberos configuration. Kerberos has its own drawbacks: the configuration is complex, and changes are painful, since keys must be regenerated and redistributed to every machine. To check which mode a cluster is currently running, see the snippet below.
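A quick hedged check of the active mode (assumes the hadoop client is on PATH and HADOOP_CONF_DIR points at the cluster configuration):

hdfs getconf -confKey hadoop.security.authentication
# prints "simple" or "kerberos"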

Why do we also need SASL?

  SASL is a more general authentication interface whose design can accommodate many mainstream authentication schemes. Many projects use the SASL interface and flow for their authentication.

  When actually authenticating with Kerberos, we generally do not call the Kerberos interfaces directly; instead we go through more general standard interfaces such as GSSAPI or SASL. The reasons:

  •   the raw Kerberos interfaces are more fiddly
  •   SASL and GSSAPI are IETF standards that abstract the act of authentication at a higher level, so different authentication schemes can be plugged in flexibly.

2. Installation steps

  Prerequisite: a working Hadoop cluster is already installed.

  Hosts: 10.1.4.32 (namenode), 10.1.4.34, 10.1.4.36; hostnames are host32, host34, host36 respectively.

  Hadoop version: hadoop-2.6.0-cdh5.7.2

  Linux: CentOS 7

1. Install Kerberos, using 10.1.4.32 as the KDC

yum install krb5-libs krb5-server krb5-workstation

2. Edit three configuration files:

/etc/krb5.conf

# Configuration snippets may be placed in this directory as well
includedir /etc/krb5.conf.d/

[logging]
 default = FILE:/var/log/krb5libs.log
 kdc = FILE:/var/log/krb5kdc.log
 admin_server = FILE:/var/log/kadmind.log

[libdefaults]
 dns_lookup_realm = false
 ticket_lifetime = 24h
 renew_lifetime = 7d
 forwardable = true
 rdns = false
 pkinit_anchors = /etc/pki/tls/certs/ca-bundle.crt
 default_realm = STA.COM # change to your chosen realm
# default_ccache_name = KEYRING:persistent:%{uid} # comment this line out

[realms]
 STA.COM = {
  kdc = host32 # KDC hostname
  admin_server = host32 # ditto
 }

[domain_realm]
# .example.com = EXAMPLE.COM
# example.com = EXAMPLE.COM

 

 Only the annotated lines need changing. In particular, comment out "default_ccache_name = KEYRING:persistent:%{uid}"; otherwise later logins may fail to find valid credentials (I haven't dug into the exact cause):

hdfs GSS initiate failed [Caused by GSSException: No valid credentials provided (Mechanism level: Failed to find any Kerberos tgt)]
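Once a principal exists (they are created in steps 5 and 10 below), a quick hedged way to confirm the FILE-based credential cache took effect:

kinit udap@STA.COM   # authenticate as any existing principal
klist | head -1      # expect "Ticket cache: FILE:/tmp/krb5cc_<uid>", not "KEYRING:..."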

 

 Also note that tickets expire; this configuration file controls the ticket lifetime and the renewal window:

ticket_lifetime = 24h
renew_lifetime = 7d
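Within renew_lifetime, an expiring ticket can be refreshed without re-entering the password (a small hedged example):

kinit -R    # renew the current TGT; works until renew_lifetime (7d) is exhausted
klist       # check the new "Valid starting"/"Expires" times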

 

 2./var/kerberos/krb5kdc/kdc.conf

[kdcdefaults]
 kdc_ports = 88
 kdc_tcp_ports = 88

[realms]
 STA.COM = {
  #master_key_type = aes256-cts
  acl_file = /var/kerberos/krb5kdc/kadm5.acl
  dict_file = /usr/share/dict/words
  admin_keytab = /var/kerberos/krb5kdc/kadm5.keytab
  supported_enctypes = aes256-cts:normal aes128-cts:normal des3-hmac-sha1:normal arcfour-hmac:normal camellia256-cts:normal camellia128-cts:normal des-hmac-sha1:normal des-cbc-md5:normal des-cbc-crc:normal
 }

 

The defaults are fine as they are. One setting worth knowing here is supported_enctypes: if you ever need to change the encryption types, this is the place to do it; I kept the default.
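If you do change supported_enctypes, you can later verify which encryption types a principal's keys actually carry; a hedged check (assumes the principals from step 5 below already exist):

kadmin.local -q "getprinc udap/host32@STA.COM"
# the key list at the bottom shows one line per enctype, e.g. aes256-cts-hmac-sha1-96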

One step here is easy to miss: download the JDK's unlimited-strength crypto policy package and place it under the local $JAVA_HOME/jre/lib/security (if Hadoop runs with a manually specified JAVA_HOME, copy into that JDK's directory instead):

Java Cryptography Extension (JCE) Unlimited Strength Jurisdiction Policy Files

 

After downloading, copy the two jars, local_policy.jar and US_export_policy.jar, into that directory. Both clients and servers need this step; otherwise you will get initialization errors, or:

javax.security.auth.login.LoginException: Unable to obtain password from user
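A quick hedged way to confirm the policy jars are active for the JDK that Hadoop actually uses (prints 2147483647 when unlimited strength is enabled, 128 with the default restricted policy):

$JAVA_HOME/bin/jrunscript -e 'print(javax.crypto.Cipher.getMaxAllowedKeyLength("AES"))'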

 

3./var/kerberos/krb5kdc/kadm5.acl

*/admin@STA.COM    *

 

Just change the realm.

3. Create the Kerberos database

kdb5_util create -r STA.COM -s

 

If only one realm is defined, -r can be omitted.

If you ever need to rebuild the database, delete the principal* files under /var/kerberos/krb5kdc.
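A hedged sketch of a full rebuild (stop the daemons first so nothing holds the old database open):

service krb5kdc stop; service kadmin stop
rm -f /var/kerberos/krb5kdc/principal*
kdb5_util create -r STA.COM -s
service krb5kdc start; service kadmin start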

4. Start Kerberos and enable it at boot

chkconfig krb5kdc on
chkconfig kadmin on
service krb5kdc start
service kadmin start
service krb5kdc status
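On CentOS 7 the chkconfig/service commands above are forwarded to systemd; the equivalent native commands would be:

systemctl enable krb5kdc kadmin
systemctl start krb5kdc kadmin
systemctl status krb5kdc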

 

5. Create the Kerberos principals

Add principals (addprinc adds a new principal):

kadmin.local -q "addprinc -randkey udap/host32@STA.COM"
kadmin.local -q "addprinc -randkey udap/host34@STA.COM"
kadmin.local -q "addprinc -randkey udap/host36@STA.COM"
 
kadmin.local -q "addprinc -randkey HTTP/host32@STA.COM"
kadmin.local -q "addprinc -randkey HTTP/host34@STA.COM"
kadmin.local -q "addprinc -randkey HTTP/host36@STA.COM"
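With more hosts this gets repetitive; the same can be done with a small loop (same host list as above):

for h in host32 host34 host36; do
  kadmin.local -q "addprinc -randkey udap/${h}@STA.COM"
  kadmin.local -q "addprinc -randkey HTTP/${h}@STA.COM"
done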

 

Generate the keytabs (xst exports a principal's keys into a keytab):

kadmin.local -q "xst  -k udap-unmerged.keytab  udap/host32@ZHENGYX.COM"
kadmin.local -q "xst  -k udap-unmerged.keytab  udap/host34@STA.COM"
kadmin.local -q "xst  -k udap-unmerged.keytab  udap/host36@STA.COM"
 
kadmin.local -q "xst  -k HTTP.keytab  HTTP/host32@STA.COM"
kadmin.local -q "xst  -k HTTP.keytab  HTTP/host34@STA.COM"
kadmin.local -q "xst  -k HTTP.keytab  HTTP/host36@STA.COM"

 

Merge them into a single keytab and ship it to every Hadoop node:

$ ktutil
ktutil: rkt udap-unmerged.keytab
ktutil: rkt HTTP.keytab
ktutil: wkt udap.keytab
Inspect the result:
klist -ket  udap.keytab

 

rkt reads a keytab into ktutil's key list; wkt writes the list out to a new keytab.
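The merge can also be scripted by feeding the same commands to ktutil over stdin; a hedged non-interactive sketch:

printf 'rkt udap-unmerged.keytab\nrkt HTTP.keytab\nwkt udap.keytab\nquit\n' | ktutil
klist -ket udap.keytab   # verify the merged entries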

scp udap.keytab host32:/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop
scp udap.keytab host34:/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop
scp udap.keytab host36:/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop
ssh host32 "chown udap:udap /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab ;chmod 400 /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab"
ssh host34 "chown udap:udap /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab ;chmod 400 /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab"
ssh host36 "chown udap:udap /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab ;chmod 400 /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab"

 

6. Every Hadoop node now has the keytab; next, wire it into the XML configs. Getting these right took some suffering, so here are the final results:

core-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>
<property>
<name>fs.defaultFS</name>
<value>hdfs://host32</value>
</property>
<property>
<name>hadoop.tmp.dir</name>
<value>file:/home/udap/app/hadoop-2.6.0-cdh5.7.2/tmp</value>
<description>Abase for other temporary directories.</description>
</property>
<property>
  <name>hadoop.security.authentication</name>
  <value>kerberos</value>
</property>

<property>
  <name>hadoop.security.authorization</name>
  <value>true</value>
</property>
<property>
  <name>fs.permissions.umask-mode</name>
  <value>027</value>
</property>
</configuration>

 

fs.permissions.umask-mode sets the default umask for newly created files and directories, which complements Kerberos-based access control; a umask of 027 yields directory permission 750. See the separate post on Hadoop file permissions.
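The umask arithmetic is easy to verify with plain shell (directories start from 777, files from 666, and the umask bits are cleared):

( umask 027; mkdir /tmp/umask-demo; ls -ld /tmp/umask-demo; rmdir /tmp/umask-demo )
# prints drwxr-x--- ..., i.e. 777 & ~027 = 750; for files, 666 & ~027 = 640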

yarn-site.xml

<?xml version="1.0"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->
<configuration>

<!-- Site specific YARN configuration properties -->

        <property>
                <name>yarn.resourcemanager.hostname</name>
                <value>host32</value>
        </property>
        <property>
                <name>yarn.nodemanager.aux-services</name>
                <value>mapreduce_shuffle</value>
        </property>

<property>
  <name>yarn.resourcemanager.keytab</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>yarn.resourcemanager.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
<property>
  <name>yarn.nodemanager.keytab</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>yarn.nodemanager.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property></configuration>

 

hdfs-site.xml

<?xml version="1.0" encoding="UTF-8"?>
<?xml-stylesheet type="text/xsl" href="configuration.xsl"?>
<!--
  Licensed under the Apache License, Version 2.0 (the "License");
  you may not use this file except in compliance with the License.
  You may obtain a copy of the License at

    http://www.apache.org/licenses/LICENSE-2.0

  Unless required by applicable law or agreed to in writing, software
  distributed under the License is distributed on an "AS IS" BASIS,
  WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
  See the License for the specific language governing permissions and
  limitations under the License. See accompanying LICENSE file.
-->

<!-- Put site-specific property overrides in this file. -->

<configuration>

        <property>
                <name>dfs.namenode.secondary.http-address</name>
                <value>10.1.4.32:50090</value>
        </property>
        <property>
                <name>dfs.replication</name>
                <value>2</value>
        </property>
        <property>
                <name>dfs.namenode.name.dir</name>
                <value>file:/home/udap/app/hadoop-2.6.0-cdh5.7.2/tmp/dfs/name</value>
        </property>
        <property>
                <name>dfs.datanode.data.dir</name>
                <value>file:/home/udap/app/hadoop-2.6.0-cdh5.7.2/tmp/dfs/data</value>
        </property>
<property>
<name>dfs.datanode.max.xcievers</name>
<value>4096</value>
<description>max number of file which can be opened in a datanode</description>
</property>

<property>
  <name>dfs.block.access.token.enable</name>
  <value>true</value>
</property>
<property>
  <name>dfs.namenode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.namenode.kerberos.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
<property>
  <name>dfs.namenode.kerberos.https.principal</name>
  <value>HTTP/_HOST@STA.COM</value>
</property>
<property>
  <name>dfs.datanode.address</name>
  <value>0.0.0.0:1034</value>
</property>
<property>
  <name>dfs.datanode.http.address</name>
  <value>0.0.0.0:1036</value>
</property>
<property>
  <name>dfs.datanode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.datanode.kerberos.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
<property>
  <name>dfs.datanode.kerberos.https.principal</name>
  <value>HTTP/_HOST@STA.COM</value>
</property>


<!-- datanode SASL settings -->
<property>
  <name>dfs.http.policy</name>
  <value>HTTPS_ONLY</value>
</property>
<property>
  <name>dfs.data.transfer.protection</name>
  <value>integrity</value>
</property>

<!-- journalnode settings -->
<property>
  <name>dfs.journalnode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
<property>
  <name>dfs.journalnode.kerberos.internal.spnego.principal</name>
  <value>HTTP/_HOST@STA.COM</value>
</property>

<!--webhdfs-->
<property>
  <name>dfs.webhdfs.enabled</name>
  <value>true</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.principal</name>
  <value>HTTP/_HOST@STA.COM</value>
</property>

<property>
  <name>dfs.web.authentication.kerberos.keytab</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>

<property>
  <name>dfs.datanode.data.dir.perm</name>
  <value>700</value>
</property>
<property>
  <name>dfs.nfs.kerberos.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
<property>
  <name>dfs.nfs.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.secondary.https.address</name>
  <value>host32:50495</value>
</property>
<property>
  <name>dfs.secondary.https.port</name>
  <value>50495</value>
</property>
<property>
  <name>dfs.secondary.namenode.keytab.file</name>
  <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
<property>
  <name>dfs.secondary.namenode.kerberos.https.principal</name>
  <value>udap/_HOST@STA.COM</value>
</property>
</configuration>

 

 

 7. Install the Kerberos client on all nodes

yum install krb5-workstation krb5-libs krb5-auth-dialog

 

After installation, just make sure each node's /etc/krb5.conf matches the KDC's. If hostnames do not resolve, remember to add them to /etc/hosts, as shown below.
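For this article's cluster the entries would be:

10.1.4.32 host32
10.1.4.34 host34
10.1.4.36 host36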

8. Enable SASL

Reportedly, HDFS versions below 2.6 need jsvc to secure the DataNodes, while 2.6 and later do not. I went straight to the openssl route.

On host32, run:

openssl req -new -x509 -keyout test_ca_key -out test_ca_cert -days 9999 -subj '/C=CN/ST=zhejiang/L=hangzhou/O=dtdream/OU=security/CN=zelda.com'

 

Copy the generated test_ca_key and test_ca_cert to every machine, then continue on each machine:

# generate a key pair for this host into "keystore"
keytool -keystore keystore -alias localhost -validity 9999 -genkey -keyalg RSA -keysize 2048 -dname "CN=zelda.com, OU=test, O=test, L=hangzhou, ST=zhejiang, C=cn"
# import the CA certificate into "truststore"
keytool -keystore truststore -alias CARoot -import -file test_ca_cert
# export a certificate signing request for the host key
keytool -certreq -alias localhost -keystore keystore -file cert
# sign the request with the test CA
openssl x509 -req -CA test_ca_cert -CAkey test_ca_key -in cert -out cert_signed -days 9999 -CAcreateserial -passin pass:changeit
# import the CA cert and the signed host cert back into "keystore"
keytool -keystore keystore -alias CARoot -import -file test_ca_cert
keytool -keystore keystore -alias localhost -import -file cert_signed

 

This generates keystore and truststore in the working directory; copy them to every host, then fill in Hadoop's ssl-client.xml and ssl-server.xml configuration files.

Copy the corresponding .example files and edit them.
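For reference, a minimal hedged ssl-client.xml sketch; the paths follow this article's layout, and the passwords are placeholders for whatever you entered when running keytool above:

<configuration>
  <property>
    <name>ssl.client.truststore.location</name>
    <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/truststore</value>
  </property>
  <property>
    <name>ssl.client.truststore.password</name>
    <value>changeit</value>
  </property>
  <property>
    <name>ssl.client.keystore.location</name>
    <value>/home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/keystore</value>
  </property>
  <property>
    <name>ssl.client.keystore.password</name>
    <value>changeit</value>
  </property>
  <property>
    <name>ssl.client.keystore.keypassword</name>
    <value>changeit</value>
  </property>
</configuration>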

9. Start HDFS. If anything fails, check the HDFS logs and the KDC logs: /var/log/krb5kdc.log and /var/log/kadmind.log. The web UI is now at https://10.1.4.32:50470

10. Add a principal and access HDFS

kadmin.local
kadmin.local: addprinc udap@STA.COM

 

Then authenticate as it:

kinit udap@STA.COM

 

If anything misbehaves, check whether the current credentials look right:

[udap@host32 hadoop]$ klist
Ticket cache: FILE:/tmp/krb5cc_1005
Default principal: udap@STA.COM

Valid starting       Expires              Service principal
2018-12-14T16:20:55  2018-12-15T16:20:55  krbtgt/STA.COM@STA.COM

 

Logging in as different principals maps to different users on HDFS:

[udap@host32 bin]$ ./hdfs dfs -ls /
Picked up _JAVA_OPTIONS: -Djava.net.preferIPv4Stack=true
18/12/14 16:22:07 WARN util.NativeCodeLoader: Unable to load native-hadoop library for your platform... using builtin-java classes where applicable
Found 3 items
drwxr-xr-x   - cgf  cgf                 0 2018-12-14 08:52 /cgf
drwxr-xr-x   - hx   hx                  0 2018-12-14 10:35 /hx
drwxr-xr-x   - udap supergroup          0 2018-12-13 11:33 /udap

 

11. To add another keytab for passwordless login, proceed as follows (this part is adapted from https://my.oschina.net/psuyun/blog/333077):

Run:
ktutil
add_entry -password -p hadoop/admin@psy.com -k 3 -e aes256-cts-hmac-sha1-96
Explanation: -k is the key version number, -e the encryption type, -password derives the key from a password.
Example:
add_entry -password -p host/admin@psy.com -k 1 -e aes256-cts-hmac-sha1-96

write_kt <name>.keytab
Writes the accumulated keys into "<name>.keytab".
Example:
write_kt /hadoop-data/etc/hadoop/hadoop.keytab

Log in without a password:
kinit -kt username.keytab username
Example:
kinit -kt /hadoop-data/etc/hadoop/hadoop.keytab hadoop/admin
Note this differs from:
kinit -t username.keytab username   (this form still prompts for a password)
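One common use of the passwordless form is keeping a long-running session authenticated from cron; a hedged sketch using this article's keytab path:

# crontab entry: re-authenticate every 12 hours (ticket_lifetime is 24h)
0 */12 * * * kinit -kt /home/udap/app/hadoop-2.6.0-cdh5.7.2/etc/hadoop/udap.keytab udap/host32@STA.COM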

 

3. Checking error logs

1. Once users authenticate as different principals, you effectively get multi-tenant isolation on HDFS.

The possible error messages are too varied to enumerate here; the specifics are always in the logs.

2. There may be things I have forgotten to revisit; to be continued...

