Adding Kerberos to a CDH Cluster and Calling HDFS and Spark on YARN from Java Code
0x0 Background
CDH's default permission model is too simplistic to keep a Hadoop cluster secure, so Kerberos is introduced as the security service.
0x1 Installing the Kerberos Service
CDH provides a wizard for integrating Kerberos, but a Kerberos service must already exist before the integration. The following describes how to install one.
1. Install the Kerberos server and KDC (Key Distribution Center)
$ sudo apt-get install krb5-kdc krb5-admin-server
$ sudo dpkg-reconfigure krb5-kdc
During installation you will be asked to set a default realm. This is usually the domain name in upper case, for example:
EXAMPLE.COM
2. After installation, several configuration files are generated. The commonly used ones are:
Default KDC configuration file: /etc/krb5kdc/kdc.conf
Access control list (ACL): /etc/krb5kdc/kadm5.acl
Kerberos client configuration: /etc/krb5.conf
Edit krb5.conf and add the KDC and admin server IPs for the realm you just set:
[realms]
HDSC.COM = {
    kdc = 192.168.0.1
    admin_server = 192.168.0.1
}
Then edit the kadm5.acl file and set the ACL as follows:
# This file Is the access control list for krb5 administration.
# When this file is edited run /etc/init.d/krb5-admin-server restart to activate
# One common way to set up Kerberos administration is to allow any principal
# ending in /admin is given full administrative rights.
# To enable this, uncomment the following line:
*/admin *
Then restart the Kerberos admin service:
/etc/init.d/krb5-admin-server restart
3. Next, create the Kerberos database:
sudo krb5_newrealm
You will be prompted for a password during this step; remember it.
Once the Kerberos database has been created, you will see a number of new files under /var/lib/krb5kdc. If you ever need to delete and recreate the database, run:
rm -rf /var/lib/krb5kdc/principal*
sudo krb5_newrealm
4. On the master server, create the admin/admin principal:
kadmin.local -q "addprinc admin/admin"
Then you can log in with it:
kadmin -p admin/admin
The ACL we just set grants administrative rights to every */admin principal. This principal will later be handed to CDH: Cloudera Manager uses it to create the other principals it needs.
5. Commonly used commands
# Add a user:
kadmin: addprinc user
# The default realm name is appended to the principal's name by default
# Delete a user:
kadmin: delprinc user
# List principals:
kadmin: listprincs
# Add a service:
kadmin: addprinc service/server.fqdn
# The default realm name is appended to the principal's name by default
# Delete a service:
kadmin: delprinc service/server.fqdn
That completes the Kerberos installation.
0x2 Integrating CDH with Kerberos
1. First, every node in the cluster still needs a few prerequisite packages; the official documentation lists them.
I installed on Ubuntu 16.04 and ran the following commands:
On the Cloudera Manager Server node:
sudo apt-get install ldap-utils
sudo apt-get install krb5-user
On the Agent nodes:
sudo apt-get install krb5-user
Remember to edit /etc/krb5.conf and add the realm definition pointing at the Kerberos server, for example:
[realms]
HDSC.COM = {
    kdc = 192.168.0.1
    admin_server = 192.168.0.1
}
Then test that it works:
$ kinit admin/admin
Password for admin/admin@HDSC.COM:
$ klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: admin/admin@HDSC.COM

Valid starting       Expires              Service principal
04/12/2018 21:57:08  04/13/2018 07:57:08  krbtgt/HDSC.COM@HDSC.COM
	renew until 04/13/2018 21:57:06
2. About AES-256 encryption
JDK versions below 8u161 do not support AES-256 out of the box, so you either install the JCE Unlimited Strength policy extension or simply disable AES-256 in Kerberos.
For installing the policy files, see the official documentation:
https://www.cloudera.com/documentation/enterprise/5-12-x/topics/cm_sg_s2_jce_policy.html
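Before deciding, you can check whether your JDK already permits strong encryption. Here is a minimal sketch (the class name JcePolicyCheck is just an illustration) that prints the maximum AES key length the runtime allows: 128 means the unlimited policy is not active, while 2147483647 means AES-256 tickets will work.

import javax.crypto.Cipher;

public class JcePolicyCheck {
    public static void main(String[] args) throws Exception {
        // 128 -> only the limited default policy is active
        // 2147483647 -> unlimited strength policy is in effect
        System.out.println("Max allowed AES key length: "
                + Cipher.getMaxAllowedKeyLength("AES"));
    }
}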
I took the other approach: disabling AES-256 in Kerberos.
Open the kdc.conf configuration file:
vim /etc/krb5kdc/kdc.conf
You will see something like the following; edit supported_enctypes and remove aes256-cts:normal:
[kdcdefaults]
    kdc_ports = 750,88

[realms]
    HDSC.COM = {
        database_name = /var/lib/krb5kdc/principal
        admin_keytab = FILE:/etc/krb5kdc/kadm5.keytab
        acl_file = /etc/krb5kdc/kadm5.acl
        key_stash_file = /etc/krb5kdc/stash
        kdc_ports = 750,88
        max_life = 10h 0m 0s
        max_renewable_life = 7d 0h 0m 0s
        master_key_type = des3-hmac-sha1
        supported_enctypes = aes256-cts:normal arcfour-hmac:normal des3-hmac-sha1:normal des-cbc-crc:normal des:normal des:v4 des:norealm des:onlyrealm des:afs3
        default_principal_flags = +preauth
    }
Then restart the KDC and the Kerberos admin server:
/etc/init.d/krb5-kdc restart
/etc/init.d/krb5-admin-server restart
At this point, however, a problem appeared: clients could no longer connect…
Per the official documentation, the Kerberos database has to be rebuilt (as described above). After rebuilding it and restarting the KDC and the Kerberos admin server, the problem was gone.
To check that the encryption types have actually changed, kinit again on a client host and inspect the ticket with klist -e:
$ kinit admin/admin
$ klist -e
If the output looks like this, AES-256 has been disabled:
Ticket cache: FILE:/tmp/krb5cc_800
Default principal: admin/admin@HDSC.COM

Valid starting       Expires              Service principal
04/13/2018 09:31:53  04/13/2018 19:31:53  krbtgt/HDSC.COM@HDSC.COM
	renew until 04/14/2018 09:31:52, Etype (skey, tkt): des3-cbc-sha1, arcfour-hmac
If it has not been disabled, the output looks more like the following, with AES-256 showing up in the Etype field:
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: test@Cloudera Manager

Valid starting     Expires            Service principal
05/19/11 13:25:04  05/20/11 13:25:04  krbtgt/Cloudera Manager@Cloudera Manager
	Etype (skey, tkt): AES-256 CTS mode with 96-bit SHA-1 HMAC, AES-256 CTS mode with 96-bit SHA-1 HMAC
3. Now you can log in to Cloudera Manager and configure Kerberos
Go through the wizard in the following order (screenshots omitted):
1. Enable Kerberos
2. Check all the boxes
3. Set the KDC-related properties
4. Not sure what this step does; leave it unchecked
5. Enter the admin principal's username and password
Click Continue and the setup begins.
0x3 Getting Started
First, verify that Kerberos is actually in effect.
The simplest check is to open the HDFS web UI and try to browse a directory: you will find that access is now denied.
Next, run the following Hadoop commands on the cluster:
# First log in to the Kerberos server and add an hdfs user:
$ kadmin
Authenticating as principal admin/admin@HDSC.COM with password.
Password for admin/admin@HDSC.COM:
# Add a user called hdfs (the principal will be hdfs@HDSC.COM)
kadmin: addprinc hdfs
# List all principals
kadmin: listprincs
# You will see that the hdfs user has been added:
admin/admin@HDSC.COM
cloudera-scm/admin@HDSC.COM
hdfs/csut2hdfs115vl@HDSC.COM
hdfs/csut2hdfs116vl@HDSC.COM
hdfs/csut2hdfs117vl@HDSC.COM
hdfs/csut2hdfs118vl@HDSC.COM
hdfs@HDSC.COM
hive/csut2hdfs115vl@HDSC.COM
hue/csut2hdfs115vl@HDSC.COM
kadmin/admin@HDSC.COM
kadmin/changepw@HDSC.COM
kadmin/csut2hdfs115vl@HDSC.COM
kiprop/csut2hdfs115vl@HDSC.COM
krbtgt/HDSC.COM@HDSC.COM
mapred/csut2hdfs115vl@HDSC.COM
...
# Exit kadmin
kadmin: exit
# Then make hdfs the default principal
$ kinit hdfs
$ klist
Ticket cache: FILE:/tmp/krb5cc_800
Default principal: hdfs@HDSC.COM

Valid starting       Expires              Service principal
04/13/2018 15:50:31  04/14/2018 01:50:31  krbtgt/HDSC.COM@HDSC.COM
	renew until 04/14/2018 15:50:29
# Now we can operate on Hadoop as the hdfs user
$ hadoop fs -ls /
Found 3 items
drwxr-xr-x   - hdfs supergroup          0 2018-03-09 08:46 /opt
drwxrwxrwt   - hdfs supergroup          0 2018-04-13 11:25 /tmp
drwxr-xr-x   - hdfs supergroup          0 2018-04-13 15:15 /user
$ hadoop fs -mkdir /tmp/test01
$ hadoop fs -ls /tmp
Found 5 items
drwxrwxrwx   - hdfs   supergroup          0 2018-04-13 15:51 /tmp/.cloudera_health_monitoring_canary_files
drwxr-xr-x   - yarn   supergroup          0 2018-04-13 11:25 /tmp/hadoop-yarn
drwx-wx-wx   - hive   supergroup          0 2018-03-09 10:20 /tmp/hive
drwxrwxrwt   - mapred hadoop              0 2018-04-13 11:25 /tmp/logs
drwxr-xr-x   - hdfs   supergroup          0 2018-04-13 15:52 /tmp/test01
# So we have both read and write access
0x4 Accessing HDFS from Java Code
Now let's look at how to use HDFS from Java.
After a fair amount of fiddling, it boils down to the following:
1. You need a keytab file (think of it as a certificate), downloaded to your local machine.
It is generated on the KDC server with:
kadmin: ktadd -k /opt/hdfs.keytab hdfs
Here -k specifies the path of the generated keytab and hdfs is the principal name. After the command finishes, an hdfs.keytab file appears under /opt. Note that, in my testing, once the keytab has been generated the hdfs principal can no longer log in with its password! On a client, log in with:
$ kinit -kt /opt/hdfs.keytab hdfs
2. You need the Kerberos configuration file /etc/krb5.conf, downloaded to your local machine.
3. You need hdfs-site.xml and core-site.xml downloaded locally (from /etc/hadoop/conf on the cluster).
4. I put these four files on my D: drive and wrote the following Java code.
(PS: it is actually safest to put hive-site.xml, yarn-site.xml, mapred-site.xml, etc. all into your local resources directory; that way the code needs far less manual setup.)
// Point the JVM at the Kerberos configuration file
System.setProperty("java.security.krb5.conf", "D:/krb5.conf");
Configuration conf = new Configuration();
// Add hdfs-site.xml to the configuration
conf.addResource(new Path("D:/hdfs-site.xml"));
// Add core-site.xml to the configuration
conf.addResource(new Path("D:/core-site.xml"));
// Set the HDFS URL
conf.set("fs.defaultFS", "hdfs://192.168.0.115:8020");
// Log in with the keytab
UserGroupInformation.setConfiguration(conf);
UserGroupInformation.loginUserFromKeytab("hdfs@HDSC.COM", "D:/hdfs.keytab");
// Obtain a FileSystem
FileSystem fs = FileSystem.get(conf);
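With the FileSystem in hand, HDFS can be used like any other client. Below is a minimal usage sketch continuing from the code above (the paths are just examples; FileStatus and FSDataOutputStream come from org.apache.hadoop.fs):

// List the root directory as the authenticated hdfs principal
for (FileStatus status : fs.listStatus(new Path("/"))) {
    System.out.println(status.getPath() + "  owner=" + status.getOwner());
}
// Create a test directory and write a small file into it
Path dir = new Path("/tmp/test01");
fs.mkdirs(dir);
try (FSDataOutputStream out = fs.create(new Path(dir, "hello.txt"))) {
    out.writeUTF("hello, secure hdfs");
}
// Long-running clients can refresh the ticket from the keytab when needed:
// UserGroupInformation.getLoginUser().checkTGTAndReloginFromKeytab();
fs.close();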
0x5 Accessing HDFS with Spark from Java Code
1. As in the previous section, you need the various configuration files:
Kerberos credential and configuration files: the keytab and krb5.conf
the full set of Hadoop configs: hive-site.xml, yarn-site.xml, mapred-site.xml, core-site.xml
2. The Maven dependencies must match your Hadoop and Spark versions exactly!
This bug took me three days to track down:
The spark-yarn 2.1.0 artifact I used depends on hadoop-yarn-api 2.2.0, while my cluster runs Hadoop 2.6.0. As a result, every Spark on YARN call failed, with no useful log output. Very frustrating.
3. On to the code
Remember to upload the contents of the /jars directory under your Spark installation to HDFS, then add the config: .config("spark.yarn.archive", "hdfs://hdfs-host:port/user/spark/jars")
This saves uploading Spark's dependency jars to HDFS on every job submission and speeds things up.
// Log in to Kerberos
System.setProperty("java.security.krb5.conf", "D:/krb5.conf");
UserGroupInformation.loginUserFromKeytab("hdfs@HDSC.COM", "D:/hdfs.keytab");
System.out.println(UserGroupInformation.getLoginUser());
// Start Spark on YARN in client mode
SparkSession spark = SparkSession.builder()
        .master("yarn")
        .config("spark.yarn.archive", "hdfs://192.168.0.115:8020/user/spark/jars")
        // .master("local")
        .appName("CarbonData")
        .getOrCreate();
System.out.println("------------------------ Spark on YARN started ---------------------");
spark.read()
        .textFile("hdfs://192.168.0.115:8020/opt/guoxiang/event_log_01.csv")
        .show();
0x6 Troubleshooting
1. User ID below 1000
Exception message:
main : run as user is hdfs
main : requested yarn user is hdfs
Requested user hdfs is not whitelisted and has id 986, which is below the minimum allowed 1000
Solution:
In Cloudera Manager, go to YARN -> Configuration -> min.user.id and change 1000 to 0.
2. Requested user hdfs is banned
main : run as user is hdfs
main : requested yarn user is hdfs
Requested user hdfs is banned
Solution:
In Cloudera Manager, go to YARN -> Configuration -> banned.users and remove hdfs from the blacklist.
3. File /etc/hadoop/conf.cloudera.yarn/topology.py not found
Error message:
WARN net.ScriptBasedMapping: Exception running /etc/hadoop/conf.cloudera.yarn/topology.py csut2hdfs117vl
java.io.IOException: Cannot run program "/etc/hadoop/conf.cloudera.yarn/topology.py" (in directory "D:\workspace\sparkapi"): CreateProcess error=2, The system cannot find the file specified.
	at java.lang.ProcessBuilder.start(ProcessBuilder.java:1048)
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:482)
	at org.apache.hadoop.util.Shell.run(Shell.java:455)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:715)
	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.runResolveCommand(ScriptBasedMapping.java:251)
	at org.apache.hadoop.net.ScriptBasedMapping$RawScriptBasedMapping.resolve(ScriptBasedMapping.java:188)
	at org.apache.hadoop.net.CachedDNSToSwitchMapping.resolve(CachedDNSToSwitchMapping.java:119)
Solution:
In the core-site.xml you copied locally, the following property
<property>
    <name>net.topology.script.file.name</name>
    <value>/etc/hadoop/conf.cloudera.yarn/topology.py</value>
</property>
needs to be commented out.
4. Hive cannot be instantiated
Error message:
Exception in thread "main" java.lang.IllegalArgumentException: Error while instantiating 'org.apache.spark.sql.hive.HiveExternalCatalog': at org.apache.spark.sql.internal.SharedState$.org$apache$spark$sql$internal$SharedState$$reflect(SharedState.scala:169) at org.apache.spark.sql.internal.SharedState.<init>(SharedState.scala:86) at org.apache.spark.sql.CarbonSession$$anonfun$sharedState$1.apply(CarbonSession.scala:54) at org.apache.spark.sql.CarbonSession$$anonfun$sharedState$1.apply(CarbonSession.scala:54) ...... Caused by: java.io.IOException: 拒絕訪問。 at java.io.WinNTFileSystem.createFileExclusively(Native Method) at java.io.File.createTempFile(File.java:2024) at org.apache.hadoop.hive.ql.session.SessionState.createTempFile(SessionState.java:818) at org.apache.hadoop.hive.ql.session.SessionState.start(SessionState.java:513) ... 35 more
Solution:
# In the bin directory of your local Hadoop installation (Windows client), run:
winutils chmod -R 777 /tmp
# Then add the following to your Java code:
System.setProperty("HADOOP_USER_NAME", "hdfs");
System.setProperty("user.name", "hdfs");
5. Notes on integrating CarbonData
Upload the CarbonData jars to hdfs://hdfs-host:port/user/spark/jars (choose whatever path you like), and remember to set the following when building the SparkSession:
.config("spark.yarn.archive", "hdfs://hdfs-host:port/user/spark/jars")
6. Invalid Kerberos principal on the server side
Exception message:
Server has invalid Kerberos principal: hdfs/cdevhdfs181vl@HDSC.COM
This error is actually caused by local DNS resolution: map each cluster hostname to its IP address in the hosts file and the problem goes away.
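A quick way to see what your client actually resolves is a lookup from Java; a small sketch (substitute the hostname from your own error message):

import java.net.InetAddress;

public class DnsCheck {
    public static void main(String[] args) throws Exception {
        // The server principal is hdfs/<canonical hostname>@REALM, so the client
        // must resolve the NameNode/DataNode address back to exactly that name.
        InetAddress addr = InetAddress.getByName("cdevhdfs181vl");
        System.out.println("address: " + addr.getHostAddress());
        System.out.println("canonical hostname: " + addr.getCanonicalHostName());
    }
}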
7. netty-all version conflict
Exception message:
First, a Netty connection exception:
Error sending result RpcResponse{requestId=5249573718471368854, body=NioManagedBuffer{buf=java.nio.HeapByteBuffer[pos=0 lim=47 cap=64]}} to /192.168.0.184:23386; closing connection
java.lang.AbstractMethodError: null
	at io.netty.util.ReferenceCountUtil.touch(ReferenceCountUtil.java:77) ~[netty-all-4.1.23.Final.jar:4.1.23.Final]
	at io.netty.channel.DefaultChannelPipeline.touch(DefaultChannelPipeline.java:116) ~[netty-all-4.1.23.Final.jar:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:810) [netty-all-4.1.23.Final.jar:4.1.23.Final]
	at io.netty.channel.AbstractChannelHandlerContext.write(AbstractChannelHandlerContext.java:723) [netty-all-4.1.23.Final.jar:4.1.23.Final]
Followed by this exception:
Diagnostics: Exception from container-launch.
Container id: container_1524019039260_0028_02_000001
Exit code: 10
Stack trace: ExitCodeException exitCode=10:
	at org.apache.hadoop.util.Shell.runCommand(Shell.java:601)
	at org.apache.hadoop.util.Shell.run(Shell.java:504)
	at org.apache.hadoop.util.Shell$ShellCommandExecutor.execute(Shell.java:786)
	at org.apache.hadoop.yarn.server.nodemanager.LinuxContainerExecutor.launchContainer(LinuxContainerExecutor.java:373)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:302)
	at org.apache.hadoop.yarn.server.nodemanager.containermanager.launcher.ContainerLaunch.call(ContainerLaunch.java:82)
	at java.util.concurrent.FutureTask.run(FutureTask.java:266)
	at java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1149)
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:624)
	at java.lang.Thread.run(Thread.java:748)
After testing, this turned out to be a conflict between the netty-all pulled in by the Spring Boot project and the netty-all that ships with Hadoop. Pin the netty-all version in Maven to match the one bundled with Hadoop and the problem goes away.
I found a very good blog post on this topic, recommended reading:
https://blog.csdn.net/u011026329/article/details/79167884
Official documentation for enabling Kerberos on CDH:
https://www.cloudera.com/documentation/enterprise/5-12-x/topics/cm_sg_intro_kerb.html
Official documentation for installing Kerberos on Ubuntu:
https://help.ubuntu.com/community/Kerberos