Please credit the source when reposting: http://www.cnblogs.com/xiaodf/
This document explains how to deploy a client for a big data platform on a node outside the cluster, where the platform has Kerberos authentication enabled. Through this client, users outside the cluster can use the services inside it, for example querying HDFS data in the cluster or submitting Spark jobs to run on it.
The deployment steps are as follows:
1. Copy the cluster's Hadoop component package to the client
Create the local directory /opt/cloudera/parcels:
mkdir -p /opt/cloudera/parcels
Copy the component package CDH-5.7.2-1.cdh5.7.2.p0.18 into /opt/cloudera/parcels (one way to do this is sketched after this step).
Enter the directory and create a symlink:
cd /opt/cloudera/parcels
ln -s CDH-5.7.2-1.cdh5.7.2.p0.18 CDH
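One possible way to fetch the parcel, assuming node1 is a cluster node holding it at Cloudera's default location, followed by a quick sanity check that the client layout works:
# Copy the parcel from a cluster node (adjust the source path if your layout differs)
scp -r node1:/opt/cloudera/parcels/CDH-5.7.2-1.cdh5.7.2.p0.18 /opt/cloudera/parcels/
# Sanity check: the bundled binaries should now resolve through the CDH symlink
/opt/cloudera/parcels/CDH/bin/hadoop version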
2. Copy the cluster's Hadoop-related configuration files to the client
Create the directory /etc/hadoop and copy the /etc/hadoop/conf folder from the cluster into it (node1 is a node inside the cluster):
mkdir /etc/hadoop
scp -r node1:/etc/hadoop/conf /etc/hadoop
Create the directory /etc/hive and copy the /etc/hive/conf folder into it:
mkdir /etc/hive
scp -r node1:/etc/hive/conf /etc/hive
Create the directory /etc/spark and copy the /etc/spark/conf folder into it:
mkdir /etc/spark
scp -r node1:/etc/spark/conf /etc/spark
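To confirm the copied configuration actually points at the cluster, one quick check is to look up the NameNode address; fs.defaultFS is the standard Hadoop 2 property name:
grep -A1 fs.defaultFS /etc/hadoop/conf/core-site.xml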
3. Copy the Kerberos configuration file krb5.conf from the cluster to the client
scp node1:/etc/krb5.conf /etc
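For orientation, a minimal krb5.conf looks roughly like the sketch below; the realm HADOOP.COM matches the principals used later in this document, while the KDC host is a placeholder, which is why copying the cluster's own file as above is the safest route:
[libdefaults]
    default_realm = HADOOP.COM
[realms]
    HADOOP.COM = {
        kdc = kdc-host.example.com          # placeholder: your actual KDC
        admin_server = kdc-host.example.com # placeholder: your actual admin server
    }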
4. Run the client script client.sh, whose contents are as follows:
export HADOOP_HOME=/opt/cloudera/parcels/CDH/lib/hadoop
export HADOOP_CONF=/etc/hadoop/conf
export HADOOP_CONF_DIR=/etc/hadoop/conf
export YARN_CONF_DIR=/etc/hadoop/conf
export SPARK_CONF_DIR=/etc/spark/conf
#export SPARK_HOME=/opt/cloudera/parcels/CDH/lib/spark
CDH_HOME="/opt/cloudera/parcels/CDH"
export PATH=$CDH_HOME/bin:$HADOOP_HOME/bin:$HADOOP_HOME/sbin/:$PATH
## Connect to Hive with beeline and run SQL queries
cd /opt/cloudera/parcels/CDH/bin
./beeline -u "jdbc:hive2://node7:10000/;principal=hive/node7@HADOOP.COM" --config /etc/hive/conf
## Run an HDFS command
#./hdfs --config /etc/hadoop/conf dfs -ls /
## Submit a Spark job
#cd /opt/cloudera/parcels/CDH/lib/spark/bin
#./spark-shell
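Beyond the interactive spark-shell above, a batch job can also be submitted to YARN. A minimal sketch using the stock SparkPi example follows; the examples jar path matches the usual CDH parcel layout and may need adjusting:
cd /opt/cloudera/parcels/CDH/lib/spark/bin
./spark-submit --master yarn --deploy-mode client \
    --class org.apache.spark.examples.SparkPi \
    /opt/cloudera/parcels/CDH/lib/spark/lib/spark-examples.jar 10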
Note:
1. The client's clock must be synchronized with the cluster's, otherwise Kerberos authentication will fail;
2. The cluster's host entries must be added to the client's hosts file; they can be obtained from any node in the cluster, as illustrated below;
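For illustration only; the hostnames follow this document's examples and the IP addresses are placeholders for your cluster's real ones:
# Entries appended to /etc/hosts on the client (placeholder IPs)
192.168.1.101  node1
192.168.1.107  node7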
3. The cluster has Kerberos authentication enabled, so before running shell commands you must authenticate with kinit, for example:
# Authenticate with kinit
[root@node5 client]# kinit -kt /home/user01.keytab user01
# Inspect the current ticket cache
[root@node5 client]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: user01@HADOOP.COM
Valid starting       Expires              Service principal
12/01/2016 20:48:50  12/02/2016 20:48:50  krbtgt/HADOOP.COM@HADOOP.COM
        renew until 12/08/2016 20:48:50
4. Spark JDBC programs must likewise perform Kerberos authentication. An example follows; for the complete project, see Security under the 【spark jdbc 示例】 directory.
package kerberos.spark;
import org.apache.hadoop.security.UserGroupInformation;
import java.io.IOException;
import java.sql.Connection;
import java.sql.DriverManager;
import java.sql.ResultSet;
import java.sql.Statement;
import java.util.Timer;
import java.util.TimerTask;
/*
 * When authentication is enabled, a user principal and keytab can be passed in to log in.
 */
public class SparkJdbc {
    public static void main(String[] args) {
        final String principal = args[0]; // the user's principal, e.g. user01
        final String keytab = args[1];    // the user's keytab, e.g. /home/user01/user01.keytab
        String sql = args[2];             // the SQL statement to run
        try {
            // 1. Kerberos authentication: log in once up front, then re-login from the
            // keytab every 12 hours so a long-running job keeps a valid ticket
            org.apache.hadoop.conf.Configuration conf = new org.apache.hadoop.conf.Configuration();
            conf.set("hadoop.security.authentication", "Kerberos");
            UserGroupInformation.setConfiguration(conf);
            UserGroupInformation.loginUserFromKeytab(principal, keytab);
            System.out.println("current user: " + UserGroupInformation.getCurrentUser());
            System.out.println("login user: " + UserGroupInformation.getLoginUser());
            final long twelveHours = 12 * 60 * 60 * 1000L;
            Timer timer = new Timer(true); // daemon timer, so it does not keep the JVM alive
            timer.schedule(new TimerTask() {
                public void run() {
                    try {
                        UserGroupInformation.loginUserFromKeytab(principal, keytab);
                        System.out.println("re-login at " + this.scheduledExecutionTime());
                    } catch (IOException e) {
                        e.printStackTrace();
                    }
                }
            }, twelveHours, twelveHours);
            // 2. Normal business logic: connect over Hive JDBC and run the SQL
            Class.forName("org.apache.hive.jdbc.HiveDriver");
            Connection con = DriverManager
                    .getConnection("jdbc:hive2://node7:10000/;principal=hive/node7@HADOOP.COM");
            System.out.println("got connection");
            Statement stmt = con.createStatement();
            ResultSet rs = stmt.executeQuery(sql); // executeQuery returns a ResultSet
            System.out.println("query results:");
            while (rs.next()) {
                System.out.println(rs.getString(1)); // use getInt() etc. for non-string columns
            }
            rs.close();
            stmt.close();
            con.close();
        } catch (Exception e) {
            e.printStackTrace();
        }
    }
}
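A sketch of compiling and running the example from the client node; the classpath entries are assumptions based on the CDH parcel layout from step 1, and the output directory is a placeholder:
# Compile against the CDH client jars (Hive JDBC driver plus Hadoop classes)
javac -cp "/opt/cloudera/parcels/CDH/lib/hive/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop/*" \
    -d classes kerberos/spark/SparkJdbc.java
# Run with the same jars plus the Hadoop configuration directory on the classpath
java -cp "classes:/etc/hadoop/conf:/opt/cloudera/parcels/CDH/lib/hive/lib/*:/opt/cloudera/parcels/CDH/lib/hadoop/*:/opt/cloudera/parcels/CDH/lib/hadoop/lib/*" \
    kerberos.spark.SparkJdbc user01 /home/user01/user01.keytab "show tables"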