Sqoop basic syntax explained, and errors you may encounter


1 Sqoop introduction

Apache Sqoop is a tool designed for efficiently transferring data between Apache Hadoop and structured data stores such as relational databases. You can use Sqoop to import data from an external structured data store into the Hadoop Distributed File System or related systems such as Hive and HBase. Conversely, Sqoop can extract data from Hadoop and export it to external structured data stores such as relational databases and enterprise data warehouses.
Sqoop is designed for bulk transfers of big data: it splits the data set and creates Hadoop tasks to process each slice.
For downloading and installing Sqoop, refer to the linked address.

2 Viewing command help

View command help (sqoop help):

[hadoop@zhangyu lib]$ sqoop help
usage: sqoop COMMAND [ARGS]

Available commands:
  codegen            Generate code to interact with database records
  create-hive-table  Import a table definition into Hive
  eval               Evaluate a SQL statement and display the results
  export             Export an HDFS directory to a database table
  help               List available commands
  import             Import a table from a database to HDFS
  import-all-tables  Import tables from a database to HDFS
  import-mainframe   Import datasets from a mainframe server to HDFS
  job                Work with saved jobs
  list-databases     List available databases on a server
  list-tables        List available tables in a database
  merge              Merge results of incremental imports
  metastore          Run a standalone Sqoop metastore
  version            Display version information

See 'sqoop help COMMAND' for information on a specific command.

As the output suggests, run sqoop help COMMAND to get detailed help on a specific command.

 

3 list-databases

[hadoop@zhangyu lib]$ sqoop help list-databases

 

  1. --connect jdbc:mysql://hostname:port/database specifies the MySQL host name, port number, and database name (the default port is 3306);
  2. --username root specifies the database user name;
  3. --password 123456 specifies the database password.
[hadoop@zhangyu lib]$ sqoop list-databases \
> --connect jdbc:mysql://localhost:3306 \
> --username root \
> --password 123456
Result:
information_schema
basic01
mysql
performance_schema
sqoop
test

 

4 list-tables

[hadoop@zhangyu lib]$ sqoop list-tables \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root \
> --password 123456
Result:
stu

 

5 Importing from MySQL into HDFS (import):

(By default the data is imported under the current user's HDFS home directory: /user/<username>/<table name>.)
A small side note here: try out the difference between hdfs dfs -ls and hdfs dfs -ls / for yourself; a short sketch follows below.
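A minimal sketch of that difference (the listing paths follow this post's environment and are otherwise an assumption):

# hdfs dfs -ls lists the current user's HDFS home directory, here /user/hadoop
[hadoop@zhangyu lib]$ hdfs dfs -ls
# hdfs dfs -ls / lists the HDFS root directory instead
[hadoop@zhangyu lib]$ hdfs dfs -ls /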

sqoop import --connect jdbc:mysql://localhost/database --username root --password 123456 --table example -m 1
  1. --table example : the MySQL table to import (the source table);
  2. -m 1 : start one map task. If the table is large you can start more map tasks; the default is 4.

Two errors may come up here, as follows:

  • The first error
18/01/14 16:01:19 ERROR tool.ImportTool: Error during import: No primary key could be found for table stu. Please specify one with --split-by or perform a sequential import with '-m 1'.

 

  • As the message indicates, the table we are importing from MySQL has no primary key, so Sqoop asks us to either specify --split-by or set -m to 1. You may well ask: why is that?

    1. Sqoop lets you specify the split column with --split-by and the number of mappers with -m; from these two parameters it generates m WHERE clauses and queries each segment separately.
    2. The split strategy depends on the column type. Suppose the table has 100 rows, id is an int column, we specify --split-by id, and we leave the number of mappers at the default of 4. Sqoop first fetches MIN() and MAX() of the split column, then divides that range by the number of mappers, so the rows end up in four maps: (1-25) (26-50) (51-75) (76-100).
    3. Different MIN/MAX types are split in different ways; supported types include Date, Text, Float, Integer, Boolean, NText, BigDecimal, and so on.
    4. So if the imported table has no primary key, either set -m to 1 or specify --split-by. With -m 1 only one map runs, so the import cannot load data in parallel. (Note: when -m is greater than 1, --split-by must be given a column.)
    5. Even when the split-by column is an int, if its values do not increase continuously and evenly, the data assigned to the maps will be unbalanced: some maps may be very busy while others have almost nothing to process. (A hedged --split-by example is sketched right after this list.)
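As a minimal sketch (reusing the stu table and its id column from this post as an assumption), an import without a primary key can still run several mappers once --split-by names a numeric column:

[hadoop@zhangyu lib]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table stu \
> --split-by id \
> -m 4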
  • The second error

Exception in thread "main" java.lang.NoClassDefFoundError: org/json/JSONObject
    at org.apache.sqoop.util.SqoopJsonUtil.getJsonStringforMap(SqoopJsonUtil.java:42)
    at org.apache.sqoop.SqoopOptions.writeProperties(SqoopOptions.java:742)
    at org.apache.sqoop.mapreduce.JobBase.putSqoopOptionsToConfiguration(JobBase.java:369)
    at org.apache.sqoop.mapreduce.JobBase.createJob(JobBase.java:355)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:249)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: java.lang.ClassNotFoundException: org.json.JSONObject
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    ... 15 more

 

Here we need the java-json.jar package: download it (download address) and add java-json.jar to the ../sqoop/lib directory.
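A minimal sketch of that step, assuming the jar was downloaded to the current directory and Sqoop lives under /opt/software/sqoop (the path used elsewhere in this post):

[hadoop@zhangyu ~]$ cp java-json.jar /opt/software/sqoop/lib/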

  • With all that said, here is our first import statement:
[hadoop@zhangyu lib]$ sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root --password 123456 --table stu 

 

 

Make sure you read through and understand the log output it generates.

Let's check the files on HDFS:

[hadoop@zhangyu lib]$ hdfs dfs -ls /user/hadoop/stu
Found 4 items
-rw-r--r--   1 hadoop supergroup          0 2018-01-14 17:07 /user/hadoop/stu/_SUCCESS
-rw-r--r--   1 hadoop supergroup         11 2018-01-14 17:07 /user/hadoop/stu/part-m-00000
-rw-r--r--   1 hadoop supergroup          7 2018-01-14 17:07 /user/hadoop/stu/part-m-00001
-rw-r--r--   1 hadoop supergroup          9 2018-01-14 17:07 /user/hadoop/stu/part-m-00002
[hadoop@zhangyu lib]$ hdfs dfs -cat /user/hadoop/stu/"part*"
1,zhangsan
2,lisi
3,wangwu

 

  • Adding the -m parameter
[hadoop@zhangyu lib]$ sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root --password 123456 --table stu -m 1 

 

 

You may hit another error here because the target directory already exists on HDFS; the error is as follows:

18/01/14 17:52:47 ERROR tool.ImportTool: Encountered IOException running import job: org.apache.hadoop.mapred.FileAlreadyExistsException: Output directory hdfs://192.168.137.200:9000/user/hadoop/stu already exists
    at org.apache.hadoop.mapreduce.lib.output.FileOutputFormat.checkOutputSpecs(FileOutputFormat.java:146)
    at org.apache.hadoop.mapreduce.JobSubmitter.checkSpecs(JobSubmitter.java:270)
    at org.apache.hadoop.mapreduce.JobSubmitter.submitJobInternal(JobSubmitter.java:143)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1307)
    at org.apache.hadoop.mapreduce.Job$10.run(Job.java:1304)
    at java.security.AccessController.doPrivileged(Native Method)
    at javax.security.auth.Subject.doAs(Subject.java:422)
    at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1693)
    at org.apache.hadoop.mapreduce.Job.submit(Job.java:1304)
    at org.apache.hadoop.mapreduce.Job.waitForCompletion(Job.java:1325)
    at org.apache.sqoop.mapreduce.ImportJobBase.doSubmitJob(ImportJobBase.java:196)
    at org.apache.sqoop.mapreduce.ImportJobBase.runJob(ImportJobBase.java:169)
    at org.apache.sqoop.mapreduce.ImportJobBase.runImport(ImportJobBase.java:266)
    at org.apache.sqoop.manager.SqlManager.importTable(SqlManager.java:692)
    at org.apache.sqoop.manager.MySQLManager.importTable(MySQLManager.java:118)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:497)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)

 

  • Delete the target directory before importing, and give the MapReduce job a name
    Parameters: --delete-target-dir --mapreduce-job-name
[hadoop@zhangyu lib]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --table stu \
> -m 1

 

 

  • Importing into a specified directory

Parameter: --target-dir /directory

[hadoop@zhangyu lib]$ sqoop import --connect jdbc:mysql://localhost:3306/sqoop --username root --password 123456 --table stu -m 1 --target-dir /sqoop/

 

Check the files on HDFS:

[hadoop@zhangyu lib]$ hdfs dfs -ls /sqoop
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-01-14 18:07 /sqoop/_SUCCESS
-rw-r--r--   1 hadoop supergroup         27 2018-01-14 18:07 /sqoop/part-m-00000
[hadoop@zhangyu lib]$ hdfs dfs -cat /sqoop/part-m-00000
1,zhangsan
2,lisi
3,wangwu

 

  • Specifying the field delimiter
    Parameter: --fields-terminated-by
[hadoop@zhangyu lib]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table stu \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1
Query the result on HDFS:
[hadoop@zhangyu lib]$ hdfs dfs -ls /user/hadoop/stu/
Found 2 items
-rw-r--r--   1 hadoop supergroup          0 2018-01-14 19:47 /user/hadoop/stu/_SUCCESS
-rw-r--r--   1 hadoop supergroup         27 2018-01-14 19:47 /user/hadoop/stu/part-m-00000
[hadoop@zhangyu lib]$ hdfs dfs -cat /user/hadoop/stu/part-m-00000
1       zhangsan
2       lisi
3       wangwu
(The fields are now separated by tab characters instead of commas.)

 

  • Converting NULL fields in the table to 0

Parameters: --null-string / --null-non-string
1. --null-string : for string-typed columns, replaces NULL values with the specified character;
2. --null-non-string : for non-string columns, replaces NULL values with the specified character.

Import the salary table:
[hadoop@zhangyu lib]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table sal \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1
Query the result:
[hadoop@zhangyu lib]$ hdfs dfs -cat /user/hadoop/sal/part-m-00000
zhangsan        1000
lisi    2000
wangwu  null
Add the --null-string parameter:
[hadoop@zhangyu lib]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table sal \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-string 0
Check the result:
[hadoop@zhangyu lib]$ hdfs dfs -cat /user/hadoop/sal/part-m-00000
zhangsan        1000
lisi    2000
wangwu  0

 

  • Importing only some of the table's columns
    Parameter: --columns
[hadoop@zhangyu ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table stu \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-string 0 \
> --columns "name"
Query the result:
[hadoop@zhangyu ~]$ hdfs dfs -cat /user/hadoop/stu/part-m-00000
zhangsan
lisi
wangwu

 

  • Importing data conditionally
    Parameter: --where
[hadoop@zhangyu ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table stu \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-string 0 \
> --columns "name" \
> --target-dir STU_COLUMN_WHERE \
> --where 'id<3'
Query the result:
zhangsan
lisi

 

  • Importing with a SQL statement
    Parameter: --query
    When the --query keyword is used, --table and --columns cannot be used.
    The WHERE clause of the custom SQL statement must contain the string $CONDITIONS; $CONDITIONS is a placeholder variable used to divide the work among multiple map tasks.
sqoop import \
--connect jdbc:mysql://localhost:3306/sqoop \
--username root --password 123456 \
--mapreduce-job-name FromMySQL2HDFS \
--delete-target-dir \
--fields-terminated-by '\t' \
-m 1 \
--null-string 0 \
--target-dir STU_COLUMN_QUERY \
--query "select * from stu where id>1 and \$CONDITIONS"
(Alternatively, write the query with single quotes: --query 'select * from emp where id>1 and $CONDITIONS')

 

Result:

2       lisi
3       wangwu

6 Running Sqoop from an options file

  • Create the file sqoop-import-hdfs.txt
[hadoop@zhangyu data]$ vi sqoop-import-hdfs.txt                                   
import
--connect
jdbc:mysql://localhost:3306/sqoop
--username
root
--password
123456
--table
stu
--target-dir
STU_option_file

 

  • Run it
[hadoop@zhangyu data]$ sqoop --options-file /home/hadoop/data/sqoop-import-hdfs.txt
Query the result:
[hadoop@zhangyu data]$ hdfs dfs -cat STU_option_file/"part*"
1,zhangsan
2,lisi
3,wangwu

 

7 eval

The help output describes this command as: Evaluate a SQL statement and display the results, i.e. it executes a SQL statement and shows the result.

[hadoop@zhangyu data]$ sqoop eval \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --query "select * from stu"
Warning: /opt/software/sqoop/../hbase does not exist! HBase imports will fail.
Please set $HBASE_HOME to the root of your HBase installation.
Warning: /opt/software/sqoop/../hcatalog does not exist! HCatalog jobs will fail.
Please set $HCAT_HOME to the root of your HCatalog installation.
Warning: /opt/software/sqoop/../accumulo does not exist! Accumulo imports will fail.
Please set $ACCUMULO_HOME to the root of your Accumulo installation.
Warning: /opt/software/sqoop/../zookeeper does not exist! Accumulo imports will fail.
Please set $ZOOKEEPER_HOME to the root of your Zookeeper installation.
18/01/14 21:35:25 INFO sqoop.Sqoop: Running Sqoop version: 1.4.6-cdh5.7.0
18/01/14 21:35:25 WARN tool.BaseSqoopTool: Setting your password on the command-line is insecure. Consider using -P instead.
18/01/14 21:35:26 INFO manager.MySQLManager: Preparing to use a MySQL streaming resultset.
--------------------------------------
| id          | name                 |
--------------------------------------
| 1           | zhangsan             |
| 2           | lisi                 |
| 3           | wangwu               |
--------------------------------------

 

8 Exporting data from HDFS to MySQL (i.e. importing Hive data into MySQL)

Export the sal data on HDFS; first check the data:

[hadoop@zhangyu data]$ hdfs dfs -cat sal/part-m-00000
zhangsan        1000
lisi    2000
wangwu  0

 

  • Before running the export, create the sal_demo table (the export fails if the table does not exist):
mysql> create table sal_demo like sal;
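The statement above assumes the source sal table exists in the same database; if it does not, sal_demo can be created explicitly. A minimal sketch, assuming only the two columns (name, salary) seen in the query results of this post:

mysql> create table sal_demo (
    ->   name   varchar(20),
    ->   salary int
    -> );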

 

  • The export statement:
[hadoop@zhangyu data]$ sqoop export \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root \
> --password 123456 \
> --table sal_demo \
> --input-fields-terminated-by '\t' \
> --export-dir /user/hadoop/sal/

 

  1. --table sal_demo : the name of the table to export into;
  2. --input-fields-terminated-by : specifies the field delimiter of the files on HDFS; the default is a comma (you can see from the query above that I used \t, so I specify \t here; be careful, a wrong delimiter is a common cause of export errors);
  3. --export-dir : the directory containing the data to export.
Result:
mysql> select * from sal_demo;
+----------+--------+
| name     | salary |
+----------+--------+
| zhangsan |   1000 |
| lisi     |   2000 |
| wangwu   |      0 |
+----------+--------+
3 rows in set (0.00 sec)

 

(If you run the export again, the rows are appended to the table.)
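If appending is not what you want, Sqoop export can also update existing rows instead; a hedged sketch, assuming name can act as the update key (this is not stated in the original post):

[hadoop@zhangyu data]$ sqoop export \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table sal_demo \
> --input-fields-terminated-by '\t' \
> --export-dir /user/hadoop/sal/ \
> --update-key name \
> --update-mode allowinsert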

  • Garbled Chinese characters on insert: add the encoding options to the JDBC URL
sqoop export --connect "jdbc:mysql://localhost:3306/test?useUnicode=true&characterEncoding=utf-8" --username root --password 123456 --table sal -m 1 --export-dir /user/hadoop/sal/ 

 

 

  • Specifying which columns to export
    --columns <col,col,col...>
[hadoop@zhangyu data]$ sqoop export \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root \
> --password 123456 \
> --table sal_demo3 \
> --input-fields-terminated-by '\t' \
> --export-dir /user/hadoop/sal/ \
> --columns name

 

Query the result:

mysql> select * from sal_demo3;
+----------+--------+
| name     | salary |
+----------+--------+
| zhangsan |   NULL |
| lisi     |   NULL |
| wangwu   |   NULL |
+----------+--------+
3 rows in set (0.00 sec)

 

9 Importing data from MySQL into Hive

  • Run the import statement
[hadoop@zhangyu ~]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table stu \
> --create-hive-table \
> --hive-database hive \
> --hive-import \
> --hive-overwrite \
> --hive-table stu_import \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-non-string 0

 

--create-hive-table : create the target table; this fails if the table already exists;
--hive-database : specifies the Hive database;
--hive-import : tells Sqoop to import into Hive (without it the data only goes to HDFS);
--hive-overwrite : overwrite existing data;
--hive-table stu_import : the Hive table name; if not specified, the source table name is used.

This may fail with the following error:

18/01/15 01:29:28 ERROR hive.HiveConfig: Could not load org.apache.hadoop.hive.conf.HiveConf. Make sure HIVE_CONF_DIR is set correctly.
18/01/15 01:29:28 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
    at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:50)
    at org.apache.sqoop.hive.HiveImport.getHiveArgs(HiveImport.java:392)
    at org.apache.sqoop.hive.HiveImport.executeExternalHiveScript(HiveImport.java:379)
    at org.apache.sqoop.hive.HiveImport.executeScript(HiveImport.java:337)
    at org.apache.sqoop.hive.HiveImport.importTable(HiveImport.java:241)
    at org.apache.sqoop.tool.ImportTool.importTable(ImportTool.java:514)
    at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:605)
    at org.apache.sqoop.Sqoop.run(Sqoop.java:143)
    at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
    at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:179)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:218)
    at org.apache.sqoop.Sqoop.runTool(Sqoop.java:227)
    at org.apache.sqoop.Sqoop.main(Sqoop.java:236)
Caused by: java.lang.ClassNotFoundException: org.apache.hadoop.hive.conf.HiveConf
    at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
    at sun.misc.Launcher$AppClassLoader.loadClass(Launcher.java:331)
    at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
    at java.lang.Class.forName0(Native Method)
    at java.lang.Class.forName(Class.java:264)
    at org.apache.sqoop.hive.HiveConfig.getHiveConf(HiveConfig.java:44)
    ... 12 more

 

Most advice found online says to configure personal environment variables, which did not help at all. Copying a few jars from Hive's lib directory into Sqoop's lib directory solved the problem:

[hadoop@zhangyu lib]$ cp hive-common-1.1.0-cdh5.7.0.jar /opt/software/sqoop/lib/
[hadoop@zhangyu lib]$ cp hive-shims* /opt/software/sqoop/lib/

 

  • Check the imported data in Hive
hive> show tables;
OK
stu_import
Time taken: 0.067 seconds, Fetched: 1 row(s)
hive> select * from stu_import;
OK
1       zhangsan
2       lisi
3       wangwu

 

When importing into Hive, using --create-hive-table is not recommended; it is better to create the Hive table in advance.
If you let Sqoop create the table and then inspect the column types, you will find some are not what you want, so create the table structure yourself before importing the data (a minimal sketch follows).
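A minimal sketch of pre-creating the target table, assuming the two stu columns and the tab delimiter used throughout this post:

hive> create table hive.stu_import (
    >   id   int,
    >   name string
    > )
    > row format delimited fields terminated by '\t';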

  • Importing into a specified Hive partition
--hive-partition-key <partition-key>        Sets the partition key to use when importing to hive
--hive-partition-value <partition-value>    Sets the partition value to use when importing to hive

 

  • Example:
[hadoop@zhangyu lib]$ sqoop import \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root --password 123456 \
> --table stu \
> --create-hive-table \
> --hive-database hive \
> --hive-import \
> --hive-overwrite \
> --hive-table stu_import1 \
> --mapreduce-job-name FromMySQL2HDFS \
> --delete-target-dir \
> --fields-terminated-by '\t' \
> -m 1 \
> --null-non-string 0 \
> --hive-partition-key dt \
> --hive-partition-value "2018-08-08"
  • Query it in Hive
hive> select * from stu_import1;
OK
1       zhangsan        2018-08-08
2       lisi    2018-08-08
3       wangwu  2018-08-08
Time taken: 0.121 seconds, Fetched: 3 row(s)

 

 

10 Using sqoop job

A sqoop job turns a Sqoop statement into a saved job; it is not executed when it is created. You can inspect the job, run it at any time, and delete it, which makes task scheduling much easier.

--create <job-id>    Create a new job
--delete <job-id>    Delete a saved job
--exec <job-id>      Run a saved job
--show <job-id>      Show a job's parameters
--list               List all saved jobs

 

 

  • Create a job
sqoop job --create person_job1 -- import --connect jdbc:mysql://localhost:3306/sqoop \
--username root \
--password 123456 \
--table sal_demo3 \
-m 1 \
--delete-target-dir

 

 

  • List the available jobs
[hadoop@zhangyu lib]$ sqoop job --list
Available jobs:
  person_job1

 

 

  • Run person_job1 to perform the import
[hadoop@zhangyu lib]$ sqoop job --exec person_job1
[hadoop@zhangyu lib]$ hdfs dfs -ls
Found 6 items
drwxr-xr-x   - hadoop supergroup          0 2018-01-14 20:40 EMP_COLUMN_WHERE
drwxr-xr-x   - hadoop supergroup          0 2018-01-14 20:49 STU_COLUMN_QUERY
drwxr-xr-x   - hadoop supergroup          0 2018-01-14 20:45 STU_COLUMN_WHERE
drwxr-xr-x   - hadoop supergroup          0 2018-01-14 21:10 STU_option_file
drwxr-xr-x   - hadoop supergroup          0 2018-01-14 20:24 sal
drwxr-xr-x   - hadoop supergroup          0 2018-01-15 03:08 sal_demo3
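The remaining job subcommands listed above work the same way; a minimal sketch (output omitted):

[hadoop@zhangyu lib]$ sqoop job --show person_job1
[hadoop@zhangyu lib]$ sqoop job --delete person_job1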
  • Problem: when running person_job1 you are prompted for the database password. How can we avoid typing the password?
  • Configure sqoop-site.xml
<property>
  <name>sqoop.metastore.client.record.password</name>
  <value>true</value>
  <description>If true, allow saved passwords in the metastore.
  </description>
</property>
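As an alternative not covered in the original post, the password can also be kept in a file on HDFS and passed with --password-file; a hedged sketch (the file path is an assumption):

[hadoop@zhangyu lib]$ echo -n "123456" > mysql.pwd
[hadoop@zhangyu lib]$ hdfs dfs -put mysql.pwd /user/hadoop/mysql.pwd
[hadoop@zhangyu lib]$ hdfs dfs -chmod 400 /user/hadoop/mysql.pwd
[hadoop@zhangyu lib]$ sqoop list-tables \
> --connect jdbc:mysql://localhost:3306/sqoop \
> --username root \
> --password-file /user/hadoop/mysql.pwd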

