I. References:
1. https://www.rittmanmead.com/blog/2014/03/using-sqoop-for-loading-oracle-data-into-hadoop-on-the-bigdatalite-vm/
2. http://www.cnblogs.com/bjtu-leefon/archive/2013/06/28/3160549.html
II. Scripts used
----sqoop import zdsd
Usage notes:
1. On the execution node, add the target database's JDBC driver jar to $SQOOP_HOME/lib.
2. -m sets the number of map tasks launched in parallel; -m 1 launches a single map. When -m > 1 is specified, you must add --split-by to name a split column. The split column must satisfy at least one of the following: 1) it is the table's primary key, or 2) it is a numeric or date column, because Sqoop runs min(split_column) and max(split_column) on it to decide how to split the data. Otherwise multiple maps cannot be used.
3. Do not set the map count too high, or it will put extra load on the source database.
4. To run a complex SQL statement, use the --query option (a sketch follows the zdsd command below).
sqoop import --connect jdbc:oracle:thin:@10.**.**.**:1521/jwy --username ** --password ** --table ZHZY.ZDSF --target-dir '/hawq_external/zdsd_sqoop_test1' -m 1
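As a minimal sketch of note 4 above, a free-form query import against the same Oracle source; the query and the columns a.ID, a.NAME are illustrative assumptions, not actual columns of ZHZY.ZDSF. With --query, Sqoop requires the literal $CONDITIONS token in the WHERE clause, a --target-dir, and a --split-by column when -m > 1:
sqoop import --connect jdbc:oracle:thin:@10.**.**.**:1521/jwy --username ** --password ** --query 'select a.ID, a.NAME from ZHZY.ZDSF a where $CONDITIONS' --split-by a.ID --target-dir '/hawq_external/zdsf_query_test' -m 4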
----kkxx: with -m > 1 the table must have a primary key; if there is no primary key, you must specify a split column with --split-by.
---Error seen: Generating splits for a textual index column allowed only in case of "-Dorg.apache.sqoop.splitter.allow_text_splitter=true" property passed as a parameter. Sqoop computes the min and max of the split column, so by default the split column must not be a text type.
sqoop import --connect jdbc:oracle:thin:@10.**.**.**:1521/jwy --username ** --password ** --table ZHZY.CLD_TFC_PASS171201171215 --target-dir '/hawq_external/hg_sqoop_kkxx' -m 20 --split-by ZHKRKSJ
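If the chosen --split-by column turns out to be a text column, the error quoted above indicates the import can still be forced by passing the allow_text_splitter property; a hedged sketch, with the generic Hadoop -D option placed right after the tool name:
sqoop import -Dorg.apache.sqoop.splitter.allow_text_splitter=true --connect jdbc:oracle:thin:@10.**.**.**:1521/jwy --username ** --password ** --table ZHZY.CLD_TFC_PASS171201171215 --target-dir '/hawq_external/hg_sqoop_kkxx' -m 20 --split-by ZHKRKSJ
Text-based splits can be very uneven, so a numeric or date split column is still preferable.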
----ZHZY.B_XDRY
sqoop import --connect jdbc:oracle:thin:@10.**.**.**:1521/jwy --username ** --password ** --table table_name --target-dir '/hawq_external/hg_xdry_sqoop_test1' -m 1
---ZHZY.V_QS_JJ_KKXX (view)
sqoop import --connect jdbc:oracle:thin:@10.**.**.**:1521/jwy --username ** --password ** --table ZHZY.V_QS_JJ_KKXX --target-dir '/hawq_external/hg_v_kkxx_sqoop_test' -m 10
III. Errors encountered during use
1. Running the built-in Sqoop Import job entry in Kettle produced the following error:
2017/12/15 15:16:23 - Spoon - Starting job...
2017/12/15 15:16:23 - sqoop_kk_zhugandao - Starting job execution
2017/12/15 15:16:23 - sqoop_kk_zhugandao - Starting job entry [Sqoop Import]
2017/12/15 15:16:23 - Sqoop Import - 2017/12/15 15:16:23 - fs.default.name is deprecated. Instead, use fs.defaultFS
2017/12/15 15:16:23 - Sqoop Import - 2017/12/15 15:16:23 - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2017/12/15 15:16:23 - Sqoop Import - 2017/12/15 15:16:23 - Running Sqoop version: 1.4.6.2.5.3.0-37
2017/12/15 15:16:23 - Sqoop Import - 2017/12/15 15:16:23 - $SQOOP_CONF_DIR has not been set in the environment. Cannot check for additional configuration.
2017/12/15 15:16:23 - Sqoop Import - 2017/12/15 15:16:23 - Data Connector for Oracle and Hadoop is disabled.
2017/12/15 15:16:23 - Sqoop Import - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : Error running Sqoop
2017/12/15 15:16:23 - Sqoop Import - ERROR (version 6.1.0.1-196, build 1 from 2016-04-07 12.08.49 by buildguy) : java.lang.NoClassDefFoundError: org/apache/avro/LogicalType
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.manager.DefaultManagerFactory.accept(DefaultManagerFactory.java:66)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.ConnFactory.getManager(ConnFactory.java:184)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.tool.BaseSqoopTool.init(BaseSqoopTool.java:282)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.tool.ImportTool.init(ImportTool.java:89)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.tool.ImportTool.run(ImportTool.java:610)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.Sqoop.run(Sqoop.java:147)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.hadoop.util.ToolRunner.run(ToolRunner.java:70)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.Sqoop.runSqoop(Sqoop.java:183)
2017/12/15 15:16:23 - Sqoop Import - at org.apache.sqoop.Sqoop.runTool(Sqoop.java:225)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.hadoop.shim.common.ClassPathModifyingSqoopShim$1.call(ClassPathModifyingSqoopShim.java:81)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.hadoop.shim.common.ClassPathModifyingSqoopShim$1.call(ClassPathModifyingSqoopShim.java:1)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.hadoop.shim.common.ClassPathModifyingSqoopShim.runWithModifiedClassPathProperty(ClassPathModifyingSqoopShim.java:62)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.hadoop.shim.common.ClassPathModifyingSqoopShim.runTool(ClassPathModifyingSqoopShim.java:75)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.hadoop.shim.common.delegating.DelegatingSqoopShim.runTool(DelegatingSqoopShim.java:41)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.big.data.impl.shim.sqoop.SqoopServiceImpl.runTool(SqoopServiceImpl.java:62)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.big.data.kettle.plugins.sqoop.AbstractSqoopJobEntry.executeSqoop(AbstractSqoopJobEntry.java:302)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.big.data.kettle.plugins.sqoop.AbstractSqoopJobEntry$1.run(AbstractSqoopJobEntry.java:273)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.Thread.run(Thread.java:745)
2017/12/15 15:16:23 - Sqoop Import - Caused by: java.lang.ClassNotFoundException: org.apache.avro.LogicalType
2017/12/15 15:16:23 - Sqoop Import - at java.net.URLClassLoader.findClass(URLClassLoader.java:381)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.ClassLoader.loadClass(ClassLoader.java:424)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.di.core.plugins.KettleURLClassLoader.loadClassFromParent(KettleURLClassLoader.java:89)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.di.core.plugins.KettleURLClassLoader.loadClass(KettleURLClassLoader.java:108)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.Class.forName0(Native Method)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.Class.forName(Class.java:348)
2017/12/15 15:16:23 - Sqoop Import - at org.pentaho.hadoop.shim.HadoopConfigurationClassLoader.loadClass(HadoopConfigurationClassLoader.java:99)
2017/12/15 15:16:23 - Sqoop Import - at java.lang.ClassLoader.loadClass(ClassLoader.java:357)
2017/12/15 15:16:23 - Sqoop Import - ... 18 more
2017/12/15 15:16:23 - sqoop_kk_zhugandao - Finished job entry [Sqoop Import] (result=[false])
2017/12/15 15:16:23 - sqoop_kk_zhugandao - Job execution finished
2017/12/15 15:16:23 - Spoon - The job has finished.
2. The source table has no primary key, and the following error appears:
ERROR tool.ImportTool: Error during import: No primary key could be found for table xxx. Please specify one with --split-by or perform a sequential import with '-m 1'
The message is clear: no primary key was found on table xxx, so either use --split-by to specify a column as the split field, or perform a sequential import by adding '-m 1' to the command line. To understand why this error appears, we need to look at Sqoop's parallel import mechanism:
By default, Sqoop launches 4 map tasks that import data in parallel.
If the table being imported has primary key id and the degree of parallelism is 4, Sqoop first runs a boundary query like this:
select min(id), max(id) from table [where ... if a where clause was specified];
This query obtains the minimum and maximum values of the split column (id); suppose they are 1 and 1000.
Sqoop then splits the import according to the requested parallelism. In this example, the parallel import is split into the following 4 SQL statements, executed at the same time:
select * from table where id >= 1 and id < 250;
select * from table where id >= 250 and id < 500;
select * from table where id >= 500 and id < 750;
select * from table where id >= 750 and id <= 1000;
Note that the split column needs to be numeric.
If the table to be imported has no primary key, we should pick a suitable split column by hand.
First look at which columns the table has, e.g. for table student: desc student;
The table has two columns, id and name, so we can choose id as the split column; when importing the table into Hive, add --split-by id to the command and the error goes away (a sketch follows).
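A minimal sketch of that fix, assuming a hypothetical MySQL database testdb on localhost and that student.id is an integer column (the connection details are illustrative, not taken from the environments above):
sqoop import --connect jdbc:mysql://localhost/testdb --username hive --password hive --table student --hive-import --hive-table student --split-by id -m 4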
Reference: http://www.cnblogs.com/gpcuster/archive/2011/03/01/1968027.html
3. Sqoop: Hive exited with status 1
When importing data from MySQL into Hive, running:
sqoop import --connect jdbc:mysql://localhost/hive --username hive --password hive --table dept_InnoDB --hive-table dept_InnoDB --hive-import --split-by deptno
the following error appears:
13/06/27 18:35:05 INFO hive.HiveImport: Exception in thread "main" java.lang.NoSuchMethodError: org.apache.thrift.EncodingUtils.setBit(BIZ)B
13/06/27 18:35:10 ERROR tool.ImportTool: Encountered IOException running import job: java.io.IOException: Hive exited with status 1
A quick search shows this is caused by incompatible versions of Hive and HBase installed on the machine; specifically, Hive and HBase use different versions of Thrift. When both the Hive and HBase jars are added to the CLASSPATH, only one version of Thrift is picked up when Sqoop runs, which often makes Hive fail.
Run:
locate '*thrift*.jar'
The output (omitted here) shows that Hive and HBase do indeed reference different versions of Thrift.
The fix is simple: set HBASE_HOME to empty so that Sqoop cannot load HBase's version of Thrift (a sketch follows).
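A minimal sketch of that workaround, assuming Sqoop is launched from the same shell (how the variable is managed may differ per environment):
# clear HBASE_HOME so Sqoop does not put HBase's Thrift jar on the classpath
unset HBASE_HOME
# or override it only for this single invocation
HBASE_HOME= sqoop import --connect jdbc:mysql://localhost/hive --username hive --password hive --table dept_InnoDB --hive-table dept_InnoDB --hive-import --split-by deptno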
This document is still to be completed.