DataX Example: Reading Data from Oracle and Writing It to HDFS



1) Write the job configuration file

[oracle@hadoop102 datax]$ vim job/oracle2hdfs.json

{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "oraclereader",
                    "parameter": {
                        "column": ["*"],
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:oracle:thin:@hadoop102:1521:orcl"],
                                "table": ["student"]
                            }
                        ],
                        "password": "000000",
                        "username": "jason"
                    }
                },
                "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "column": [
                            {"name": "id", "type": "int"},
                            {"name": "name", "type": "string"}
                        ],
                        "defaultFS": "hdfs://hadoop102:9000",
                        "fieldDelimiter": "\t",
                        "fileName": "oracle.txt",
                        "fileType": "text",
                        "path": "/",
                        "writeMode": "append"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "1"
            }
        }
    }
}
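A malformed job file is the most common cause of a failed DataX run, so it can help to sanity-check the JSON before submitting it. The sketch below is a minimal, assumed helper (not part of DataX itself); the required-parameter list for hdfswriter follows the plugin's documented mandatory fields.

```python
import json

# The job file from the example above, embedded as a raw string so the
# literal "\t" fieldDelimiter survives (json.loads rejects a real tab
# control character inside a string).
JOB = r"""
{
    "job": {
        "content": [
            {
                "reader": {
                    "name": "oraclereader",
                    "parameter": {
                        "column": ["*"],
                        "connection": [
                            {
                                "jdbcUrl": ["jdbc:oracle:thin:@hadoop102:1521:orcl"],
                                "table": ["student"]
                            }
                        ],
                        "password": "000000",
                        "username": "jason"
                    }
                },
                "writer": {
                    "name": "hdfswriter",
                    "parameter": {
                        "column": [
                            {"name": "id", "type": "int"},
                            {"name": "name", "type": "string"}
                        ],
                        "defaultFS": "hdfs://hadoop102:9000",
                        "fieldDelimiter": "\t",
                        "fileName": "oracle.txt",
                        "fileType": "text",
                        "path": "/",
                        "writeMode": "append"
                    }
                }
            }
        ],
        "setting": {
            "speed": {
                "channel": "1"
            }
        }
    }
}
"""

def validate_job(job):
    """Return a list of problems found in a DataX job dict (empty means OK)."""
    problems = []
    content = job.get("job", {}).get("content", [])
    if not content:
        problems.append("job.content must contain at least one reader/writer pair")
    for item in content:
        writer = item.get("writer", {})
        if writer.get("name") == "hdfswriter":
            # Parameters hdfswriter treats as mandatory.
            required = {"defaultFS", "fileType", "path", "fileName",
                        "column", "writeMode", "fieldDelimiter"}
            missing = required - set(writer.get("parameter", {}))
            if missing:
                problems.append("hdfswriter missing: %s" % sorted(missing))
    return problems

job = json.loads(JOB)
print(validate_job(job))
```

Running the check on the job file above reports no problems; deleting a key such as "path" from the writer parameters would be flagged before DataX ever sees the file.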

2) Run the job

[oracle@hadoop102 datax]$ bin/datax.py job/oracle2hdfs.json

3) Check the result in HDFS
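The hdfswriter appends a random suffix to the configured fileName to avoid collisions, so the exact file name varies from run to run; matching with a wildcard is the simplest way to read it back (commands assume the same cluster and output path as the example):

[oracle@hadoop102 datax]$ hadoop fs -ls /
[oracle@hadoop102 datax]$ hadoop fs -cat '/oracle.txt*'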

