Integrating Flink with Iceberg


Flink: 1.11.0
Iceberg: 0.11.1
Hive: 2.3.8
Hadoop: 3.2.2
Java: 1.8
Scala: 2.11
 

1. Download or build the iceberg-flink-runtime jar

Download:

wget https://repo.maven.apache.org/maven2/org/apache/iceberg/iceberg-flink-runtime/0.11.1/iceberg-flink-runtime-0.11.1.jar

Or build from source:

git clone https://github.com/apache/iceberg.git
./gradlew build -x test
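
When building from source, the runtime jar ends up under the flink-runtime module's build output; the exact path below is an assumption based on the 0.11.x source layout, so adjust it to your checkout. Copy the jar next to where you launch the SQL client, or into Flink's lib/ directory:

# assumed output path for an Iceberg 0.11.x checkout
cp flink-runtime/build/libs/iceberg-flink-runtime-*.jar ${FLINK_HOME}/lib/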

 

2. Start Hadoop and Flink

export HADOOP_CLASSPATH=`$HADOOP_HOME/bin/hadoop classpath`
${HADOOP_HOME}/sbin/start-all.sh
${FLINK_HOME}/bin/start-cluster.sh
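
A quick jps check can confirm both clusters are up; the process names below are what a standalone Hadoop + Flink deployment typically shows and may differ for other deployment modes:

jps
# typical output (standalone mode):
# NameNode, DataNode, SecondaryNameNode, ResourceManager, NodeManager
# StandaloneSessionClusterEntrypoint, TaskManagerRunner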

 

3. Flink SQL operations

1) Start the SQL client

${FLINK_HOME}/bin/sql-client.sh embedded -j iceberg-flink-runtime-0.11.1.jar -j flink-sql-connector-hive-2.3.6_2.11-1.11.0.jar shell

2) Create a catalog

Temporary (lives only for the current session):

create catalog iceberg with('type'='iceberg',
  'catalog-type'='hive',
  'uri'='thrift://rick-82lb:9083',
  'clients'='5',
  'property-version'='1',
  'warehouse'='hdfs:///user/hive/warehouse');

Permanent (declared in ${FLINK_HOME}/conf/sql-client-defaults.yaml):

catalogs:
  - name: iceberg
    type: iceberg
    warehouse: hdfs:///user/hive2/warehouse
    uri: thrift://rick-82lb:9083
    catalog-type: hive
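
If no Hive Metastore is available, Iceberg also supports a filesystem-based catalog ('catalog-type'='hadoop') that keeps all metadata directly on HDFS; a minimal sketch, with the warehouse path below being an assumption:

create catalog hadoop_catalog with('type'='iceberg',
  'catalog-type'='hadoop',
  'property-version'='1',
  'warehouse'='hdfs:///user/iceberg/warehouse');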

3) Create a database and table

create database iceberg.test;
create table iceberg.test.t20(id bigint);
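
Partitioned tables are declared with a PARTITIONED BY clause; a hypothetical example that partitions by a dt column:

create table iceberg.test.t20_part(id bigint, dt string) partitioned by (dt);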

4) Write data

insert into iceberg.test.t20 values (10);
insert into iceberg.test.t20 values (20);
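
The rows can be read back in the same session; by default the Iceberg connector performs a bounded scan of the table's current snapshot, so the query finishes after returning the two rows written above:

select * from iceberg.test.t20;
-- expected result: 10, 20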

  

The t20 directory on HDFS now looks like this (the layout is covered in more detail later):

t20
├── data
│   ├── 00000-0-9c7ff22e-a767-4b85-91ec-a2771e54c209-00001.parquet
│   └── 00000-0-ecd3f21c-1bc0-4cdc-8917-d9a1afe7ce55-00001.parquet
└── metadata
    ├── 00000-d864e750-e5e2-4afd-bddb-2fab1e627a21.metadata.json
    ├── 00001-aabfd9a8-7dcd-4aa0-99aa-f6695f39bf6b.metadata.json
    ├── 00002-b5b7725f-7e86-454b-8d16-0e142bc84266.metadata.json
    ├── 0254b8b6-4d76-473c-86c2-97acda68d587-m0.avro
    ├── f787e035-8f7c-43a3-b264-42057bad2710-m0.avro
    ├── snap-6190364701448945732-1-0254b8b6-4d76-473c-86c2-97acda68d587.avro
    └── snap-6460256963744122971-1-f787e035-8f7c-43a3-b264-42057bad2710.avro
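
Each *.metadata.json file is plain JSON and can be inspected straight from HDFS. The path below assumes the default Hive warehouse layout for database test and table t20; adjust it to your environment:

hdfs dfs -cat /user/hive/warehouse/test.db/t20/metadata/00002-b5b7725f-7e86-454b-8d16-0e142bc84266.metadata.json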

 

