Code first:
val table = tablexx.select('*).groupBy('x).select('x, 'xx.count)
tableEnvironment
  // declare the external system to connect to
  .connect(
    new Kafka()
      .version("0.10")
      .topic("test-input")
      .startFromEarliest()
      .property("zookeeper.connect", "localhost:2181")
      .property("bootstrap.servers", "localhost:9092")
  )
  // declare a format for this system
  .withFormat(
    new Json().deriveSchema()
  )
  // declare the schema of the table
  .withSchema(
    new Schema()
      .field("rowtime", Types.SQL_TIMESTAMP)
      .rowtime(new Rowtime()
        .timestampsFromField("timestamp")
        .watermarksPeriodicBounded(60000)
      )
      .field("user", Types.LONG)
      .field("message", Types.STRING)
  )
  // specify the update-mode for streaming tables
  .inUpsertMode()
  // register as source, sink, or both and under a name
  .registerTableSource("outputTable");
table.insertInto("outputTable")
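For context, these fragments assume the usual setup around them. A minimal sketch (Flink 1.6 Scala API; the registered input table name "inputTable" is a hypothetical placeholder, tablexx is the name from this post):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.TableEnvironment
import org.apache.flink.table.api.scala._

val env = StreamExecutionEnvironment.getExecutionEnvironment
val tableEnvironment = TableEnvironment.getTableEnvironment(env)

// tablexx would come from a table registered earlier, e.g.:
val tablexx = tableEnvironment.scan("inputTable")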
And straight to the error message:
The program finished with the following exception:

org.apache.flink.client.program.ProgramInvocationException: The main method caused an error.
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:546)
	at org.apache.flink.client.program.PackagedProgram.invokeInteractiveModeForExecution(PackagedProgram.java:421)
	at org.apache.flink.client.program.ClusterClient.run(ClusterClient.java:427)
	at org.apache.flink.client.cli.CliFrontend.executeProgram(CliFrontend.java:813)
	at org.apache.flink.client.cli.CliFrontend.runProgram(CliFrontend.java:287)
	at org.apache.flink.client.cli.CliFrontend.run(CliFrontend.java:213)
	at org.apache.flink.client.cli.CliFrontend.parseParameters(CliFrontend.java:1050)
	at org.apache.flink.client.cli.CliFrontend.lambda$main$11(CliFrontend.java:1126)
	at java.security.AccessController.doPrivileged(Native Method)
	at javax.security.auth.Subject.doAs(Subject.java:422)
	at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1836)
	at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:41)
	at org.apache.flink.client.cli.CliFrontend.main(CliFrontend.java:1126)
Caused by: org.apache.flink.table.api.NoMatchingTableFactoryException: Could not find a suitable table factory for 'org.apache.flink.table.factories.StreamTableSourceFactory' in the classpath.

Reason: No context matches.

The following factories have been considered:
org.apache.flink.table.sources.CsvBatchTableSourceFactory
org.apache.flink.table.sources.CsvAppendTableSourceFactory
org.apache.flink.table.sinks.CsvBatchTableSinkFactory
org.apache.flink.table.sinks.CsvAppendTableSinkFactory
	at org.apache.flink.table.factories.TableFactoryService$.filterByContext(TableFactoryService.scala:214)
	at org.apache.flink.table.factories.TableFactoryService$.findInternal(TableFactoryService.scala:130)
	at org.apache.flink.table.factories.TableFactoryService$.find(TableFactoryService.scala:81)
	at org.apache.flink.table.factories.TableFactoryUtil$.findAndCreateTableSource(TableFactoryUtil.scala:49)
	at org.apache.flink.table.descriptors.ConnectTableDescriptor.registerTableSource(ConnectTableDescriptor.scala:46)
	at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)
	at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)
	at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)
	at java.lang.reflect.Method.invoke(Method.java:498)
	at org.apache.flink.client.program.PackagedProgram.callMainMethod(PackagedProgram.java:529)
	... 12 more
The error says no suitable table factory could be found. Looking at line 214 of TableFactoryService.scala (the location named in the stack trace):
/**
  * Filters for factories with matching context.
  *
  * @return all matching factories
  */
private def filterByContext[T](
    factoryClass: Class[T],
    properties: Map[String, String],
    foundFactories: Seq[TableFactory],
    classFactories: Seq[TableFactory])
  : Seq[TableFactory] = {

  val matchingFactories = classFactories.filter { factory =>
    val requestedContext = normalizeContext(factory)

    val plainContext = mutable.Map[String, String]()
    plainContext ++= requestedContext
    // we remove the version for now until we have the first backwards compatibility case
    // with the version we can provide mappings in case the format changes
    plainContext.remove(CONNECTOR_PROPERTY_VERSION)
    plainContext.remove(FORMAT_PROPERTY_VERSION)
    plainContext.remove(METADATA_PROPERTY_VERSION)
    plainContext.remove(STATISTICS_PROPERTY_VERSION)

    // check if required context is met
    plainContext.forall(e => properties.contains(e._1) && properties(e._1) == e._2)
  }

  if (matchingFactories.isEmpty) {
    throw new NoMatchingTableFactoryException(
      "No context matches.",
      factoryClass,
      foundFactories,
      properties)
  }

  matchingFactories
}
The gist: for each factory, it checks whether every required property in requestedContext is present in properties with the same value.
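Restated, a factory survives the filter only if this predicate holds (a simplified paraphrase of the check above, nothing new):

requestedContext.forall { case (key, value) =>
  properties.contains(key) && properties(key) == value
}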
In my case the required properties in requestedContext were (see the sketch after the list):
connector.type = kafka
update-mode = append
connector.version = 0.10
connector.property-version = 1
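For illustration, here is roughly what a factory declaring that context looks like under the TableFactory SPI. This is a sketch, not the real Kafka factory; the values simply mirror the list above:

import java.util
import org.apache.flink.table.factories.TableFactory

// sketch only: a factory whose requiredContext() mirrors the list above
class KafkaFactorySketch extends TableFactory {
  override def requiredContext(): util.Map[String, String] = {
    val context = new util.HashMap[String, String]()
    context.put("connector.type", "kafka")
    context.put("connector.version", "0.10")
    context.put("update-mode", "append") // the entry my properties failed to match
    context.put("connector.property-version", "1")
    context
  }

  // illustrative only; a real factory lists many more keys
  override def supportedProperties(): util.List[String] = {
    util.Arrays.asList("topic")
  }
}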
My properties had all of these, except that my update-mode was "upsert". Changing inUpsertMode() to .inAppendMode() and re-running made this error go away.
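Concretely, the one-line change in the connect descriptor:

// before: fails with "No context matches." because the factory
// requires update-mode = append
.inUpsertMode()

// after:
.inAppendMode()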
(While digging, I saw another report where properties was missing connector.type entirely, though that one seemed to be on a 1.7 dev build.)
Conclusion: when you hit this exception, step into filterByContext with a debugger, find which property fails to match, and fix that one.
--------------------------
You'd think that was the end of it? Not a chance.
The new error: "AppendStreamTableSink requires that Table has only insert changes". That is, an AppendStreamTableSink only accepts tables that produce inserts (no updates). Remove the groupBy() from the table and the error goes away...
val table = tablexx.select('*).groupBy('x).select('x, 'xx.count)

changed to:

val table = tablexx.select('*)
No more error, but... I need that groupBy...
So the only way out was to convert the table to a DataStream first and write it out from there.
Converting with toRetractStream(), it turned out that the Flink retraction feature I'd been wanting to use was right here.
Whenever the count for a grouped key changes, two records are emitted: the old row flagged false, and the new row flagged true.
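A minimal sketch of that path (Flink 1.6 Scala Table API; printing stands in for a real sink, and env is the StreamExecutionEnvironment from the setup sketch earlier):

import org.apache.flink.streaming.api.scala._
import org.apache.flink.table.api.scala._
import org.apache.flink.types.Row

// each element is (flag, row): true means "add this row",
// false means "retract the previously emitted row"
val retractStream: DataStream[(Boolean, Row)] = table.toRetractStream[Row]
retractStream.print()

// e.g. when the count for key a goes from 1 to 2, the stream emits:
//   (false, a,1)   old row, retracted
//   (true,  a,2)   new row, added

env.execute()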