1,java.lang.ClassNotFoundException Unknown pair
1. Please try to turn on isStoreKeepBinary in cache settings - like this; please note the last line:
if (persistence) {
    // Configuring Cassandra's persistence
    DataSource dataSource = new DataSource();
    // ...here go the rest of your settings as they appear now...
    configuration.setWriteBehindEnabled(true);
    configuration.setStoreKeepBinary(true);
}
This setting forces Ignite to avoid binary deserialization when working with underlying cache store.
2. I can reproduce it when, in loadCache(), I put something that isn't exactly the expected Item in the cache:
private void loadCache(IgniteCache<Integer, Item> cache, /* Ignite.binary() */ IgniteBinary binary) {
    // Note the absence of package name here:
    BinaryObjectBuilder builder = binary.builder("Item");
    builder.setField("name", "a");
    builder.setField("brand", "B");
    builder.setField("type", "c");
    builder.setField("manufacturer", "D");
    builder.setField("description", "e");
    builder.setField("itemId", 1);
    cache.withKeepBinary().put(1, builder.build());
}
Reference links:
http://apache-ignite-users.70518.x6.nabble.com/ClassNotFoundException-with-affinity-run-td5359.html
https://stackoverflow.com/questions/47502111/apache-ignite-ignitecheckedexception-unknown-pair#
2,java.lang.IndexOutOfBoundsException + Failed to wait for completion of partition map exchange
Exception description:
2018-06-06 14:24:02.932 ERROR 17364 --- [ange-worker-#42] .c.d.d.p.GridDhtPartitionsExchangeFuture : Failed to reinitialize local partitions (preloading will be stopped):
...
java.lang.IndexOutOfBoundsException: index 678
... org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279) [ignite-core-2.3.0.jar:2.3.0]
at org.apache.ignite.internal.util.worker.GridWorker.run(GridWorker.java:110) [ignite-core-2.3.0.jar:2.3.0]
at java.lang.Thread.run(Thread.java:745) [na:1.8.0_73]
2018-06-06 14:24:02.932 INFO 17364 --- [ange-worker-#42] .c.d.d.p.GridDhtPartitionsExchangeFuture : Finish exchange future [startVer=AffinityTopologyVersion [topVer=1, minorTopVer=1], resVer=null, err=java.lang.IndexOutOfBoundsException: index 678]
2018-06-06 14:24:02.941 ERROR 17364 --- [ange-worker-#42] .i.p.c.GridCachePartitionExchangeManager : Failed to wait for completion of partition map exchange (preloading will not start): GridDhtPartitionsExchangeFuture
...
org.apache.ignite.IgniteCheckedException: index 678
at org.apache.ignite.internal.util.IgniteUtils.cast(IgniteUtils.java:7252) ~[ignite-core-2.3.0.jar:2.3.0]
....
org.apache.ignite.internal.processors.cache.GridCachePartitionExchangeManager$ExchangeWorker.body(GridCachePartitionExchangeManager.java:2279) ~[ignite-core-2.3.0.jar:2.3.0]
... 2 common frames omitted
The cause is as follows:
If a cache was defined in REPLICATED mode with persistence enabled, then later changed to PARTITIONED mode and loaded with data, this error is thrown on subsequent restarts.
For example, in the following scenario:
default-config.xml
<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            ...
            <property name="name" value="Test"/>
            <property name="atomicityMode" value="ATOMIC"/>
            <property name="cacheMode" value="REPLICATED"/>
            ...
        </bean>
    </list>
</property>
ignite.destroyCache("Test");
IgniteCache<Long, CommRate> cache = ignite.getOrCreateCache("Test");
When the node is restarted, the configuration in default-config.xml takes effect first, which causes this problem.
The solution is to never change the cache mode while persistence is enabled, or not to predefine cache configurations in the configuration file.
I can't reproduce your case. But the issue could occur if you had a REPLICATED cache, after some time changed it to PARTITIONED, and then, for example, called getOrCreateCache keeping the old cache name.
Reference links:
http://apache-ignite-users.70518.x6.nabble.com/Weird-index-out-bound-Exception-td14905.html
3,Failed to find SQL table for type xxxx
The imported data is corrupt; destroy the cache and re-import the data.
4, Ignite messaging delivers duplicate messages, multiplying with each invocation
The cause is that the listener was registered multiple times. For a given topic, remoteListen and localListen should each be called only once; every repeated call registers one more listener, which then looks as if each message were re-sent once per invocation.
private AtomicBoolean rmtMsgInit = new AtomicBoolean(false);
private AtomicBoolean localMsgInit = new AtomicBoolean(false);

@RequestMapping("/msgTest")
public @ResponseBody
String orderedMsg(HttpServletRequest request, HttpServletResponse response) {
    /*************************** remote message ****************************/
    IgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());
    /** The same message listener must be registered only once, otherwise duplicate messages are received, increasing with the number of registrations */
    if (!rmtMsgInit.get()) {
        rmtMsg.remoteListen("MyOrderdTopic", (nodeId, msg) -> {
            System.out.println("Received ordered message [msg=" + msg + ", from=" + nodeId + "]");
            return true;
        });
        rmtMsgInit.set(true);
    }
    rmtMsg.send("MyOrderdTopic", UUID.randomUUID().toString());
    // for (int i = 0; i < 10; i++) {
    //     rmtMsg.sendOrdered("MyOrderdTopic", Integer.toString(i), 0);
    //     rmtMsg.send("MyOrderdTopic", Integer.toString(i));
    // }
    /*************************** local message ****************************/
    IgniteMessaging localMsg = ignite.message(ignite.cluster().forLocal());
    /** The same message listener must be registered only once, otherwise duplicate messages are received, increasing with the number of registrations */
    if (!localMsgInit.get()) {
        localMsg.localListen("localTopic", (nodeId, msg) -> {
            System.out.println(String.format("Received local message [msg=%s, from=%s]", msg, nodeId));
            return true;
        });
        localMsgInit.set(true);
    }
    localMsg.send("localTopic", UUID.randomUUID().toString());
    return "executed!";
}
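The accumulation is easy to see with a toy message bus (the `Topic` class below is a hypothetical stand-in, not an Ignite API): registering the same handler once per request means a single `send` is delivered once per registration.

```python
class Topic:
    """Toy stand-in for a message topic whose listen() does not deduplicate."""
    def __init__(self):
        self.listeners = []

    def listen(self, callback):
        self.listeners.append(callback)  # like remoteListen: every call adds one more listener

    def send(self, msg):
        for callback in self.listeners:
            callback(msg)

received = []
topic = Topic()
for _ in range(3):               # handler registered on three separate requests
    topic.listen(received.append)
topic.send("hello")              # a single send ...
print(len(received))             # ... is delivered three times
```

This is why the AtomicBoolean guard above registers each listener exactly once.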
5, No console output from Ignite remote operations (remoteListen and similar)
When operations are executed against ignite.cluster().forRemotes(), the code may run on other nodes, so the logging and printed output appear on those nodes, and the local terminal will not necessarily show anything.
For example:
IgniteMessaging rmtMsg = ignite.message(ignite.cluster().forRemotes());
rmtMsg.remoteListen("MyOrderdTopic", (nodeId, msg) -> {
    System.out.println("Received ordered message [msg=" + msg + ", from=" + nodeId + "]");
    return true;
});
If you want to see the output on the local program side, use the local variants:
IgniteMessaging.localListen
ignite.events().localListen
6, Ignite persistence takes up too much disk space
This is caused by the WAL (write-ahead log) mechanism.
Add the following configuration to tune the checkpoint frequency and the WAL history size:
<!-- Redefining maximum memory size for the cluster node usage. -->
<property name="dataStorageConfiguration">
    <bean class="org.apache.ignite.configuration.DataStorageConfiguration">
        ...
        <!-- Checkpointing frequency: the minimal interval at which dirty pages are written to the persistent store. -->
        <property name="checkpointFrequency" value="180000"/>
        <!-- Number of threads for checkpointing. -->
        <property name="checkpointThreads" value="4"/>
        <!-- Number of checkpoints to be kept in the WAL after a checkpoint finishes. -->
        <property name="walHistorySize" value="20"/>
        ...
    </bean>
</property>
7,java.lang.ClassCastException org.cord.xxx cannot be cast to org.cord.xxx
java.lang.ClassCastException org.cord.ignite.data.domain.Student cannot be cast to org.cord.ignite.data.domain.Student
This exception was thrown when reading a cached object from Ignite: the two classes are obviously the same, yet the returned cache object cannot be assigned:
IgniteCache<Long, Student> cache = ignite.cache(CacheKeyConstant.STUDENT);
Student student = cache.get(1L);
So I checked with instanceof:
cache.get(1L) instanceof Student
returns false.
This means the object returned from Ignite is not an instance of Student, yet in the debugger all of its attributes are identical. That leaves only one possibility: the Student class of the object queried from Ignite and the Student class receiving the result were loaded by different class loaders.
So I inspected the two class loaders:
cache.get(1L).getClass().getClassLoader()
=> AppClassLoader
Student.class.getClassLoader()
=> RestartClassLoader
Sure enough, the two class loaders differ. After some searching, RestartClassLoader turned out to be the class loader used by the spring-boot-devtools hot-restart plugin. With the problem identified, the fix was simple: remove the spring-boot-devtools dependency.
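The class-identity problem can be mimicked in plain Python: defining the "same" class twice yields two distinct class objects, just as AppClassLoader and RestartClassLoader each define their own Student (the loader names map only loosely onto this sketch).

```python
def load_student_class():
    """Each call plays the role of a separate class loader defining Student."""
    class Student:
        pass
    return Student

StudentA = load_student_class()  # the Student the application code compiled against
StudentB = load_student_class()  # the Student defined by the hot-restart loader
student = StudentB()

print(isinstance(student, StudentA))           # False: same name, different class identity
print(StudentA.__name__ == StudentB.__name__)  # True: the names match, the identities don't
```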
8, Garbled Chinese characters when querying with SqlFieldsQuery under Ignite persistence
Pure in-memory mode works fine, but with persistence enabled, SqlQuery results (which are deserialized objects) are not garbled, while SqlFieldsQuery results are. Since persistence writes in-memory data to disk, this suggested a file-encoding issue, so I printed each node's file encoding with System.getProperty("file.encoding") and found that the persistent node's encoding was gb18030. After setting file.encoding=UTF-8 and re-importing the data, the queries no longer returned garbled text.
This can be done by setting the environment variable JAVA_TOOL_OPTIONS=-Dfile.encoding=UTF-8.
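A minimal sketch of the failure mode, assuming one node writes strings as UTF-8 bytes while another interprets them under a gb18030 default encoding:

```python
text = "張三"                                         # sample Chinese field value
stored = text.encode("utf-8")                         # bytes as written by a UTF-8 node
garbled = stored.decode("gb18030", errors="replace")  # read back under file.encoding=gb18030

print(garbled == text)                  # False: mismatched encodings garble the round-trip
print(stored.decode("utf-8") == text)   # True: consistent encodings round-trip cleanly
```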
9,[Ignite2.7] java.lang.IllegalAccessError: tried to access field org.h2.util.LocalDateTimeUtils.LOCAL_DATE from class org.apache.ignite.internal.processors.query.h2.H2DatabaseType
This is caused by an H2 compatibility issue; exclude the transitive H2 dependency and use a compatible H2 version (1.4.197 works with Ignite 2.7):
<dependency>
    <groupId>org.apache.ignite</groupId>
    <artifactId>ignite-indexing</artifactId>
    <version>${ignite.version}</version>
    <exclusions>
        <exclusion>
            <groupId>com.h2database</groupId>
            <artifactId>h2</artifactId>
        </exclusion>
    </exclusions>
</dependency>
<dependency>
    <groupId>com.h2database</groupId>
    <artifactId>h2</artifactId>
    <version>1.4.197</version>
</dependency>
10, Failed to serialize object...Failed to write field...Failed to marshal object with optimized marshaller 分布式計算無法傳播到其它節點
The detailed error message is as follows:
o.a.i.i.m.d.GridDeploymentLocalStore : Class locally deployed: class org.cord.ignite.controller.ComputeTestController
2018-12-20 21:13:05.398 ERROR 16668 --- [nio-8080-exec-1] o.a.i.internal.binary.BinaryContext : Failed to serialize object [typeName=o.a.i.i.worker.WorkersRegistry]
org.apache.ignite.binary.BinaryObjectException: Failed to write field [name=registeredWorkers] at org.apache.ignite.internal.binary.BinaryFieldAccessor.write(BinaryFieldAccessor.java:164) [ignite-core-2.7.0.jar:2.7.0]
...
Caused by: org.apache.ignite.binary.BinaryObjectException: Failed to marshal object with optimized marshaller: {...}
Caused by: org.apache.ignite.IgniteCheckedException: Failed to serialize object: {...}
Caused by: java.io.IOException: Failed to serialize object [typeName=java.util.concurrent.ConcurrentHashMap]
Caused by: java.io.IOException: java.io.IOException: Failed to serialize object
...
Caused by: java.io.IOException: Failed to serialize object [typeName=java.util.ArrayDeque]
Caused by: java.io.IOException: java.lang.NullPointerException
...
If a distributed-computation class contains certain injected beans, propagation of the computation to other nodes fails, for example:
...
@Autowired
private IgniteConfiguration igniteCfg;

String broadcastTest() {
    IgniteCompute compute = ignite.compute();
    compute.broadcast(() -> System.out.println("Hello Node: " + ignite.cluster().localNode().id()));
    return "all executed.";
}
These beans cannot be propagated, so apart from the injected Ignite instance, it is best not to inject other beans into distributed-computation classes; for more complex scenarios, consider the service grid instead.
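The same failure mode can be sketched with Python's pickle: a task object holding a non-serializable dependency (here a `threading.Lock`, standing in for the injected bean) cannot be serialized as a whole.

```python
import pickle
import threading

class ComputeTask:
    """Stand-in for a compute closure that drags an injected bean along with it."""
    def __init__(self):
        self.bean = threading.Lock()   # non-serializable dependency, like the injected bean
    def run(self):
        return "executed"

try:
    pickle.dumps(ComputeTask())
    serializable = True
except TypeError:                      # "cannot pickle '_thread.lock' object"
    serializable = False
print(serializable)                    # False: one bad field poisons the whole task
```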
11,WARNING: Exception during batch send on streamed connection close; java.sql.BatchUpdateException: class org.apache.ignite.IgniteCheckedException: Data streamer has been closed
This error tends to occur with Ignite JDBC batch inserts when the stream is opened repeatedly or the stream is not in ordered mode. Solution: enable streaming when creating the JDBC connection, and when turning streaming on, use ordered mode: SET STREAMING ON ORDERED
String url = "jdbc:ignite:thin://127.0.0.1/";
String[] sqls = new String[]{};
Properties properties = new Properties();
properties.setProperty(IgniteJdbcDriver.PROP_STREAMING, "true");
properties.setProperty(IgniteJdbcDriver.PROP_STREAMING_ALLOW_OVERWRITE, "true");
try (Connection conn = DriverManager.getConnection(url, properties)) {
    Statement statement = conn.createStatement();
    for (String sql : sqls) {
        statement.addBatch(sql);
    }
    statement.executeBatch();
}
Reference links: https://issues.apache.org/jira/browse/IGNITE-10991
http://apache-ignite-users.70518.x6.nabble.com/Data-streamer-has-been-closed-td26521.html
12,java.lang.IllegalArgumentException: Ouch! Argument is invalid: timeout cannot be negative: -2
If the timeout parameters are set so large that the arithmetic overflows, this exception is thrown at startup. For example, with settings like these:
igniteCfg.setFailureDetectionTimeout(Integer.MAX_VALUE);
igniteCfg.setNetworkTimeout(Long.MAX_VALUE);
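The -2 in the message is consistent with Java's wrapping 64-bit arithmetic: when timeouts near Long.MAX_VALUE are combined internally, the sum wraps negative. Below is a sketch that simulates Java `long` addition; the wrap rule is real, but that Ignite adds exactly these two values internally is an assumption for illustration.

```python
def java_long_add(a, b):
    """Add two values with Java's wrapping signed 64-bit semantics."""
    mask = (1 << 64) - 1
    r = (a + b) & mask
    return r - (1 << 64) if r >= (1 << 63) else r

LONG_MAX = (1 << 63) - 1   # Long.MAX_VALUE

print(java_long_add(LONG_MAX, 1))         # -9223372036854775808: even +1 ms wraps negative
print(java_long_add(LONG_MAX, LONG_MAX))  # -2: the value seen in the error message
```

Keeping the timeouts at realistic values (seconds to minutes, in milliseconds) avoids the overflow entirely.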
13, How to assign DDL-created tables to a cluster group
The WITH clause has a TEMPLATE parameter. It can either simply specify REPLICATED or PARTITIONED, or it can name a CacheConfiguration instance, so a DDL table can be associated with a cache configuration defined in XML, and thereby with a cluster group. However, a CacheConfiguration added to the configuration normally creates a cache by default; appending an asterisk (*) to the cache name prevents that cache from being created, while DDL can still reference the configuration as a template. Example:
<property name="cacheConfiguration">
    <list>
        <bean class="org.apache.ignite.configuration.CacheConfiguration">
            <property name="name" value="student*"/>
            <property name="cacheMode" value="REPLICATED"/>
            <property name="nodeFilter"> <!-- configure the node filter -->
                <bean class="org.cord.ignite.initial.DataNodeFilter"/>
            </property>
        </bean>
    </list>
</property>
CREATE TABLE IF NOT EXISTS PUBLIC.STUDENT (
STUDID INTEGER,
NAME VARCHAR,
EMAIL VARCHAR,
dob Date,
PRIMARY KEY (STUDID, NAME))
WITH "template=student,atomicity=ATOMIC,cache_name=student";
14, Failed to communicate with Ignite cluster
The thin client (IgniteJdbcThinDriver) is not thread-safe; to run SQL queries concurrently through the thin client, create a separate Connection for each thread.
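One common workaround is one connection per thread. A language-agnostic sketch using Python's `threading.local` (the `connect()` stand-in is hypothetical; in Java the analogue is a `ThreadLocal<Connection>` around `DriverManager.getConnection`):

```python
import threading

_local = threading.local()

def connect():
    # Stand-in for DriverManager.getConnection("jdbc:ignite:thin://127.0.0.1/")
    return object()

def get_connection():
    if not hasattr(_local, "conn"):
        _local.conn = connect()        # each thread lazily opens its own connection
    return _local.conn

conns = {}

def worker(i):
    conns[i] = get_connection()

threads = [threading.Thread(target=worker, args=(i,)) for i in range(3)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(len({id(c) for c in conns.values()}))   # 3: three threads, three distinct connections
```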
15, Some rows missing from join queries in DBeaver
DBeaver connects through the thin client. If any of the joined caches is in PARTITIONED mode, join queries require distributed joins, enabled by adding distributedJoins=true to the connection URL, for example:
jdbc:ignite:thin://127.0.0.1:10800;distributedJoins=true
16,WARN [H2TreeIndex]
Indexed columns of a row cannot be fully inlined into index what may lead to slowdown due to additional data page reads, increase index inline size if needed
How is the inlineSize of the primary key index specified?
The relevant call chain is:
H2TreeIndex.computeInlineSize(List<InlineIndexHelper> inlineIdxs, int cfgInlineSize)
  -> int confSize = cctx.config().getSqlIndexMaxInlineSize()
  -> private int sqlIdxMaxInlineSize = DFLT_SQL_INDEX_MAX_INLINE_SIZE = -1;
  -> IGNITE_MAX_INDEX_PAYLOAD_SIZE_DEFAULT = 10
In other words, if no inline size is specified when the index is created, the default is 10.
The recommendedInlineSize is computed by:
H2Tree.inlineSizeRecomendation(SearchRow row)
InlineIndexHelper.inlineSizeOf(Value val)
InlineIndexHelper.InlineIndexHelper(String colName, int type, int colIdx, int sortType, CompareMode compareMode)
Computing the inlineSize with Python:
import os
import cx_Oracle as oracle

os.environ["NLS_LANG"] = ".UTF8"
db = oracle.connect('cord/123456@(DESCRIPTION=(ADDRESS_LIST=(ADDRESS=(PROTOCOL=TCP)(HOST=127.0.0.1)(PORT=1520)))(CONNECT_DATA=(SID=orcl)))')
cursor = db.cursor()

query_index_name = "select index_name from ALL_INDEXES where table_name='%s' and index_type='NORMAL' and uniqueness='NONUNIQUE'"
query_index_column = "select column_name from all_ind_columns where table_name='%s' and index_name='%s'"
query_index_column_type = "select data_type,data_length from all_tab_columns where table_name='%s' and column_name='%s'"

def inlineSizeOf(data_type, data_length):
    if data_type == 'VARCHAR2':
        return data_length + 3
    if data_type == 'DATE':
        return 16 + 1
    if data_type == 'NUMBER':
        return 8 + 1
    return -1

def computeInlineSize(tableName):
    table = tableName.upper()
    retmap = {}
    ### query the index names
    ret = cursor.execute(query_index_name % table).fetchall()
    if len(ret) == 0:
        print("table[%s] not find any normal index" % table)
        return
    ### get the indexed column names for each index
    for indexNames in ret:
        # print(indexNames[0])
        indexName = indexNames[0]
        result = cursor.execute(query_index_column % (table, indexName)).fetchall()
        if len(result) == 0:
            print("table[%s] index[%s] not find any column" % (table, indexName))
            continue
        inlineSize = 0
        ### look up each column's type and accumulate the inlineSize
        for columns in result:
            column = columns[0]
            type_ret = cursor.execute(query_index_column_type % (table, column)).fetchall()
            if len(type_ret) == 0:
                print("table[%s] index[%s] column[%s] not find any info" % (table, indexName, column))
                continue
            data_type = type_ret[0][0]
            data_length = type_ret[0][1]
            temp = inlineSizeOf(data_type, data_length)
            if temp == -1:
                print("table[%s] index[%s] column[%s] type[%s] unknown" % (table, indexName, column, data_type))
                continue
            inlineSize += temp
        retmap[indexName] = inlineSize
    print(retmap)

if __name__ == '__main__':
    computeInlineSize('PERSON')