When batch-inserting rows into PostgreSQL, the batch was too large and caused an IO exception.
The workaround is to split the data into chunks first and batch-insert each chunk separately. This doesn't feel like the optimal solution, but it holds for now. The List partitioning utility is as follows:
package cn.ucmed.otaka.healthcare.cloud.util;

import java.util.Collections;
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class PartitionArray<T> {

    /**
     * Splits tArray into chunks of at most {@code capacity} elements,
     * keyed by chunk index.
     */
    public Map<Integer, List<T>> partition(List<T> tArray, int capacity) {
        if (tArray.isEmpty() || capacity < 1) {
            // Return an empty map instead of null so callers can iterate safely.
            return Collections.emptyMap();
        }
        Map<Integer, List<T>> result = new HashMap<>();
        int size = tArray.size();
        int count = (int) Math.ceil((double) size / capacity);
        for (int i = 0; i < count; i++) {
            int end = Math.min(capacity * (i + 1), size);
            // Note: subList returns a view backed by the original list.
            result.put(i, tArray.subList(capacity * i, end));
        }
        return result;
    }
}
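As an aside: if Guava is already on the classpath, its built-in Lists.partition does the same chunking without a hand-rolled utility. A minimal sketch, assuming Guava is available as a dependency:

import com.google.common.collect.Lists;
import java.util.List;

// Chunks of at most INSERT_CAPACITY elements; each chunk is a lazy view of the original list.
List<List<MDynamicFuncReleaseHistory>> chunks =
        Lists.partition(releaseHistoryList, INSERT_CAPACITY);
chunks.forEach(mDynamicFuncReleaseHistoryMapper::batchInsert);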
The original batch-insert call site is adjusted accordingly:
try {
    PartitionArray<MDynamicFuncReleaseHistory> partitionArray = new PartitionArray<>();
    Map<Integer, List<MDynamicFuncReleaseHistory>> batchList =
            partitionArray.partition(releaseHistoryList, INSERT_CAPACITY);
    // Insert each chunk separately so no single statement grows large enough to trigger the IO exception.
    batchList.forEach((k, v) -> mDynamicFuncReleaseHistoryMapper.batchInsert(v));
} catch (Exception e) {
    // Log the full stack trace, not just the message, to preserve the failure context.
    log.error(e.getMessage(), e);
    throw new BusinessException(500, "Failed to batch-insert MDynamicFuncReleaseHistory");
}
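One caveat with chunked inserts: if a later chunk fails, earlier chunks may already be committed. Running all chunks inside one transaction restores all-or-nothing behavior. A minimal sketch, assuming a Spring-managed service (the service class, method name, and chunk size here are hypothetical, not from the original code):

import java.util.List;
import java.util.Map;
import org.springframework.beans.factory.annotation.Autowired;
import org.springframework.stereotype.Service;
import org.springframework.transaction.annotation.Transactional;

@Service
public class ReleaseHistoryService {

    private static final int INSERT_CAPACITY = 1000; // assumed chunk size

    @Autowired
    private MDynamicFuncReleaseHistoryMapper mDynamicFuncReleaseHistoryMapper;

    // All chunk inserts share one transaction: if any chunk fails, every chunk rolls back.
    @Transactional(rollbackFor = Exception.class)
    public void saveAll(List<MDynamicFuncReleaseHistory> releaseHistoryList) {
        Map<Integer, List<MDynamicFuncReleaseHistory>> batchList =
                new PartitionArray<MDynamicFuncReleaseHistory>()
                        .partition(releaseHistoryList, INSERT_CAPACITY);
        batchList.forEach((k, v) -> mDynamicFuncReleaseHistoryMapper.batchInsert(v));
    }
}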