Details
- Type: Bug
- Resolution: Fixed
- Priority: Blocker
- Columnar 1.0.0
- Columnar Edition 1.0.0 build 2190
- Untriaged
- 0
- Unknown
- Analytics Sprint 46
Description
2024-07-07T17:38:23.407+00:00 FATA CBAS.runtime.DcpUpdateCallback [SA:JID:1.2:TAID:TID:ANID:ODID:1:0:16:0:0] Restarting process to ensure data integrity
org.apache.hyracks.api.exceptions.HyracksDataException: java.lang.IndexOutOfBoundsException
    at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49) ~[hyracks-api.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.btree.impls.DiskBTree.searchDown(DiskBTree.java:144) ~[hyracks-storage-am-btree.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.btree.impls.DiskBTree.search(DiskBTree.java:107) ~[hyracks-storage-am-btree.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.btree.impls.DiskBTree$DiskBTreeAccessor.search(DiskBTree.java:195) ~[hyracks-storage-am-btree.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTreePointSearchCursor.doHasNext(LSMBTreePointSearchCursor.java:83) ~[hyracks-storage-am-lsm-btree.jar:1.0.0-2190]
    at org.apache.hyracks.storage.common.EnforcedIndexCursor.hasNext(EnforcedIndexCursor.java:69) ~[hyracks-storage-common.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTreeSearchCursor.doHasNext(LSMBTreeSearchCursor.java:67) ~[hyracks-storage-am-lsm-btree.jar:1.0.0-2190]
    at org.apache.hyracks.storage.common.EnforcedIndexCursor.hasNext(EnforcedIndexCursor.java:69) ~[hyracks-storage-common.jar:1.0.0-2190]
    at org.apache.asterix.runtime.operators.LSMPrimaryUpsertOperatorNodePushable$1.process(LSMPrimaryUpsertOperatorNodePushable.java:219) [asterix-runtime.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.processFrame(LSMHarness.java:877) ~[hyracks-storage-am-lsm-common.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.batchOperate(LSMHarness.java:724) [hyracks-storage-am-lsm-common.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.batchOperate(LSMTreeIndexAccessor.java:215) [hyracks-storage-am-lsm-common.jar:1.0.0-2190]
    at org.apache.asterix.runtime.operators.LSMPrimaryUpsertOperatorNodePushable.nextFrame(LSMPrimaryUpsertOperatorNodePushable.java:440) [asterix-runtime.jar:1.0.0-2190]
    at org.apache.asterix.external.feed.dataflow.SyncFeedRuntimeInputHandler.nextFrame(SyncFeedRuntimeInputHandler.java:46) [asterix-external-data.jar:1.0.0-2190]
    at org.apache.asterix.external.operators.FeedMetaStoreNodePushable.nextFrame(FeedMetaStoreNodePushable.java:170) [asterix-external-data.jar:1.0.0-2190]
    at org.apache.hyracks.dataflow.common.comm.io.AbstractFrameAppender.write(AbstractFrameAppender.java:94) [hyracks-dataflow-common.jar:1.0.0-2190]
    at com.couchbase.analytics.runtime.DcpRouteOperatorDescriptor$DcpRouteOperatorNodePushable$1.flush(DcpRouteOperatorDescriptor.java:162) [columnar-connector.jar:1.0.0-2190]
    at com.couchbase.analytics.runtime.DcpRouteOperatorDescriptor$DcpRouteOperatorNodePushable.nextFrame(DcpRouteOperatorDescriptor.java:200) [columnar-connector.jar:1.0.0-2190]
    at org.apache.hyracks.control.nc.Task.pushFrames(Task.java:429) [hyracks-control-nc.jar:1.0.0-2190]
    at org.apache.hyracks.control.nc.Task.run(Task.java:362) [hyracks-control-nc.jar:1.0.0-2190]
    at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
    at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
    at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.lang.IndexOutOfBoundsException
    at java.base/java.nio.Buffer.checkIndex(Buffer.java:743) ~[?:?]
    at java.base/java.nio.HeapByteBuffer.get(HeapByteBuffer.java:169) ~[?:?]
    at org.apache.hyracks.storage.am.common.frames.TreeIndexNSMFrame.isLeaf(TreeIndexNSMFrame.java:79) ~[hyracks-storage-am-common.jar:1.0.0-2190]
    at org.apache.hyracks.storage.am.btree.impls.DiskBTree.searchDown(DiskBTree.java:124) ~[hyracks-storage-am-btree.jar:1.0.0-2190]
    ... 21 more
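The root-cause frame is an absolute `ByteBuffer.get(int)` inside `TreeIndexNSMFrame.isLeaf`, which throws `IndexOutOfBoundsException` whenever the requested offset lies at or beyond the buffer's limit. The following is a minimal sketch of that failure mode only, not the actual Hyracks code: the class name, the `isLeaf`-style helper, and the flag offset are hypothetical, chosen to mirror a frame reading a fixed header field from a page buffer that is smaller than expected.

```java
import java.nio.ByteBuffer;

public class BufferBoundsDemo {
    // Hypothetical page-header layout: a leaf flag byte at a fixed offset.
    static final int LEAF_FLAG_OFFSET = 16;

    // Mimics a frame reading a header flag from a page buffer.
    // Throws IndexOutOfBoundsException if the buffer's limit <= LEAF_FLAG_OFFSET.
    static boolean isLeaf(ByteBuffer page) {
        return page.get(LEAF_FLAG_OFFSET) != 0;
    }

    public static void main(String[] args) {
        ByteBuffer ok = ByteBuffer.allocate(32);
        ok.put(LEAF_FLAG_OFFSET, (byte) 1);
        System.out.println("leaf=" + isLeaf(ok));

        // A buffer shorter than the expected header reproduces the exception
        // seen in the "Caused by" section of the trace.
        ByteBuffer truncated = ByteBuffer.allocate(8);
        try {
            isLeaf(truncated);
        } catch (IndexOutOfBoundsException e) {
            System.out.println("IndexOutOfBoundsException, as in the trace");
        }
    }
}
```

In the crash above, the page handed to the frame evidently did not contain the expected header bytes at the offset being read, which is consistent with the read racing against a rebalance-time change in on-disk component state.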
|
The Columnar cluster holds approximately 40+ TB of data. The crash was observed while scaling down from 4 to 2 nodes with data ingestion and a query workload running concurrently.