Details
- Type: Bug
- Resolution: Fixed
- Priority: Critical
- 6.5.0
- Untriaged
- Unknown
- Sprint: CX Sprint 194
Description
When the compression look-aside file writer (LAFWriter) encounters a page containing a large document that spans multiple pages, it does not allocate the required entry pages. Subsequent flush/merge operations then fail with the following exception, after which the JVM halts:
java.lang.IllegalStateException: Unprepared compressed-write for page ID: 2
	at org.apache.hyracks.storage.common.compression.file.LAFWriter.getPageBuffer(LAFWriter.java:188) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.compression.file.LAFWriter.writePageInfo(LAFWriter.java:164) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.compression.file.CompressedFileManager.writeExtraPageInfo(CompressedFileManager.java:218) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.file.CompressedBufferedFileHandle.writeExtraCompressedPages(CompressedBufferedFileHandle.java:148) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.file.CompressedBufferedFileHandle.write(CompressedBufferedFileHandle.java:124) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.buffercache.AbstractBufferedFileIOManager.write(AbstractBufferedFileIOManager.java:85) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.buffercache.BufferCache.write(BufferCache.java:570) ~[hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.common.buffercache.FIFOLocalWriter.write(FIFOLocalWriter.java:43) [hyracks-storage-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.common.impls.AbstractTreeIndex$AbstractTreeIndexBulkLoader.write(AbstractTreeIndex.java:330) [hyracks-storage-am-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.btree.impls.BTree$BTreeBulkLoader.add(BTree.java:1053) [hyracks-storage-am-btree.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.LSMIndexBulkLoader.add(LSMIndexBulkLoader.java:55) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.ChainedLSMDiskComponentBulkLoader.add(ChainedLSMDiskComponentBulkLoader.java:68) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.btree.impls.LSMBTree.doMerge(LSMBTree.java:352) [hyracks-storage-am-lsm-btree.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.AbstractLSMIndex.merge(AbstractLSMIndex.java:867) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.doIo(LSMHarness.java:534) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.merge(LSMHarness.java:573) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.merge(LSMTreeIndexAccessor.java:127) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.MergeOperation.call(MergeOperation.java:52) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at org.apache.hyracks.storage.am.lsm.common.impls.MergeOperation.call(MergeOperation.java:33) [hyracks-storage-am-lsm-common.jar:6.5.0-4960]
	at java.util.concurrent.FutureTask.run(Unknown Source) [?:?]
	at java.util.concurrent.ThreadPoolExecutor.runWorker(Unknown Source) [?:?]
	at java.util.concurrent.ThreadPoolExecutor$Worker.run(Unknown Source) [?:?]
	at java.lang.Thread.run(Unknown Source) [?:?]
This issue has been encountered in the field and can make the service unresponsive. The only current workaround is to create datasets without compression; if the service has already become unresponsive, that may require removing the service completely and re-adding it.
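The failure mode above can be illustrated with a minimal sketch. This is not the actual LAFWriter implementation; the class and method names below (other than the exception message, which matches the trace) are hypothetical. It models a writer that pre-allocates buffers only for the pages it expects, so a multi-page entry that spills past the prepared range triggers the same IllegalStateException seen in getPageBuffer:

```java
import java.util.HashMap;
import java.util.Map;

public class LafSketch {
    // Hypothetical stand-in for the look-aside file writer.
    static class LookAsideWriter {
        private final Map<Integer, byte[]> prepared = new HashMap<>();

        // Pre-allocate a buffer for a page the writer expects to fill.
        void prepare(int pageId) {
            prepared.put(pageId, new byte[4096]);
        }

        // Mirrors the guard in LAFWriter.getPageBuffer: fails if the
        // requested page was never prepared.
        byte[] getPageBuffer(int pageId) {
            byte[] buf = prepared.get(pageId);
            if (buf == null) {
                throw new IllegalStateException(
                        "Unprepared compressed-write for page ID: " + pageId);
            }
            return buf;
        }
    }

    public static void main(String[] args) {
        LookAsideWriter writer = new LookAsideWriter();
        // Only two entry pages are allocated up front.
        writer.prepare(0);
        writer.prepare(1);
        // A large document spans into page 2, which was never prepared:
        try {
            writer.getPageBuffer(2);
        } catch (IllegalStateException e) {
            System.out.println(e.getMessage());
            // prints: Unprepared compressed-write for page ID: 2
        }
    }
}
```

Under this reading, the fix is for the writer to allocate entry pages for the full span of a large document before any compressed write begins, rather than assuming one entry page per document.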