Details
- Bug
- Resolution: Fixed
- Critical
- Columnar 1.0.0
- 1.0.0-2237
- Untriaged
- 0
- Unknown
- Analytics Sprint 47
Description
Workload:

| Type | Number of collections | Items per collection (millions) | Total items (millions) |
|---|---|---|---|
| Remote | 80 | 75 | 6000 |
| Standalone | 50 | 8 | 4000* |
| Kafka | 30 | 33.5 | ~1000 |
*Some standalone collections have 8 million items and some have multiples of 8 million; the total document count is 4000 million (4 billion) items.
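A quick sanity check of the totals in the table (a throwaway sketch; the values are copied straight from the table above):

```python
# Workload totals: collections x items-per-collection, in millions.
workload = {
    # type: (number of collections, items per collection in millions)
    "Remote": (80, 75),
    "Standalone": (50, 8),   # average; per the footnote, collections hold 8M or multiples of 8M
    "Kafka": (30, 33.5),
}

for kind, (collections, items_m) in workload.items():
    print(f"{kind}: {collections} x {items_m}M = {collections * items_m:g}M items")
# Remote: 80 x 75M = 6000M; Kafka: 30 x 33.5M = 1005M (~1000M as listed).
# Standalone does not multiply out directly; the footnote puts the total at 4000M (4 billion).
```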
Number of links = 6 (2 remote + 2 external + 2 Kafka). One remote link and one Kafka link are active.
During scaling from 8 to 16 nodes, S3 rate limiting occurred (as seen on node 006):
2024-07-25T09:22:24.239+00:00 FATA CBAS.util.ExitUtil [Executor-3876:9ae47fcc98441d1b9c4d6bd3c9df998e] JVM halting with status 88 (halting thread Thread[Executor-3876:9ae47fcc98441d1b9c4d6bd3c9df998e,5,main], interrupted false)
2024-07-25T09:22:24.742+00:00 FATA CBAS.util.ExitUtil [pool-2-thread-1] Thread dump at halt:
"main" [tid=1 state=WAITING lock=java.util.concurrent.Semaphore$NonfairSync@58c8b3ac]
at java.base@17.0.11/jdk.internal.misc.Unsafe.park(Native Method)
at java.base@17.0.11/java.util.concurrent.locks.LockSupport.park(LockSupport.java:211)
at java.base@17.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquire(AbstractQueuedSynchronizer.java:715)
at java.base@17.0.11/java.util.concurrent.locks.AbstractQueuedSynchronizer.acquireSharedInterruptibly(AbstractQueuedSynchronizer.java:1047)
at java.base@17.0.11/java.util.concurrent.Semaphore.acquire(Semaphore.java:318)
at app//com.couchbase.analytics.control.AnalyticsDriver.main(AnalyticsDriver.java:109)
at app//com.couchbase.columnar.ColumnarDriver.main(ColumnarDriver.java:10)
This might have caused the failed rebalance below:
{"stageInfo":{"analytics":{"totalProgress":1.693999999999941e-11,"perNodeProgress":{"ns_1@svc-da-node-016.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-015.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-014.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-013.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-012.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-011.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-010.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-009.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-008.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-007.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-006.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-004.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-003.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13,"ns_1@svc-da-node-001.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1.693999999999942e-13},"startTime":"2024-07-25T08:53:44.924Z","completedTime":false,"timeTaken":1733069},"data":{"totalProgress":100,"perNodeProgress":{"ns_1@svc-da-node-016.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-015.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-014.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-013.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-012.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-011.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-010.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-009.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-008.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-007.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-006.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-004.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-003.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1,"ns_1@svc-da-node-001.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com":1},"startTime":"2024-07-25T08:53:44.704Z","completedTime":"2024-07-25T08:53:44.923Z","timeTaken":219}},"rebalanceId":"ccce05b490007d029ed452c7eca4781d","nodesInfo":{"active_nodes":["ns_1@svc-da-node-001.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-003.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-004.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@
svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-006.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-007.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-008.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-009.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-010.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-011.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-012.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-013.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-014.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-015.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-016.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com"],"keep_nodes":["ns_1@svc-da-node-001.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-003.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-004.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-006.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-007.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-008.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-009.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-010.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-011.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-012.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-013.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-014.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-015.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","ns_1@svc-da-node-016.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com"],"eject_nodes":[],"delta_nodes":[],"failed_nodes":[]},"masterNode":"ns_1@svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com","startTime":"2024-07-25T08:53:44.684Z","completedTime":"2024-07-25T09:22:37.992Z","timeTaken":1733308,"completionMessage":"Rebalance exited with reason {service_rebalance_failed,cbas,\n {worker_died,\n {'EXIT',<0.12543.117>,\n {task_failed,rebalance,\n {service_error,\n <<\"Rebalance aca272219b030ccdc84a7a6158febe77 failed: see analytics_info.log for details\">>}}}}}."} |
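The interesting parts of that rebalance report (rebalanceId, completion message, and per-service / per-node progress) can be pulled out with a short script. A minimal sketch, assuming the JSON above has been saved locally as rebalance_report.json (hypothetical filename):

```python
import json

# Load the rebalance report pasted above (saved locally; the filename is a placeholder).
with open("rebalance_report.json") as f:
    report = json.load(f)

print("rebalanceId:", report["rebalanceId"])
print("completionMessage:", report.get("completionMessage", ""))

for service, stage in report["stageInfo"].items():
    print(f"\n{service}: totalProgress={stage['totalProgress']}, "
          f"timeTaken={stage['timeTaken']} ms, completedTime={stage['completedTime']}")
    # Count nodes that made essentially no progress (analytics sits at ~1.7e-13 per node here,
    # while the data service reports 1.0 everywhere).
    stuck = [node for node, p in stage["perNodeProgress"].items() if p < 0.5]
    print(f"  nodes below 50% progress: {len(stuck)} of {len(stage['perNodeProgress'])}")
```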
There are other errors as well. Some of them:
Status 103 (as seen on node 006):
2024-07-25T09:39:21.596+00:00 FATA CBAS.util.ExitUtil [SA:JID:0.6135:TAID:TID:ANID:ODID:1:0:37:0:0] JVM halting with status 103 (halting thread Thread[SA:JID:0.6135:TAID:TID:ANID:ODID:1:0:37:0:0,5,main], interrupted true)
2024-07-25T09:39:21.594+00:00 WARN CBAS.dataflow.FeedRecordDataFlowController [SAO:JID:0.6135:TAID:TID:ANID:ODID:166:0:89:0:(linkKepNyACL/default1)[89]:BO] data flow controller interrupted
org.apache.hyracks.api.exceptions.HyracksDataException: java.lang.InterruptedException
at org.apache.hyracks.api.exceptions.HyracksDataException.create(HyracksDataException.java:49) ~[hyracks-api.jar:1.0.0-2237]
at org.apache.hyracks.comm.channels.NetworkOutputChannel.nextFrame(NetworkOutputChannel.java:91) ~[hyracks-comm.jar:1.0.0-2237]
at org.apache.hyracks.control.nc.partitions.PipelinedPartition.nextFrame(PipelinedPartition.java:82) ~[hyracks-control-nc.jar:1.0.0-2237]
at com.couchbase.analytics.runtime.ProgressFrameTupleAppender.forward(ProgressFrameTupleAppender.java:188) ~[columnar-connector.jar:1.0.0-2237]
at com.couchbase.analytics.runtime.ProgressFrameTupleAppender.write(ProgressFrameTupleAppender.java:166) ~[columnar-connector.jar:1.0.0-2237]
at org.apache.hyracks.dataflow.common.comm.util.FrameUtils.appendToWriter(FrameUtils.java:159) ~[hyracks-dataflow-common.jar:1.0.0-2237]
at com.couchbase.analytics.runtime.ProgressPartitionDataWriter.appendMutation(ProgressPartitionDataWriter.java:225) ~[columnar-connector.jar:1.0.0-2237]
at com.couchbase.analytics.runtime.ProgressPartitionDataWriter.doNextFrame(ProgressPartitionDataWriter.java:185) ~[columnar-connector.jar:1.0.0-2237]
at com.couchbase.analytics.runtime.ProgressPartitionDataWriter.nextFrame(ProgressPartitionDataWriter.java:137) ~[columnar-connector.jar:1.0.0-2237]
at org.apache.hyracks.dataflow.common.comm.util.FrameUtils.flushFrame(FrameUtils.java:50) ~[hyracks-dataflow-common.jar:1.0.0-2237]
at org.apache.hyracks.dataflow.std.base.AbstractReplicateOperatorDescriptor$ReplicatorMaterializerActivityNode$1.nextFrame(AbstractReplicateOperatorDescriptor.java:143) ~[hyracks-dataflow-std.jar:1.0.0-2237]
at org.apache.hyracks.dataflow.common.comm.io.AbstractFrameAppender.write(AbstractFrameAppender.java:94) ~[hyracks-dataflow-common.jar:1.0.0-2237]
at org.apache.asterix.external.util.DataflowUtils.addTupleToFrame(DataflowUtils.java:37) ~[asterix-external-data.jar:1.0.0-2237]
at org.apache.asterix.external.dataflow.TupleForwarder.addTuple(TupleForwarder.java:43) ~[asterix-external-data.jar:1.0.0-2237]
at org.apache.asterix.external.dataflow.FeedRecordDataFlowController.parseAndForward(FeedRecordDataFlowController.java:201) ~[asterix-external-data.jar:1.0.0-2237]
at org.apache.asterix.external.dataflow.FeedRecordDataFlowController.start(FeedRecordDataFlowController.java:96) ~[asterix-external-data.jar:1.0.0-2237]
at org.apache.asterix.external.dataset.adapter.FeedAdapter.start(FeedAdapter.java:41) ~[asterix-external-data.jar:1.0.0-2237]
at org.apache.asterix.common.external.IDataSourceAdapter.start(IDataSourceAdapter.java:75) ~[asterix-common.jar:1.0.0-2237]
at com.couchbase.analytics.runtime.BucketOperatorNodePushable.start(BucketOperatorNodePushable.java:50) ~[columnar-connector.jar:1.0.0-2237]
at org.apache.asterix.active.ActiveSourceOperatorNodePushable.initialize(ActiveSourceOperatorNodePushable.java:101) ~[asterix-active.jar:1.0.0-2237]
at org.apache.hyracks.api.rewriter.runtime.SuperActivityOperatorNodePushable.lambda$runInParallel$0(SuperActivityOperatorNodePushable.java:233) ~[hyracks-api.jar:1.0.0-2237]
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136) [?:?]
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635) [?:?]
at java.base/java.lang.Thread.run(Thread.java:840) [?:?]
Caused by: java.lang.InterruptedException
at java.base/java.lang.Object.wait(Native Method) ~[?:?]
at java.base/java.lang.Object.wait(Object.java:338) ~[?:?]
at org.apache.hyracks.comm.channels.NetworkOutputChannel.nextFrame(NetworkOutputChannel.java:85) ~[hyracks-comm.jar:1.0.0-2237]
... 23 more
Failed rebalance as seen in ns_server.info.log:
[user:warn,2024-07-25T09:30:29.926Z,ns_1@svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:<0.17963.123>:analytics:unknown:-1]Analytics Service unable to successfully rebalance 2337a302ac748eb768d3393722c12422 due to 'java.lang.IllegalStateException: timed out waiting for all nodes to join & cluster active (missing nodes: [svc-da-node-015.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (b9eff0b0750fdc23c5f4ad7e6a8b6aef), svc-da-node-009.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (19f33830d7745d8f75324f348aa99f5d), svc-da-node-012.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (fc5397527639ba5199b0e8e3073ca922), svc-da-node-010.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (b4d9deda3164edb66c5db01bd0150a75), svc-da-node-011.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (b1399647049518345864f818b7d54bec), svc-da-node-013.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (e5dff117b264248c22f9c1dee40dca68), svc-da-node-014.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091 (103b5e4ebe2157312e82ced7dc8a9781)], state: UNUSABLE)'; see analytics_info.log for details |
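The missing-nodes / UNUSABLE state can be cross-checked against what ns_server itself reports for node membership and health. A small sketch against the standard /pools/default REST endpoint (host, port, and credentials below are placeholders):

```python
import requests

# Placeholders: any cluster node and admin credentials with at least read access.
BASE = "http://svc-da-node-002.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8091"
AUTH = ("Administrator", "password")

resp = requests.get(f"{BASE}/pools/default", auth=AUTH, timeout=30)
resp.raise_for_status()

for node in resp.json()["nodes"]:
    print(node["hostname"], node["clusterMembership"], node["status"],
          ",".join(node.get("services", [])))
# Fully joined, healthy nodes show clusterMembership=active and status=healthy; anything else
# should line up with the "missing nodes ... state: UNUSABLE" list in the message above.
```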
IllegalStateException on node 010:
ERROR StatusConsoleListener Attempted to append to non-started appender AsyncDcpDebugLog
ERROR StatusConsoleListener An exception occurred processing Appender AsyncDcpDebugLog
java.lang.IllegalStateException: AsyncAppender AsyncDcpDebugLog is not active
at org.apache.logging.log4j.core.appender.AsyncAppender.append(AsyncAppender.java:162)
at org.apache.logging.log4j.core.config.AppenderControl.tryCallAppender(AppenderControl.java:160)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender0(AppenderControl.java:133)
at org.apache.logging.log4j.core.config.AppenderControl.callAppenderPreventRecursion(AppenderControl.java:124)
at org.apache.logging.log4j.core.config.AppenderControl.callAppender(AppenderControl.java:88)
at org.apache.logging.log4j.core.config.LoggerConfig.callAppenders(LoggerConfig.java:705)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:663)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:639)
at org.apache.logging.log4j.core.config.LoggerConfig.logParent(LoggerConfig.java:696)
at org.apache.logging.log4j.core.config.LoggerConfig.processLogEvent(LoggerConfig.java:665)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:639)
at org.apache.logging.log4j.core.config.LoggerConfig.log(LoggerConfig.java:575)
at org.apache.logging.log4j.core.config.DefaultReliabilityStrategy.log(DefaultReliabilityStrategy.java:73)
at org.apache.logging.log4j.core.Logger.log(Logger.java:169)
at org.apache.logging.log4j.spi.AbstractLogger.tryLogMessage(AbstractLogger.java:2933)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageTrackRecursion(AbstractLogger.java:2886)
at org.apache.logging.log4j.spi.AbstractLogger.logMessageSafely(AbstractLogger.java:2868)
at org.apache.logging.log4j.spi.AbstractLogger.logMessage(AbstractLogger.java:2675)
at org.apache.logging.log4j.spi.AbstractLogger.logIfEnabled(AbstractLogger.java:2442)
at org.apache.logging.log4j.spi.AbstractLogger.log(AbstractLogger.java:2230)
at com.couchbase.analytics.metadata.MaintainDcpCallbackFactory$MaintainDcpCallback.afterOperation(MaintainDcpCallbackFactory.java:126)
at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.doIo(LSMHarness.java:552)
at org.apache.hyracks.storage.am.lsm.common.impls.LSMHarness.flush(LSMHarness.java:531)
at org.apache.hyracks.storage.am.lsm.common.impls.LSMTreeIndexAccessor.flush(LSMTreeIndexAccessor.java:123)
at org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:38)
at org.apache.hyracks.storage.am.lsm.common.impls.FlushOperation.call(FlushOperation.java:29)
at java.base/java.util.concurrent.FutureTask.run(FutureTask.java:264)
at java.base/java.util.concurrent.ThreadPoolExecutor.runWorker(ThreadPoolExecutor.java:1136)
at java.base/java.util.concurrent.ThreadPoolExecutor$Worker.run(ThreadPoolExecutor.java:635)
at java.base/java.lang.Thread.run(Thread.java:840)
2024-07-25T09:22:26.156+00:00 INFO CBAS.cbas updating svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8095[svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9111, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9110, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:18095, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9118, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9112, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9117, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9113, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9115, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9116, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9120] httpService creds on driver |
2024-07-25T09:22:28.672+00:00 WARN CBAS.cbas got error updating svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:8095[svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9111, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9110, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:18095, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9118, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9112, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9117, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9113, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9115, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9116, svc-da-node-005.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9120] httpService creds on driver: Post "https://svc-da-node-010.bvixnrehpcs2dqv6.sandbox.nonprod-project-avengers.com:9110/analytics/internal/cbas/refreshAuth": read tcp 127.0.0.1:46446->127.0.0.1:9110: read: connection reset by peer; will retry |
These could all be related to the rate limiting, but it would be good to take a look and make sure there is nothing alarming. Additionally, the rate-limiting fix that went in with https://issues.couchbase.com/browse/MB-62795 may not have worked as expected: we ran into the rate-limiting error at around the same point as in MB-62795 (while the cluster scales from 8 to 16 nodes).
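If the MB-62795 change does not fully cover this path, the usual mitigation for S3 throttling (HTTP 503 / SlowDown) is retrying with exponential backoff and jitter. A generic sketch of that pattern, not the actual MB-62795 fix and with all names hypothetical:

```python
import random
import time

def call_with_backoff(op, attempts=8, base_delay=0.5, max_delay=30.0,
                      is_throttle=lambda exc: True):
    """Retry op() with exponential backoff and full jitter.

    op and is_throttle are hypothetical placeholders; in practice is_throttle should
    match only throttling errors (e.g. S3 "SlowDown" / HTTP 503), not every exception.
    """
    for attempt in range(attempts):
        try:
            return op()
        except Exception as exc:
            if attempt == attempts - 1 or not is_throttle(exc):
                raise
            # Sleep for a random delay up to base_delay * 2^attempt, capped at max_delay.
            delay = random.uniform(0, min(max_delay, base_delay * (2 ** attempt)))
            time.sleep(delay)
```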
cbcollect ->