Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version: 3.1.2
Description
CB Server: 7.0.0 build 4342
Java Client: 3.1.2
I am trying to upsert a large number of documents (1,000).
The code works fine if the document content is simple, e.g.:
String CONTENT_NAME = "content";
String DEFAULT_CONTENT_VALUE = "default";
JsonObject initial = JsonObject.create().put(CONTENT_NAME, DEFAULT_CONTENT_VALUE);
However, if the document content is huge, I am getting the below error:
ERROR 2021-02-11 15:46:43,742 [cb-timer-1-1] reactor.Flux.FlatMap.1 - com.couchbase.client.core.error.AmbiguousTimeoutException: UpsertRequest, Reason: TIMEOUT {"cancelled":true,"completed":true,"coreId":"0x809b35d600000001","idempotent":false,"lastChannelId":"809B35D600000001/00000000ECF59760","lastDispatchedFrom":"10.100.255.208:51925","lastDispatchedTo":"172.23.111.128:11210","reason":"TIMEOUT","requestId":214,"requestType":"UpsertRequest","retried":0,"service":{"bucket":"default","collection":"wiki-last","documentId":"wiki203","opaque":"0xf4","scope":"scope1","type":"kv"},"timeoutMs":2500,"timings":{"dispatchMicros":2509028,"encodingMicros":218,"totalMicros":2507174,"serverMicros":0}}
at com.couchbase.client.core.msg.BaseRequest.cancel(BaseRequest.java:167)
at com.couchbase.client.core.Timer.lambda$register$2(Timer.java:157)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$HashedWheelTimeout.expire(HashedWheelTimer.java:672)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$HashedWheelBucket.expireTimeouts(HashedWheelTimer.java:747)
at com.couchbase.client.core.deps.io.netty.util.HashedWheelTimer$Worker.run(HashedWheelTimer.java:472)
at com.couchbase.client.core.deps.io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30)
at java.lang.Thread.run(Thread.java:748)
ERROR 2021-02-11 15:46:43,744 [ForkJoinPool-1-worker
I have increased the connectTimeout setting to 5 minutes but am still getting the same error. I am not sure why the SDK is still picking up 2,500 ms as the timeout.
TimeoutConfig.Builder tc = TimeoutConfig.connectTimeout(Duration.ofMinutes(5));
environment = ClusterEnvironment.builder()
        .timeoutConfig(tc)
        .build();
cluster = Cluster.connect(clusterName,
        ClusterOptions.clusterOptions(username, password).environment(environment));
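For what it's worth, the 2,500 ms in the log matches the SDK's default kvTimeout, which governs KV operations such as upsert; connectTimeout only covers connection bootstrap. A minimal sketch of raising the KV timeout as well, assuming the same environment and cluster setup as above:

```java
// Assumption: upsert is bounded by kvTimeout, not connectTimeout, so the
// default 2,500 ms still applies unless kvTimeout is raised explicitly.
ClusterEnvironment environment = ClusterEnvironment.builder()
        .timeoutConfig(TimeoutConfig
                .kvTimeout(Duration.ofSeconds(30))       // applies to upsert/get/remove
                .connectTimeout(Duration.ofMinutes(5)))  // applies to bootstrap only
        .build();

Cluster cluster = Cluster.connect(clusterName,
        ClusterOptions.clusterOptions(username, password).environment(environment));
```

The 30-second value is an illustrative guess, not a recommendation; raising kvTimeout hides slow operations rather than explaining them.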
Actual Code:
ReactiveCollection rcollection = collection.reactive();
List<MutationResult> results = docsToUpsert.publishOn(Schedulers.elastic())
        .flatMap(key -> rcollection.upsert(key, getObject(key, docTemplate, elasticMap),
                upsertOptions().expiry(Duration.ofSeconds(ds.get_expiry()))))
        .log()
        .buffer(1000)
        // Num retries, first backoff, max backoff
        // Block until last value, complete or timeout expiry
        .blockLast(Duration.ofMinutes(10));
I have tried increasing the buffer size and the blockLast time interval, but I am still getting the same error.
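One detail in the timings above: dispatchMicros is about 2.5 s while serverMicros is 0, which suggests the request waited in the SDK's queue rather than being slow on the server. A variant of the pipeline that bounds how many upserts are in flight at once, using flatMap's concurrency argument, is one way to test that theory (the value 16 is an assumption to tune, not a recommendation):

```java
// Sketch: same pipeline, but flatMap's concurrency argument caps in-flight
// upserts so queued requests don't sit past kvTimeout before dispatch.
List<MutationResult> results = docsToUpsert
        .flatMap(key -> rcollection.upsert(key,
                        getObject(key, docTemplate, elasticMap),
                        upsertOptions().expiry(Duration.ofSeconds(ds.get_expiry()))),
                16)  // at most 16 concurrent upserts (hypothetical value)
        .collectList()
        .block(Duration.ofMinutes(10));
```

collectList().block(...) replaces buffer(1000)/blockLast(...) here only to return one list of all results; the concurrency cap is the substantive change.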