Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Fix Version: 1.2.1
- Labels: None
Description
https://github.com/couchbase/couchbase-kafka-connector/issues/11
Apologies if this is a duplicate. Please close on GitHub if already fixed.
legatoslide commented on Jan 10
Hello,
I am using the following code to retrieve all past mutations from a Couchbase cluster:
CouchbaseKafkaConnector connector = CouchbaseKafkaConnector.create(builder.build());
connector.run(connector.buildState(Direction.TO_CURRENT), RunMode.RESUME);
I have only one node in the cluster, with a fixed number of keys generated inside and no further activity on it. When I run the code above several times, I do not always get the same number of mutations. About half of the time I get the expected number; otherwise, I get a smaller one.
I have had a look into what is happening, and I think it comes from the Core IO implementation. The DCPHandler class receives STREAM_REQUEST and STREAM_END events. For each STREAM_REQUEST, a new stream is added to the DCPConnection; for each STREAM_END, the stream is removed. When the number of streams in the DCPConnection reaches 0, it is assumed that no more streams are expected. But it seems the count can sometimes "locally" reach 0, which notifies the subscriber that the flow has ended.
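A minimal sketch of the race described above (hypothetical class and method names, not the actual core-io code): if the STREAM_END for one vbucket arrives before the STREAM_REQUEST for the next has been registered, the live count transiently hits zero and the subscriber is told the flow is complete.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical illustration of the reported behaviour, not the real DCPConnection.
public class LiveStreamCount {
    private final AtomicInteger openStreams = new AtomicInteger();

    // Called on STREAM_REQUEST: one more stream is live.
    public void onStreamRequest() {
        openStreams.incrementAndGet();
    }

    // Called on STREAM_END: returns true as soon as the count drops to zero,
    // even if further STREAM_REQUESTs are still on their way.
    public boolean onStreamEnd() {
        return openStreams.decrementAndGet() == 0;
    }
}
```

With this logic, a single open stream ending before the next one is opened is indistinguishable from all streams having ended.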
I slightly modified Core IO to count the STREAM_END events and only terminate when the expected number has been reached... and now I always get the expected number of mutations for the same test!
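The workaround above can be sketched like this (again with hypothetical names, assuming the number of streams is known up front): termination is keyed to the total number of STREAM_END events rather than to the live count reaching zero, so a transient zero no longer ends the flow early.

```java
import java.util.concurrent.atomic.AtomicInteger;

// Hypothetical sketch of the fix, not the actual core-io patch.
public class StreamEndCounter {
    private final int expectedStreams;                 // streams opened via STREAM_REQUEST
    private final AtomicInteger endedStreams = new AtomicInteger();

    public StreamEndCounter(int expectedStreams) {
        this.expectedStreams = expectedStreams;
    }

    // Called on STREAM_END: returns true only once every expected stream has ended.
    public boolean onStreamEnd() {
        return endedStreams.incrementAndGet() == expectedStreams;
    }
}
```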
I do not know Core IO well enough to be sure I did it the right way, but I think it at least confirms what I was suspecting.
Regards,
David
tk6022 commented on Jan 11
@legatoslide What version of the connector (and hence core-io) are you seeing this on?
legatoslide commented on Jan 11
Hello,
on the very latest version: core-io 1.2.3 + connector master.
I ran into the problem you mentioned in Issue #12, so to make debugging easier I am testing with local clones.
Regards,
David