There is data loss in Couchbase 2.0 when using the set command with a couchbase bucket. The loss appears to be more severe the farther the servers are from the client. The same Java client works fine with memcached buckets in 2.0, and with both couchbase and memcached buckets in 1.8.1. See screenshots below; note the item count in the couchbase bucket, which is missing 24% of the data.
Attached image shows the total items stored in the couchbase bucket: only 750K items stored for 1M inserts.
On bulk loads using the 1.1.1 library, the customer is seeing data loss for items that have been set.
The customer tried to set 1M items using the latest Java client (1.1.1) and found that not all of the items were persisted.
An update from the customer:
I have rewritten it a bit and reproduced the problem here; please find the updated version attached, where you can see the issue being reproduced. You will see that the number of keys reported by Couchbase is not the number of keys we inserted.
The problem seems to be in how the driver queues the set calls internally: if we don't actively force the "async" queues to flush (by calling get() on the returned future), data on the queues can be discarded. So this sounds like a spymemcached bug where it does not correctly flush the queues under high load. According to the javadoc, we should have seen the exception below, and absent that, we should be able to assume that all operations were processed:
java.lang.IllegalStateException - in the rare circumstance where queue is too full to accept any more requests
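To illustrate the suspected failure mode, here is a minimal, self-contained sketch (not the driver's actual code) using a bounded `ArrayBlockingQueue` as a stand-in for the client's internal operation queue. `offer()` does not block and simply returns false when the queue is full, so an unthrottled producer that never checks the result loses items silently, with no `IllegalStateException` ever raised:

```java
import java.util.concurrent.ArrayBlockingQueue;

// Hypothetical stand-in for the driver's internal op queue: a bounded
// queue whose offer() rejects items without throwing once it is full.
public class QueueDropDemo {
    public static int drain(int capacity, int inserts) {
        ArrayBlockingQueue<Integer> opQueue = new ArrayBlockingQueue<>(capacity);
        int accepted = 0;
        for (int i = 0; i < inserts; i++) {
            // offer() returns false when the queue is full -- the item
            // is dropped silently, no exception is thrown.
            if (opQueue.offer(i)) {
                accepted++;
            }
        }
        return accepted;
    }

    public static void main(String[] args) {
        // With no consumer draining the queue, only the first
        // 'capacity' items are ever accepted.
        int accepted = drain(1000, 1_000_000);
        System.out.println("accepted " + accepted + " of 1000000");
    }
}
```

If the driver's enqueue path behaves like `offer()` rather than throwing the documented `IllegalStateException`, that would match the silent loss we observed.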
Attached is the code with which we were able to reproduce this error on bulk loads.
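As a workaround until the queueing behavior is fixed, the bulk load can be throttled by collecting the futures returned by set() and blocking on them every N operations, so the internal queue drains before more work is enqueued. The sketch below uses a plain `ExecutorService` as a runnable stand-in for the client (the real spymemcached/Couchbase call is `client.set(key, exp, value)`, which returns an `OperationFuture<Boolean>`); the batch size of 100 is an arbitrary choice for illustration:

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.concurrent.Executors;
import java.util.concurrent.Future;

// Throttled bulk load: block on outstanding futures every BATCH ops
// instead of firing all sets and hoping the queues keep up.
public class ThrottledBulkLoad {
    static final int BATCH = 100;

    public static int load(ExecutorService client, int items) throws Exception {
        List<Future<Boolean>> pending = new ArrayList<>();
        int confirmed = 0;
        for (int i = 0; i < items; i++) {
            // Stand-in for: pending.add(client.set("key" + i, 0, value));
            pending.add(client.submit(() -> true));
            if (pending.size() == BATCH) {
                // get() forces each operation to complete (or fail loudly)
                // before we enqueue the next batch.
                for (Future<Boolean> f : pending) {
                    if (f.get()) confirmed++;
                }
                pending.clear();
            }
        }
        // Drain whatever is left over after the last full batch.
        for (Future<Boolean> f : pending) {
            if (f.get()) confirmed++;
        }
        return confirmed;
    }

    public static void main(String[] args) throws Exception {
        ExecutorService pool = Executors.newFixedThreadPool(4);
        System.out.println("confirmed " + load(pool, 1000) + " of 1000");
        pool.shutdown();
    }
}
```

With this pattern every set is individually confirmed, at the cost of some throughput; it also surfaces per-operation failures that a fire-and-forget loop would swallow.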