Details
- Bug
- Resolution: Fixed
- Critical
- 5.5.0
- Untriaged
- Unknown
Description
We can get into a situation whereby we have evicted everything from a bucket, yet still have memory usage above the high watermark. This is caused by checkpoints being kept open by DCP consumers that cannot stream data as fast as it is being inserted into the cluster - this usually occurs when loading data through XDCR, Backup Manager or Transfer.
The situation is particularly potent when we enter a live-lock-like state: checkpoints occupy large quantities of the bucket quota, but we are not able to drop the cursors because we never hit the 95% memory-used threshold. The problem is made worse by the fact that memory usage can never climb up to that level: once the bucket has hit the high watermark it stops accepting writes, so memory usage for the bucket can never increase.
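To make the window concrete, here is a minimal sketch of the state machine described above. The threshold ratios (85% high watermark, 95% cursor-drop point) and the function itself are illustrative assumptions, not the actual ep-engine values or API:

```python
def bucket_state(mem_used, quota, high_wat_ratio=0.85, cursor_drop_ratio=0.95):
    """Classify what the bucket can do at a given memory usage.

    Hypothetical thresholds: ratios are assumptions for illustration only.
    """
    high_wat = quota * high_wat_ratio          # writes rejected above this point
    cursor_drop_at = quota * cursor_drop_ratio # checkpoint cursors only dropped above this

    if mem_used < high_wat:
        return "accepting-writes"
    if mem_used >= cursor_drop_at:
        return "dropping-cursors"
    # Stuck in between: writes are rejected, so memory cannot grow to the
    # cursor-drop threshold, yet checkpoint memory cannot be reclaimed either.
    return "live-lock"

# Checkpoint memory pinning usage at 90% of quota lands in the dead zone:
print(bucket_state(90, 100))  # -> live-lock
```

The live-lock arises precisely because the two thresholds are ordered this way: rejecting writes at the lower threshold guarantees the higher, cursor-dropping threshold is unreachable.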
A slightly fuller analysis of this issue is given in the comments of MB-27933; this issue was created to track the underlying problem, as opposed to one specific case of it.