Two out of sixteen nodes are ejecting active items because their mem_used is above the high water mark; the other nodes are well below it. The customer says keys vary in size, but the larger ones should be distributed roughly randomly across the nodes, and the number of keys on each node is about equal.
The two problem nodes show ep_value_size much larger than a healthy node's. However, looking at the sqlite data files, there is no significant difference in the size of the files on disk (as seen, for example, in */membase.log).
FYI, the rise in data size seems to have started on these two nodes after a different node, 10.254.7.150, stopped responding to REST and membase was restarted (with 'service membase-server restart').
The mbcollect_info data for these servers is in S3. The logs are named:
membase 16: a good node, for comparison
membase 07 and membase 14: the trouble nodes that are ejecting items due to high memory usage
membase 11: the node that was restarted on Saturday
Can someone please take a look at this, and help me understand why the ep_value_size might be bloating up for these two nodes?
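To make the outliers easy to spot, the ep_value_size stat can be pulled out of each node's collected stats dump and printed side by side. This is only a sketch: the directory, the file names, and the exact `ep_value_size: N` line format are assumptions about what the mbcollect_info bundles contain (the sample files below are fabricated for illustration).

```shell
#!/bin/sh
# Hypothetical demo: fabricate two per-node stats dumps in the format
# " ep_value_size: <bytes>" that an mbcollect_info bundle is assumed to hold.
mkdir -p /tmp/mb4461-demo
printf ' ep_value_size: 104857600\n' > /tmp/mb4461-demo/membase07-stats.log
printf ' ep_value_size: 4194304\n'   > /tmp/mb4461-demo/membase16-stats.log

# Print one "node value" line per file so bloated nodes stand out.
for f in /tmp/mb4461-demo/*-stats.log; do
  printf '%s %s\n' "$(basename "$f")" \
    "$(awk '/ep_value_size/ {print $2}' "$f")"
done
```

Running the same loop over the real bundles for membase 07, 14, and 16 would show whether the stat diverges only on the two trouble nodes.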
Gerrit changes for MB-4461 (dashboard query: &For+MB-4461=message:MB-4461):

Change   Subject                                                           Project    Status  Review
11293,1  MB-4461 Don't use a reference counter in a checkpoint             ep-engine  MERGED  +2, +1
11376,1  [Backport] MB-4461 Don't use a reference counter in a checkpoint  ep-engine  MERGED  +2, +1
11401,1  Merge branch 'branch-17' into branch-18                           ep-engine  MERGED  +2, +1
11488,1  Merge branch 'branch-18' into branch-20                           ep-engine  MERGED  +2, +1
11811,2  MB-4461 Collapse multiple closed checkpoints into one checkpoint  ep-engine  MERGED  +2, +1
11824,1  Merge branch 'branch-17' into branch-18                           ep-engine  MERGED  +2, +1
11883,2  Merge branch 'branch-18' into branch-20                           ep-engine  MERGED  +2, +1