Details
- Type: Bug
- Resolution: Fixed
- Priority: Blocker
- Fix Version/s: 2.1.0
- Security Level: Public
- Labels: None
- Environment: CentOS 64-bit
Description
- Viber workload running on all 4 buckets, with a greater load on RevAB than on the rest.
- After about half a day's run time:
- Replication for RevAB broke, as that bucket on the destination had run out of memory.
- Resident ratios are still at 100% for all buckets on the source; on the destination, AbRegNums and UserInfo are 100% resident, but RevAB is at 0% (see the resident-ratio sketch at the end of this report).
- mem_used is at about 3 GB, with the high water mark at about 2.7 GB; however, no temp OOMs were noticed (see the sketch after the stats dump below).
- cbstats output on the destination node:
ep_diskqueue_memory: 0
ep_mem_high_wat: 2738041651
ep_mem_low_wat: 2415919104
ep_mem_tracker_enabled: true
ep_meta_data_memory: 549618696
ep_mutation_mem_threshold: 95
ep_warmup_min_memory_threshold: 100
mem_used: 3060163976
vb_active_ht_memory: 50790400
vb_active_itm_memory: 610687928
vb_active_meta_data_memory: 549618696
vb_active_perc_mem_resident: 0
vb_active_queue_memory: 0
vb_pending_ht_memory: 0
vb_pending_itm_memory: 0
vb_pending_meta_data_memory: 0
vb_pending_perc_mem_resident: 0
vb_pending_queue_memory: 0
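The numbers above line up with the failure: mem_used (~3.06 GB) is above ep_mem_high_wat (~2.74 GB), and with vb_active_perc_mem_resident at 0 there are no values left for the item pager to eject, so the ~550 MB of metadata stays resident. A minimal sketch (not part of the ticket) that parses cbstats-style "key: value" output piped on stdin and flags this condition; only the stat names shown in the dump above are assumed:

# check_dest_memory.py - minimal sketch; assumes `cbstats <host>:11210 all`
# output is piped on stdin, with stat names as shown in the dump above.
import sys

def parse_stats(stream):
    stats = {}
    for line in stream:
        key, sep, value = line.partition(':')
        if sep:
            stats[key.strip()] = value.strip()
    return stats

def check(stats):
    mem_used = int(stats['mem_used'])
    high_wat = int(stats['ep_mem_high_wat'])
    meta = int(stats['ep_meta_data_memory'])
    resident = int(stats['vb_active_perc_mem_resident'])
    if mem_used > high_wat:
        print('mem_used %d > high water mark %d: pager should be ejecting'
              % (mem_used, high_wat))
    if resident == 0:
        # Every active value is already ejected; the remaining metadata
        # cannot be paged out, so the bucket stays above the watermark.
        print('0%% resident, %d bytes of un-ejectable metadata' % meta)

if __name__ == '__main__':
    check(parse_stats(sys.stdin))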
Attached cbcollect_info for the source (10.3.4.27):
https://s3.amazonaws.com/bugdb/MB--/10_3_4_27.zip
and for the destination (10.3.4.30):
https://s3.amazonaws.com/bugdb/MB--/10_3_4_30.zip
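For reference, the active resident ratio reported above is conventionally derived from item counts. A hedged sketch of that calculation; curr_items and ep_num_non_resident are standard ep-engine stats but do not appear in the dump above, so treat their use here as an assumption:

def active_resident_ratio(curr_items, num_non_resident):
    # Percentage of active items whose values are still in memory;
    # 0 means every value has been ejected and only keys/metadata remain.
    if curr_items == 0:
        return 100.0
    return 100.0 * (curr_items - num_non_resident) / curr_items

# When everything has been ejected the ratio collapses to 0, matching
# vb_active_perc_mem_resident above:
print(active_resident_ratio(curr_items=1000000, num_non_resident=1000000))  # 0.0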