Description
Build
4.5.0-2440
While verifying some bugs manually, I ran into a case where a rebalance-in (of .175 into .139) failed while trying to clean up old buckets (per the UI log). However, the UI shows .175 as already part of the cluster.
Rebalance exited with reason {buckets_cleanup_failed,['ns_1@172.23.105.175']}   ns_orchestrator 002   ns_1@172.23.106.139   3:10:39 PM Wed May 4, 2016

Failed to cleanup old buckets on node 'ns_1@172.23.105.175': {error,eacces}   ns_rebalancer 000   ns_1@172.23.106.139   3:10:39 PM Wed May 4, 2016

Starting rebalance, KeepNodes = ['ns_1@172.23.105.175','ns_1@172.23.106.139'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes   ns_orchestrator 004   ns_1@172.23.106.139   3:10:39 PM Wed May 4, 2016
Attaching cbcollect_info output from .139 and .175.