Details
- Type: Bug
- Resolution: Fixed
- Priority: Blocker
- Affects Version: 1.8.0
- Security Level: Public
Description
One of our customers had the master node fail in the middle of a rebalance. As a result the rebalance was actually aborted, but the ns_config flag that marks a rebalance as running was still set.
What's most notable is that we disallow many actions in the UI while a rebalance is running. So the UI incorrectly believed a rebalance was in progress and refused to fail over the broken node.
Stopping the rebalance didn't work either, because no rebalance was actually running.
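For illustration, a minimal sketch of how such a flag-based gate could look. The function name and the exact "running" marker value are assumptions for this sketch, not the actual ns_server code; the point is that a stale flag makes the check return true even though nothing is rebalancing:

%% Illustrative sketch only; the function name and marker value are assumed.
is_rebalance_running() ->
    case ns_config:search(rebalance_status) of
        {value, running} -> true;  %% stale flag still says "running"
        _ -> false
    end.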
The customer had to manually reset the rebalance state via a /diag/eval snippet that sets the rebalance_status config variable. I recommended something like this:
ns_config:set(rebalance_status, {node, <<"stopped by human">>}).
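For reference, /diag/eval snippets like this are typically posted to the REST port with curl; the credentials and port below are placeholders for the cluster's actual admin credentials:

curl -u Administrator:password -X POST http://localhost:8091/diag/eval \
  -d 'ns_config:set(rebalance_status, {node, <<"stopped by human">>}).'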
It's notable that 1.8.0 actually has code to clean up a stale rebalance status, but it is only triggered when all nodes are healthy, which did not hold in this customer's case.
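A hedged sketch of that cleanup gate (the function name and arguments are illustrative assumptions, and the status value mirrors the snippet above rather than the real 1.8.0 code):

%% Illustrative only: the stale flag is cleared just when every node is healthy.
maybe_clear_stale_rebalance_status(AllNodes, HealthyNodes) ->
    case AllNodes -- HealthyNodes of
        [] ->
            %% all nodes healthy: safe to drop the stale flag
            ns_config:set(rebalance_status, {node, <<"stale status cleared">>});
        _Unhealthy ->
            %% some node is down or partitioned, so the cleanup never
            %% fires, which is exactly the situation this customer hit
            ok
    end.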
So the decision was to actually clear the rebalance status when asked, but to warn the user if our orchestrator is clearly not running a rebalance, because a network partition may mean that some other partition still has an old orchestrator trying to run a rebalance.
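A sketch of that decided behavior (illustrative only; the orchestrator_is_rebalancing/0 helper and the warning atom are assumptions, not the actual patch):

%% Illustrative only: clear the flag on request, but surface a warning
%% when our orchestrator clearly isn't rebalancing, since an old
%% orchestrator in another partition may still be driving a rebalance.
handle_stop_rebalance() ->
    MustWarn = not orchestrator_is_rebalancing(),  %% assumed helper
    ns_config:set(rebalance_status, {node, <<"stopped by user">>}),
    case MustWarn of
        true  -> {ok, warn_rebalance_was_not_running};
        false -> ok
    end.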