Couchbase Server / MB-5020

Rebalance state incorrectly reported as running even when it's not, and user is unable to stop it or fail over/add nodes


Details

    Description

      One of our customers had the master node fail in the middle of a rebalance. As a result the rebalance was actually aborted, but the ns_config flag that marks a rebalance as running was still set.

      What's most notable is that we disallow many actions in the UI while a rebalance is running. So the UI incorrectly believed a rebalance was in progress and would not allow the broken node to be failed over.
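
      For context, the UI's notion of "rebalance running" comes from that config flag rather than from the orchestrator process itself. A minimal sketch of such a read path; the function name and the exact value shapes are assumptions for illustration, not the actual ns_server code:

          %% Sketch: how a UI/REST layer might derive "rebalance is
          %% running" purely from ns_config. If the flag is stale,
          %% this keeps returning true even with no rebalance active.
          is_rebalance_running() ->
              case ns_config:search(ns_config:get(), rebalance_status) of
                  {value, running} -> true;
                  _ -> false
              end.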

      Stop rebalance didn't work either, because no rebalance was actually running.
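
      That's the failure mode in a nutshell: the stop request reaches the orchestrator, which correctly reports that nothing is rebalancing, and nothing ever clears the flag. A sketch, with the orchestrator call and return values assumed:

          %% Sketch of the broken stop path. Names and return values
          %% are assumptions, not the shipped code.
          handle_stop_rebalance() ->
              case ns_orchestrator:stop_rebalance() of
                  ok -> ok;  %% a real rebalance was stopped
                  not_rebalancing ->
                      %% nothing clears rebalance_status here, so the
                      %% UI keeps treating the cluster as mid-rebalance
                      not_rebalancing
              end.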

      The customer had to manually reset the rebalance state via a /diag/eval snippet that sets the rebalance_status config variable. I recommended something like this:

          ns_config:set(rebalance_status, {none, <<"stopped by human">>}).
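
      To apply that over the REST API, something along these lines should work (host, port, and credentials are placeholders; /diag/eval takes a raw Erlang expression as the POST body and requires admin credentials):

          curl -u Administrator:password -X POST \
               http://127.0.0.1:8091/diag/eval \
               -d 'ns_config:set(rebalance_status, {none, <<"stopped by human">>}).'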

      It's notable that 1.8.0 actually has code to clean up stale rebalance status, but it is only triggered when all nodes are healthy, which did not hold in this customer's case.
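
      That gate is exactly why the cleanup never fired here: one unhealthy node (the failed master) keeps the stale flag alive indefinitely. A sketch of such a gate; helper names are assumptions, not the 1.8.0 code:

          %% Sketch of an "all nodes healthy" gate on stale-status cleanup.
          maybe_clear_stale_rebalance_status(Nodes) ->
              case lists:all(fun is_node_healthy/1, Nodes) of
                  true ->
                      ns_config:set(rebalance_status,
                                    {none, <<"Rebalance status reset by janitor">>});
                  false ->
                      %% One unhealthy node means we never get here,
                      %% so the stale flag survives.
                      ok
              end.

          is_node_healthy(Node) ->
              %% Placeholder for a real health check (heartbeats, node status).
              lists:member(Node, [node() | nodes()]).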

      So the decision was to clear the rebalance status when asked, but to warn the user if our orchestrator is clearly not running a rebalance, because a network partition may mean that some other partition still has the old orchestrator trying to run the rebalance. A sketch of that behavior follows.
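
      The sketch below illustrates the decision; function names, return values, and the warning atom are all assumptions, not the shipped patch:

          %% Sketch of the chosen fix: always honor a stop request by
          %% clearing the status, but surface a warning when the local
          %% orchestrator is clearly not rebalancing, since an
          %% orchestrator on the other side of a partition may still
          %% be driving one.
          stop_rebalance_request() ->
              case ns_orchestrator:stop_rebalance() of
                  ok ->
                      ok;
                  not_rebalancing ->
                      ns_config:set(rebalance_status,
                                    {none, <<"stopped by user">>}),
                      {ok, warn_possible_remote_orchestrator}
              end.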


          People

            Assignee: Aliaksey Artamonau (Inactive)
            Reporter: Aleksey Kondratenko (alkondratenko) (Inactive)

