  Couchbase Server / MB-4880

UI shows rebalance progress for failed-over node


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: feature-backlog
    • Affects Version/s: 1.7.2
    • Component/s: UI
    • Security Level: Public
    • Labels: None
    • Environment: EC2, 38 nodes total: 18 existing 1.7.2 nodes, 20 new 1.8.0 nodes, during a rebalance

    Description

      Node 10 was failed over earlier. In the /pools/default JSON it is marked as "clusterMembership" : "inactiveFailed".

      But in the UI, under Manage >> Server Nodes, it is showing as "UP" (green) and 30% rebalanced. It has stopped at 30%, while the other nodes have progressed closer to 40% at this point.

      See attached screenshot. Node 10 is the first one shown (the one at 30%).

      I'll upload the full JSON of the /pools/default as a separate private comment.

      This is confusing. The active_num and replica_num values for vbuckets are showing 0, so the cluster state looks correct and that node truly is out of the cluster. But the UI makes it look like the node is up, part of the cluster, and participating in the rebalance. Instead it should be shown as failed over and inactive.
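
      For reference, a minimal sketch (in Python, not the actual UI code) of reading the membership state that the Server Nodes page should honor. The /pools/default endpoint and the "clusterMembership" : "inactiveFailed" value are quoted from this ticket; the host, credentials, and the other field names ("hostname", "status") are illustrative assumptions.

          # Minimal sketch: read /pools/default and print each node's clusterMembership,
          # which is the field the Server Nodes page should reflect.
          import requests

          resp = requests.get(
              "http://127.0.0.1:8091/pools/default",
              auth=("Administrator", "password"),  # placeholder credentials
          )
          resp.raise_for_status()

          for node in resp.json().get("nodes", []):
              membership = node.get("clusterMembership")  # e.g. "active" or "inactiveFailed"
              # A node marked "inactiveFailed" has been failed over and should not be
              # rendered as "Up" with rebalance progress in the UI.
              print(node.get("hostname"), membership, node.get("status"))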

      Attachments


        Activity

          People

            Assignee: Andrei Baranouski
            Reporter: Tim Smith (Inactive)
            Votes: 0
            Watchers: 0

            Dates

              Created:
              Updated:
              Resolved:

              Gerrit Reviews

                There are no open Gerrit changes
