Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version: 1.7.2
- Security Level: Public
- None
- Environment: EC2, 38 nodes total: 18 existing 1.7.2 nodes, 20 new 1.8.0 nodes, during a rebalance
Description
Node 10 was failed over earlier. In the /pools/default JSON it is marked as "clusterMembership" : "inactiveFailed".
But in the UI, under Manage >> Server Nodes, it shows as "Up" (green) and 30% rebalanced. It has stalled at 30%, while the other nodes have progressed to roughly 40% at this point.
See attached screenshot. Node 10 is the first one shown (the one at 30%).
I'll upload the full JSON of the /pools/default as a separate private comment.
This is confusing. The node's active_num and replica_num vBucket values show 0, so the cluster state itself appears correct and the node truly is out of it. But the UI makes it look like the node is up, part of the cluster, and participating in the rebalance. It should instead show as failed over and inactive.
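For anyone triaging this, the authoritative membership state is in the /pools/default JSON rather than the UI. A minimal sketch of checking it, using a made-up excerpt of the response (the field names "nodes", "hostname", and "clusterMembership" match the REST API; the hostnames and values here are hypothetical):

```python
import json

# Hypothetical excerpt of a /pools/default response. In practice this
# would come from `curl http://<node>:8091/pools/default`.
sample = '''
{
  "nodes": [
    {"hostname": "node9:8091",  "clusterMembership": "active"},
    {"hostname": "node10:8091", "clusterMembership": "inactiveFailed"}
  ]
}
'''

def failed_over_nodes(pools_default_json):
    """Return hostnames whose clusterMembership marks them as failed over."""
    data = json.loads(pools_default_json)
    return [n["hostname"] for n in data["nodes"]
            if n["clusterMembership"] == "inactiveFailed"]

# node10 is out of the cluster according to the JSON,
# even though the UI shows it as up and rebalancing.
print(failed_over_nodes(sample))
```

This confirms the discrepancy: the REST API reports node 10 as "inactiveFailed" while the Server Nodes page shows it green and rebalancing.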