Couchbase Server / MB-19512

Rebalance-in fails with {error,eacces}, new node still shows up under cluster nodes


Details

    • Type: Bug
    • Resolution: Cannot Reproduce
    • Priority: Critical
    • Affects Version: 4.5.0
    • Fix Version: 4.5.0
    • Component: ns_server
    • Labels: None

    Description

      Build
      4.5.0-2440

      While verifying some bugs manually, I ran into a case where a rebalance-in (adding .175 to the cluster containing .139) failed while trying to clean up old buckets (per the UI log). However, the UI shows .175 as already part of the cluster.

      Rebalance exited with reason {buckets_cleanup_failed,['ns_1@172.23.105.175']}
      ns_orchestrator 002	ns_1@172.23.106.139	3:10:39 PM Wed May 4, 2016
       
      Failed to cleanup old buckets on node 'ns_1@172.23.105.175': {error,eacces}	ns_rebalancer 000	ns_1@172.23.106.139	3:10:39 PM Wed May 4, 2016
       
      Starting rebalance, KeepNodes = ['ns_1@172.23.105.175','ns_1@172.23.106.139'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes
      ns_orchestrator 004	ns_1@172.23.106.139	3:10:39 PM Wed May 4, 2016
      

      Attaching cbcollect_info output from .139 and .175.
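      Not part of the original report, but as a hedged diagnostic sketch: `{error,eacces}` from a file operation in Erlang maps to POSIX EACCES, i.e. the ns_server process lacks permission on the bucket data directory of the incoming node. The helper name `check_bucket_dir` and the default data path below are assumptions (the path is the standard Linux install location and may differ per install):

      ```shell
      # Hypothetical permission check for the bucket data directory.
      # check_bucket_dir is a made-up helper; pass the directory ns_server
      # would be cleaning up.
      check_bucket_dir() {
          dir="$1"
          # Deleting entries in a directory needs write + search (execute)
          # permission on the directory itself.
          if [ -w "$dir" ] && [ -x "$dir" ]; then
              echo "ok: $dir is writable by $(id -un)"
          else
              echo "eacces likely: $dir is not writable by $(id -un)"
          fi
      }

      # Default Linux install path (assumption; adjust for your layout).
      check_bucket_dir "${COUCHBASE_DATA_DIR:-/opt/couchbase/var/lib/couchbase/data}"
      ```

      If the directory (or files left behind by a previous install) is owned by root rather than the couchbase service user, bucket cleanup during rebalance-in could fail exactly this way.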

      Attachments



          People

            Assignee: apiravi Aruna Piravi (Inactive)
            Reporter: apiravi Aruna Piravi (Inactive)
            Votes: 0
            Watchers: 2

            Dates

              Created:
              Updated:
              Resolved:

              Gerrit Reviews

                There are no open Gerrit changes
