Couchbase Server / MB-52782

[CLI] [Recovery] Source/sink usernames/passwords used incorrectly


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Major
    • Fix Version/s: 7.1.2
    • Affects Version/s: 6.0.0, 6.0.1, 6.0.2, 6.0.3, 6.0.4, 6.0.5, 6.5.0, 6.5.1, 6.5.2, 6.6.0, 6.6.1, 6.6.2, 6.6.3, 6.6.4, 6.6.5, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4
    • Component/s: tools
    • Triage: Untriaged
    • 1
    • No

    Description

      What's the issue?
      The 'cbrecovery' command does not use the source/sink usernames/passwords correctly, resulting in it using the wrong credentials to communicate with the clusters.

      host, port, user, pwd, path = \
          pump.parse_spec(opts, sink, 8091)
       
      # retrieve the list of missing vBuckets
      cmd = "startRecovery"
      url = "/pools/default/buckets/%s/controller/%s" % (self.sink_bucket, cmd)
      err, conn, response = \
          pump.rest_request(host, int(port), user, pwd,
                            url, method='POST', reason='start_recovery')
      

      The important things in this code snippet are:

      1. We're communicating with the sink cluster
      2. We're using 'opts' to get the username

      def get_username(username):
          return username or os.environ.get('CB_REST_USERNAME', '')
       
      ...
       
      username = get_username(opts.username)
      

      Looking further down, we see the problem: the same username option ('opts.username') is used when communicating with both the source and the sink cluster; the same applies to the password.

      Why haven't we noticed this until now?
      I suspect that during development both clusters had the same username/password.

      What's the fix?
      We should use the source credentials when communicating with the source cluster and the sink credentials when communicating with the sink cluster.
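
      A minimal sketch of the idea (not the actual patch; the helper name, the preference order, and the use of CB_REST_PASSWORD are assumptions): resolve credentials per cluster, preferring anything embedded in that cluster's spec (e.g. http://user:pass@host:8091) before falling back to the global options/environment, and pass the sink's credentials when starting the recovery.

      import os

      def resolve_credentials(spec_user, spec_pwd, opts):
          # Hypothetical helper: prefer credentials taken from the cluster spec,
          # then the CLI options, then the environment variables.
          user = spec_user or opts.username or os.environ.get('CB_REST_USERNAME', '')
          pwd = spec_pwd or opts.password or os.environ.get('CB_REST_PASSWORD', '')
          return user, pwd

      # When talking to the sink cluster, use the sink's credentials
      host, port, user, pwd, path = \
          pump.parse_spec(opts, sink, 8091)
      user, pwd = resolve_credentials(user, pwd, opts)

      cmd = "startRecovery"
      url = "/pools/default/buckets/%s/controller/%s" % (self.sink_bucket, cmd)
      err, conn, response = \
          pump.rest_request(host, int(port), user, pwd,
                            url, method='POST', reason='start_recovery')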

      Is there a workaround?
      Yes, use the same username/password on both the source and sink clusters; this could be a temporary user if required.
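
      If scripting the workaround, one option (a sketch, assuming the standard RBAC REST API on port 8091; the host names, user name and role are placeholders) is to create an identical temporary user on both clusters, run cbrecovery with those credentials, and then delete the user again:

      import requests

      def create_temp_user(host, admin_user, admin_pwd, name, password):
          # PUT /settings/rbac/users/local/<name> creates (or updates) a local user.
          resp = requests.put(
              "http://%s:8091/settings/rbac/users/local/%s" % (host, name),
              auth=(admin_user, admin_pwd),
              data={"password": password, "roles": "admin"})
          resp.raise_for_status()

      def delete_temp_user(host, admin_user, admin_pwd, name):
          resp = requests.delete(
              "http://%s:8091/settings/rbac/users/local/%s" % (host, name),
              auth=(admin_user, admin_pwd))
          resp.raise_for_status()

      # Give both clusters the same temporary user for the duration of the recovery.
      create_temp_user("cluster-a.example.com", "AdminA", "passwordA", "recovery", "recovery-pwd")
      create_temp_user("cluster-b.example.com", "AdminB", "passwordB", "recovery", "recovery-pwd")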

      Steps to reproduce

      1. Create two two-node clusters, a and b (with different usernames/passwords)
      2. Create bucket a on cluster a (with replicas disabled)
      3. Create bucket b on cluster b (with replicas disabled)
      4. Set up XDCR from cluster a to cluster b (bucket a -> bucket b)
      5. Load data into bucket a on cluster a (and wait for XDCR to catch up)
      6. Hard failover one of the nodes on cluster a, causing half of the vBuckets to be lost (see the sketch after this list)
      7. Use cbrecovery to recover the lost vBuckets from cluster b back to cluster a
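
      For step 6, the hard failover can be driven via the REST API; a minimal sketch (host names and credentials are placeholders), using POST /controller/failOver:

      import requests

      CLUSTER_A = "http://cluster-a-node1.example.com:8091"
      AUTH_A = ("AdminA", "passwordA")

      # Look up the otpNode name of the node to fail over.
      nodes = requests.get(CLUSTER_A + "/pools/default", auth=AUTH_A).json()["nodes"]
      victim = next(n["otpNode"] for n in nodes if "cluster-a-node2" in n["hostname"])

      # Hard (not graceful) failover, which leaves the failed node's active vBuckets missing.
      resp = requests.post(CLUSTER_A + "/controller/failOver",
                           auth=AUTH_A, data={"otpNode": victim})
      resp.raise_for_status()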

          People

            gilad.kalchheim Gilad Kalchheim
            james.lee James Lee