  Couchbase Server
  MB-46669

Bump next UIDs after Quorum failover of non-KV nodes


Details

    Description

      A simple way to verify this, perhaps:
      1. Create a 2-node cluster - .215 (kv) and .217 (index)
      2. Create a bucket 'testBucket'
      3. Create a collection 'pre-qf' (a REST sketch for steps 2-3 follows the manifest output below)
      4. Check the manifest:

      curl -v -u Administrator:password http://172.23.105.215:8091/pools/default/buckets/testBucket/scopes

      gives

      {"uid":"1","scopes":[{"name":"_default","uid":"0","collections":[{"name":"pre-qf","uid":"8","maxTTL":0},{"name":"_default","uid":"0","maxTTL":0}]}]}

      5. Stop the server on .217 and fail it over unsafely (quorum failover; a REST sketch follows below)
      6. On the resulting single-node cluster (.215), create a collection 'post-qf'
      7. Check the manifest:

      {"uid":"2","scopes":[{"name":"_default","uid":"0","collections":[{"name":"post-qf","uid":"9","maxTTL":0},{"name":"pre-qf","uid":"8","maxTTL":0},{"name":"_default","uid":"0","maxTTL":0}]}]}

      So the manifest UID was only incremented monotonically (1 -> 2); it did not receive the post-failover bump.
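
      For step 5, once the service on .217 is stopped (e.g. systemctl stop couchbase-server on a systemd install), the unsafe (quorum) failover can be triggered over REST. A sketch (the otpNode name here is an assumption derived from the node IP):

      curl -X POST -u Administrator:password http://172.23.105.215:8091/controller/startFailover -d otpNode=ns_1@172.23.105.217 -d allowUnsafe=true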

      If you were to repeat the same with the KV service on .217, then step 7 would yield

      {"uid":"1002","scopes":[{"name":"_default","uid":"0","collections":[{"name":"post-qf","uid":"1009","maxTTL":0},{"name":"pre-qf","uid":"8","maxTTL":0},{"name":"_default","uid":"0","maxTTL":0}]}]}
      

      Here the UIDs got bumped by 0x1000 as expected (manifest uid 1002, collection uid 1009).
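
      A quick sanity check of the arithmetic (plain shell, nothing Couchbase-specific): the pre-failover values plus the 0x1000 bump plus one increment for the 'post-qf' change line up with the hex UIDs in the manifest above.

      printf '%x\n' $((0x1 + 0x1000 + 1))   # 1002 = manifest uid after the bump plus the post-qf change
      printf '%x\n' $((0x8 + 0x1000 + 1))   # 1009 = collection uid assigned to 'post-qf'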



          People

            artem Artem Stemkovski
            sumedh.basarkod Sumedh Basarkod (Inactive)


