Couchbase Kubernetes / K8S-2437

Deletion and recreation of scope groups and collection groups referenced in couchbasebucket and couchbasescope results in error


Details


    Description

      Operator version: 2.3.0-159

      Steps:

      1. Deploy DAC and Operator
      2. Deploy a 3 node couchbase cluster with the default bucket (a minimal cluster sketch follows)
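      For context, a minimal sketch of the cluster manifest used in step 2. The cluster name, image tag
      and secret name below are illustrative and not taken from this report; the CouchbaseBucket "default"
      it manages is the one shown in step 6.

      ---
      apiVersion: couchbase.com/v2
      kind: CouchbaseCluster
      metadata:
        name: cb-example                 # illustrative name
      spec:
        image: couchbase/server:7.0.2    # illustrative server image
        security:
          adminSecret: cb-example-auth   # assumed pre-created basic-auth secret
        buckets:
          managed: true                  # lets the Operator pick up the CouchbaseBucket named default
        servers:
        - name: all_services
          size: 3
          services:
          - data
          - index
          - query
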
      3. Create a collection group called "collectiongroup0" with yaml:

      ---
      apiVersion: couchbase.com/v2
      kind: CouchbaseCollectionGroup
      metadata:
        name: collectiongroup0
      spec:
        names:
        - bugs
        - lola
      

      4. Create a scope and add this collection group to that scope with yaml:

      ---
      apiVersion: couchbase.com/v2
      kind: CouchbaseScope
      metadata:
        name: scope0
      spec:
        name: scope0
        collections:
          managed: true
          resources:
          - kind: CouchbaseCollectionGroup
            name: collectiongroup0
      

      5. Create a scope group "scopegroup0" with yaml:

      ---
      apiVersion: couchbase.com/v2
      kind: CouchbaseScopeGroup
      metadata:
        name: scopegroup0
      spec:
        names:
        - donald
        - daffy
        collections:
          managed: true
          selector:
            matchLabels:
              collections: antique
      

      6. kubectl edit the couchbasebucket "default" and add the scope group scopegroup0 under spec.scopes:

      apiVersion: couchbase.com/v2
      kind: CouchbaseBucket
      metadata:
        creationTimestamp: "2021-09-16T18:19:27Z"
        generation: 5
        name: default
        namespace: default
        resourceVersion: "8160569"
        uid: 612acfd8-4dd4-4887-b838-dfdfd3529aad
      spec:
        compressionMode: passive
        conflictResolution: seqno
        evictionPolicy: valueOnly
        ioPriority: low
        memoryQuota: 100Mi
        replicas: 1
        scopes:
          managed: true
          resources:
          - kind: CouchbaseScopeGroup
            name: scopegroup0
      

      7. Delete the collection group (either kubectl delete -f or kubectl delete couchbasecollectiongroup)
      8. Delete the scope group (either kubectl delete -f or kubectl delete couchbasescopegroup)
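
      With the names used above, the deletes in steps 7 and 8 amount to:

      kubectl delete couchbasecollectiongroup collectiongroup0
      kubectl delete couchbasescopegroup scopegroup0
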
      9. Try to recreate the collection group - error:

      arunkumarsenthilnathan@Arunkumars-MacBook-Pro couchbase-autonomous-operator-kubernetes_2.3.0-beta1-macos-x86_64 % kubectl create -f collectionsgroup.yaml

      Error from server (InternalError): error when creating "collectionsgroup.yaml": Internal error occurred: failed calling webhook "couchbase-operator-admission.default.svc": Post "https://couchbase-operator-admission.default.svc:443/couchbaseclusters/validate?timeout=10s": EOF

      10. Try to recreate the scope group - error:

      arunkumarsenthilnathan@Arunkumars-MacBook-Pro couchbase-autonomous-operator-kubernetes_2.3.0-beta1-macos-x86_64 % kubectl create -f scopegroups.yaml
      Error from server (InternalError): error when creating "scopegroups.yaml": Internal error occurred: failed calling webhook "couchbase-operator-admission.default.svc": Post "https://couchbase-operator-admission.default.svc:443/couchbaseclusters/validate?timeout=10s": EOF

      Recreation works fine if we first kubectl edit to remove the scope group reference from the bucket
      and the collection group reference from the scope (see the patch sketch below). This might be
      expected behavior, but filing it for due diligence.
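
      A minimal sketch of that edit expressed as patches, using the resource names from the steps above
      (equivalent to removing the entries by hand with kubectl edit):

      kubectl patch couchbasebucket default --type=json \
        -p='[{"op":"remove","path":"/spec/scopes/resources"}]'
      kubectl patch couchbasescope scope0 --type=json \
        -p='[{"op":"remove","path":"/spec/collections/resources"}]'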

        Activity

          Simon Murray added a comment -

          Was there no logging from the DAC?  That's not covered by cbopinfo as we have no idea where it is installed so you need to tail it.
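
          For reference, tailing the DAC logs would look roughly like the following, assuming it was
          deployed into the default namespace under the stock deployment name (an assumption, not taken
          from this report):

          kubectl -n default logs -f deployment/couchbase-operator-admission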

          Simon Murray added a comment -

          Ah my bad, you did do an --all.  If you look at the logs it's pretty obvious where the segfault was, usually a better indicator than whatever is thrown out by the CLI!

          Lynn Straus added a comment -

          Adding 2.3.0 fixversion to all 2.3.0-beta tickets as those fixes will be in the GA release (as well as beta)


          People

            Simon Murray
            Arunkumar Senthilnathan