Couchbase Kubernetes / K8S-556

Couchbase-Operator Support for non-dynamic provisioning of persistent volumes


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Major
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 1.2.0
    • Component/s: operator

    Description

      Currently, the Couchbase Operator supports provisioning only via storage classes, which is dynamic in nature. We need to support scenarios where storage classes are not available (for example, NFS) and where a user creates PVs and PVCs manually.
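      For illustration, a minimal sketch of the manual flow this asks for, assuming a hypothetical NFS server at nfs.example.com exporting /exports/couchbase (all names here are made up):

      apiVersion: v1
      kind: PersistentVolume
      metadata:
        name: couchbase-nfs-pv
      spec:
        capacity:
          storage: 10Gi
        accessModes:
          - ReadWriteOnce
        storageClassName: ""        # no storage class; binding is purely manual
        nfs:
          server: nfs.example.com
          path: /exports/couchbase
      ---
      apiVersion: v1
      kind: PersistentVolumeClaim
      metadata:
        name: couchbase-nfs-pvc
      spec:
        accessModes:
          - ReadWriteOnce
        storageClassName: ""        # match only pre-created, classless PVs
        resources:
          requests:
            storage: 10Gi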

       

       

      Attachments


        Activity

          matt.carabine Matt Carabine added a comment -

          At the moment, local storage requires manual provisioning rather than dynamic provisioning. From what I've been reading, this may just be while it is in beta, but it may also be a permanent requirement, so this probably blocks local storage support.
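          For context, a minimal sketch of what manual provisioning of a local volume involves (node name and disk path are hypothetical): a local PV must pin itself to a node via nodeAffinity, and one such PV has to be created by hand for each disk on each node:

          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: couchbase-local-pv
          spec:
            capacity:
              storage: 10Gi
            accessModes:
              - ReadWriteOnce
            storageClassName: local-storage
            local:
              path: /mnt/disks/ssd1          # hypothetical disk mount
            nodeAffinity:
              required:
                nodeSelectorTerms:
                  - matchExpressions:
                      - key: kubernetes.io/hostname
                        operator: In
                        values:
                          - node-1           # hypothetical node name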
          simon.murray Simon Murray added a comment -

          Creating persistent volumes is an admin-level function, so I'm almost certain that we will not support this. That said, the administrator can create persistent volumes, tag them with a storage class, and we can claim them without any code modifications.


          manuel.hurtado Manuel Hurtado (Inactive) added a comment -

          Simon Murray, I tried that without success in a minishift env. Maybe you can help.

           

          This is what I did:

          1. Create a folder in the minishift VM:

           

          minishift ssh
          sudo -i
          mkdir -p /mnt/vda1/var/lib/minishift/openshift.local.volumes/pv/couchbase/data
          chmod -R 777 /mnt/vda1/var/lib/minishift/openshift.local.volumes/pv/couchbase/data
          

           

          2. Create a PV and tag it with the storage class name "slow":

           

          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: couchbase.data
          spec:
            capacity:
              storage: 10Gi
            accessModes:
              - ReadWriteOnce
            storageClassName: slow
            hostPath:
              path: /mnt/vda1/var/lib/minishift/openshift.local.volumes/pv/couchbase/data
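
          Assuming the manifest above is saved as couchbase-data-pv.yaml, it can be applied and checked with the oc CLI (or kubectl):

          oc create -f couchbase-data-pv.yaml
          oc get pv couchbase.data    # STATUS should be Available until a claim binds it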
          

           

          3. Test the PV by creating a PVC. It worked fine:

           

          apiVersion: v1
          kind: PersistentVolumeClaim
          metadata:
            name: my.test.claim
          spec:
            accessModes:
              - ReadWriteOnce
            storageClassName: slow
            resources:
              requests:
                storage: 100Mi
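
          A quick way to confirm the binding, assuming the claim manifest is saved as my-test-claim.yaml:

          oc create -f my-test-claim.yaml
          oc get pvc my.test.claim    # STATUS should be Bound, VOLUME couchbase.data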
          

           

          4. Create a Couchbase cluster spec pointing to this storage class:

           

          [... skipped ...]
            servers:
              - size: 3
                name: data_index_query
                services:
                  - data
                  - index
                  - query
                pod:
                  volumeMounts:
                    default: couchbase.product
                    data:  couchbase.data
                    index: couchbase.index
            volumeClaimTemplates:
              - metadata:
                  name: couchbase.product
                spec:
                  storageClassName: "slow"
                  resources:
                    requests:
                      storage: 758Mi
              - metadata:
                  name: couchbase.data
                spec:
                  storageClassName: "slow"
                  resources:
                    requests:
                      storage: 512Mi
              - metadata:
                  name: couchbase.index
                spec:
                  storageClassName: "slow"
                  resources:
                    requests:
                      storage: 256Mi

           

          5. When deploying the cluster, the operator fails with this message:

          msg="Cluster setup failed: fail to create member's pod (cb-local-storage-dev-0000): storageclasses.storage.k8s.io \"slow\" not found"

           

          Any clue?
          simon.murray Simon Murray added a comment -

          Yeah, we check that the storage class actually exists before doing anything; in this case an explicit StorageClass resource doesn't exist, but there is a PersistentVolume with that storage class. Rather than making our code more convoluted and having to check more things, I think there was talk of optionally disabling this check, as it accesses cluster-level resources, which some customers aren't too happy about.

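          The mismatch is easy to see from the CLI: the class name exists only as a field on the PV, not as a StorageClass resource. For example:

          oc get pv couchbase.data -o jsonpath='{.spec.storageClassName}'   # prints: slow
          oc get storageclass slow   # fails: storageclasses.storage.k8s.io "slow" not found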

          eric.schneider Eric Schneider (Inactive) added a comment -

          Description for release notes:

          Summary: Known Issue - Only dynamic volume provisioning via storage classes is supported.

          tommie Tommie McAfee added a comment -

          Removing the check for storageclass, since the volumeClaimTemplates can be evaluated without them, as is the case here. Also worth noting: this spec actually requires 9 claimTemplates, since there is a request for 3 nodes, each mounting 3 volumes.
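          To satisfy the nine claims with pre-created volumes, nine PVs tagged with the class would be needed. A rough sketch of one of them on the hostPath setup above (names, paths, and sizes are illustrative):

          # one of nine PVs; repeat with couchbase-pv-1 ... couchbase-pv-8 and distinct paths
          apiVersion: v1
          kind: PersistentVolume
          metadata:
            name: couchbase-pv-0
          spec:
            capacity:
              storage: 1Gi
            accessModes:
              - ReadWriteOnce
            storageClassName: slow
            hostPath:
              path: /mnt/vda1/var/lib/minishift/openshift.local.volumes/pv/couchbase/pv-0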

          tommie Tommie McAfee added a comment -

          After some quick experimentation, it's best to just create a 'local' storage class; this way users can create as many PVs as they want and tag them with the class. Verified that the use case here works on the current codebase with the following storage class:

          apiVersion: storage.k8s.io/v1
          kind: StorageClass
          metadata:
            name: slow
          provisioner: local
          volumeBindingMode: Immediate

           

           

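          With that class in place, the flow for this ticket's scenario becomes: create the StorageClass once, pre-create PVs tagged with storageClassName: slow, and point the cluster's volumeClaimTemplates at the class; the operator's existence check then passes. Assuming the manifests above are saved under the names shown:

          oc create -f storageclass-slow.yaml   # the StorageClass above
          oc create -f couchbase-data-pv.yaml   # repeat for each pre-created PV
          oc get storageclass slow              # the operator's check now succeeds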
          simon.murray Simon Murray added a comment -

          Tommie, can we close this enhancement out? It will make Lynn a lot happier (and the meeting shorter) if we only have the one.


          Yep, thanks for the reminder.


          People

            Assignee: tommie Tommie McAfee
            Reporter: sindhura.palakodety Sindhura Palakodety (Inactive)
            Votes: 1
            Watchers: 7

            Dates

              Created:
              Updated:
              Resolved:

              Gerrit Reviews

                There are no open Gerrit changes
