Couchbase Kubernetes / K8S-2332

No nodeSelector for Couchbase pod in Helm Chart


Details

    • Type: Improvement
    • Status: Resolved
    • Priority: Minor
    • Resolution: Fixed
    • Labels: helm, kubernetes

    Description

      In standard K8s config we have the following:

          pod:
            metadata:
              labels:
                couchbase_services: all
              annotations:
                couchbase.acme.com: production
            spec:
              nodeSelector:
                instanceType: large
      

      This spec for nodeSelector is not present in the Helm chart values:

      https://github.com/couchbase-partners/helm-charts/blob/master/charts/couchbase-operator/values-all.yaml

      Likewise, there is no documentation for it:
      https://docs.couchbase.com/operator/current/helm-couchbase-config.html
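
      Presumably the equivalent would live under the cluster server's pod spec in the Helm chart values. An untested sketch of what that could look like (the default server name, size, and instanceType label are just examples):

          cluster:
            servers:
              default:
                size: 3
                pod:
                  spec:
                    nodeSelector:
                      instanceType: large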


        Activity

          patrick.stephens Patrick Stephens (Inactive) added a comment -

          It's not documented in the CRD either because it's just the standard k8s pod spec: https://docs.couchbase.com/operator/current/resource/couchbasecluster.html#couchbaseclusters-spec-servers-pod

          We explicitly remove some of the common stuff from the auto-generation, including the pod spec: https://github.com/couchbase-partners/helm-charts/blob/master/tools/value-generation/gen.py#L167 

          It is all still there, but it is part of the standard k8s definition (which is also version dependent). There is a lot in the pod spec, and because it is common we do not document everything; the official Kubernetes docs are a better source for this.

          There are also some constraints imposed on the pod definition by the operator, and setting these fields can trigger a pod update (restart).

          What's the specific ask here? Do we need to document the usage of node selector for pods (without helm as well)? 
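
          For reference, on the non-Helm side the same field is set directly in the CouchbaseCluster resource. A minimal sketch of the relevant fragment (other required cluster fields are elided, and the instanceType label is just a placeholder):

          apiVersion: couchbase.com/v2
          kind: CouchbaseCluster
          metadata:
            name: cb-example
          spec:
            image: couchbase/server:6.6.2
            servers:
            - name: default
              size: 3
              services:
              - data
              pod:
                spec:
                  nodeSelector:
                    instanceType: large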


          patrick.stephens Patrick Stephens (Inactive) added a comment - edited

          Note that using server groups directly affects the node selector: the operator sets some explicit values for the failure domain, for example.

          The pod specification can be used with Helm just as it can with the non-Helm approach; we just don't document all the common stuff: https://github.com/couchbase-partners/helm-charts/blob/master/charts/couchbase-operator/values-all.yaml#L1647-L1655

          We can likely make that more obvious though in the docs as an improvement.
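
          To illustrate the server-group interaction, a rough sketch (the group names are placeholders; the operator schedules each server onto nodes whose failure-domain zone label matches the group, and the exact label key is version dependent):

          spec:
            serverGroups:
            - us-east-1a
            - us-east-1b
            servers:
            - name: default
              size: 3
              services:
              - data
              serverGroups:
              - us-east-1a

          Any user-supplied nodeSelector then has to be compatible with the zone constraints the operator adds for these groups.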

          tin.tran Tin Tran added a comment -

          Thank you Patrick Stephens

          I currently ask the customer to set the nodeSelector like this:

            servers:
              pod:
                spec:
                  containers: []
                  nodeSelector:
                    somenode: somenode

          It would be good to have an example here: https://github.com/couchbase-partners/helm-charts/tree/master/charts/couchbase-operator/examples

          Thank you Patrick.


          patrick.stephens Patrick Stephens (Inactive) added a comment -

          Confirmed this should work, although make sure to specify the name of the server above.

          I tested this with a KIND cluster like so:

          kind: Cluster
          apiVersion: kind.x-k8s.io/v1alpha4
          nodes:
          - role: control-plane
          - role: worker
            labels:
              node-for-cb: donotselectme
          - role: worker
            labels:
              node-for-cb: donotselectme
          - role: worker
            labels:
              node-for-cb: selectme

          Then used the following values:

          cluster:
            image: couchbase/server:6.6.2
            servers:
              default:
                size: 1
                pod:
                  spec:
                    nodeSelector:
                      node-for-cb: selectme
          
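          For reference, a sketch of how these values could be applied (the release name test matches the pod names below; the repo URL is the public chart repository, and the values filename is arbitrary):

          helm repo add couchbase https://couchbase-partners.github.io/helm-charts
          helm install test couchbase/couchbase-operator --values my-values.yaml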

          This then shows it being deployed correctly using the node selector:

          kubectl get nodes --show-labels          
          NAME                 STATUS   ROLES                  AGE     VERSION   LABELS
          kind-control-plane   Ready    control-plane,master   7m31s   v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-control-plane,kubernetes.io/os=linux,node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.kubernetes.io/exclude-from-external-load-balancers=
          kind-worker          Ready    <none>                 7m3s    v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-worker,kubernetes.io/os=linux,node-for-cb=donotselectme
          kind-worker2         Ready    <none>                 7m3s    v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-worker2,kubernetes.io/os=linux,node-for-cb=selectme
          kind-worker3         Ready    <none>                 7m3s    v1.21.1   beta.kubernetes.io/arch=amd64,beta.kubernetes.io/os=linux,kubernetes.io/arch=amd64,kubernetes.io/hostname=kind-worker3,kubernetes.io/os=linux,node-for-cb=donotselectme

          kubectl get pods --all-namespaces -o wide
          NAMESPACE            NAME                                                   READY   STATUS    RESTARTS   AGE     IP           NODE                 NOMINATED NODE   READINESS GATES
          default              test-couchbase-admission-controller-68cf869d5c-f65j5   1/1     Running   0          3m19s   10.244.2.2   kind-worker2         <none>           <none>
          default              test-couchbase-cluster-0000                            1/1     Running   0          2m50s   10.244.2.3   kind-worker2         <none>           1/1
          default              test-couchbase-operator-77595c947f-f4pgw               1/1     Running   0          3m19s   10.244.1.2   kind-worker          <none>           <none>
          

          As you can see, it deploys to the kind-worker2 node, which is the only one with the correct label.
          Looking at the pod details, it does indeed have the selector I specified.

          kubectl describe pod test-couchbase-cluster-0000 
          Name:         test-couchbase-cluster-0000
          ...
          Node-Selectors:              node-for-cb=selectme
          ...
          
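          The same check can be scripted; a small sketch using jsonpath (pod name as above):

          kubectl get pod test-couchbase-cluster-0000 -o jsonpath='{.spec.nodeSelector}'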

          patrick.stephens Patrick Stephens (Inactive) added a comment -

          I've added this as an example: https://github.com/couchbase-partners/helm-charts/pull/61

          patrick.stephens Patrick Stephens (Inactive) added a comment -

          Added example now.

          patrick.stephens Patrick Stephens (Inactive) added a comment -

          If you (and the user) are happy then please close, or let me know of any further updates you want.

          People

            Assignee: tin.tran Tin Tran
            Reporter: tin.tran Tin Tran
            Votes: 0
            Watchers: 2
