Couchbase Kubernetes / K8S-2531

Fluent Bit: Set hostname label to the pod name as seen by CBS


Details

    • Type: Improvement
    • Status: Closed
    • Priority: Minor
    • Resolution: Fixed
    • Affects Version/s: None
    • Fix Version/s: 2.3.0
    • Component/s: logging
    • Labels: None
    • Sprint: 5: Helm, backup, Marketplace, 50: Validation/Enforcement, 1: Recovery to productivity, 3: SBEE, Multi-Cert
    • Story Points: 1

    Description

      Currently the hostname label is set to just the pod name, e.g. cb7-0000, whereas CBS sees the hostnames as pod.cluster.namespace.svc, for example cb7-0000.cb7.default.svc. For consistency it'd be good to have the hostname label match that.
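
      As a minimal, hypothetical sketch of the requested behaviour (not the actual fix), assuming the pod name, Couchbase cluster name, and namespace are exposed to Fluent Bit as environment variables named POD_NAME, CLUSTER_NAME, and POD_NAMESPACE (e.g. via the Kubernetes downward API), a modify filter could overwrite the label with the CBS-style FQDN:

      [FILTER]
          # Hypothetical: replace the bare pod name with <pod>.<cluster>.<namespace>.svc
          Name    modify
          Match   couchbase.log.*
          Set     hostname   ${POD_NAME}.${CLUSTER_NAME}.${POD_NAMESPACE}.svc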

      Attachments


        Activity

          arunkumar Arunkumar Senthilnathan added a comment -
          Covered in logging tests

          build-team Couchbase Build Team added a comment -
          Build couchbase-fluent-bit-1.1.3-111 contains couchbase-fluent-bit commit 25c7f11 with commit message:
          K8S-2531: Set pod hostname label

          patrick.stephens Patrick Stephens (Inactive) added a comment -
          Let's go with hostname then; it's just a helper currently and not being used by anything, so we may as well make it accurate.
          aaron.benton Aaron Benton added a comment -
          Patrick Stephens no preference. I've used node, hostname, and I have a few customers that use instance. instance is a label added automatically by Prometheus at scrape time; we then added a relabel config in Prometheus to strip the port off, so metrics from various jobs scraping the same instance can still be correlated.
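
          As a hedged illustration of the relabelling Aaron describes (the job name, target address, and port below are hypothetical; only the relabel mechanism is standard Prometheus configuration), a scrape job can strip the port from the address so the instance label is just the host:

          scrape_configs:
            - job_name: 'couchbase'                                   # hypothetical job name
              static_configs:
                - targets: ['cb7-0000.cb7.default.svc:9091']          # hypothetical target and port
              relabel_configs:
                # Drop the ':port' suffix so the instance label matches the host name,
                # letting metrics from different jobs on the same host be correlated.
                - source_labels: [__address__]
                  regex: '(.+):\d+'
                  target_label: instance
                  replacement: '$1'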

          patrick.stephens Patrick Stephens (Inactive) added a comment -
          Aaron Benton any preference or comments?

          marks.polakovs Marks Polakovs added a comment -
          Personally I'd argue for hostname, as it's not K8s-specific: the configuration (and thus LogQL) can be the same whether on K8s or on-prem, whereas if we used couchbase.node we'd need hostname for on-prem but couchbase.node for K8s.

          patrick.stephens Patrick Stephens (Inactive) added a comment -
          Hostname would also be fine I think and then switch to that for Loki.

          patrick.stephens Patrick Stephens (Inactive) added a comment - edited
          I think the right way to do this is using the modify filter with conditional rules to overwrite the couchbase.node key:

          [FILTER]
              Name        modify
              Match       couchbase.log.*
              Condition   Key_Exists       pod['namespace']
              Condition   Key_Exists       couchbase['cluster']
              Condition   Key_Exists       couchbase['node']
              Set         couchbase.node   $couchbase['node'].$couchbase['cluster'].$pod['namespace']

          Not sure if you can set a nested field that way, so confirm; it might also be better to set up a new field for it, but really we want a simple one-size-fits-all for Loki labelling with both on-premise and CAO clusters.
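
          A hedged alternative in case the modify filter cannot reference nested fields: a Fluent Bit lua filter can build the same value from the nested record fields assumed in the comment above (the script file name and function name are hypothetical, and the couchbase/pod record layout is taken from the conditions above):

          [FILTER]
              Name     lua
              Match    couchbase.log.*
              script   set_hostname.lua
              call     set_hostname

          -- set_hostname.lua (hypothetical): append cluster and namespace to couchbase.node
          function set_hostname(tag, timestamp, record)
              local cb  = record["couchbase"]
              local pod = record["pod"]
              if cb and cb["node"] and cb["cluster"] and pod and pod["namespace"] then
                  cb["node"] = cb["node"] .. "." .. cb["cluster"] .. "." .. pod["namespace"] .. ".svc"
                  return 2, timestamp, record  -- record modified, keep original timestamp
              end
              return 0, timestamp, record      -- record unchanged
          end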

          People

            Assignee: Alex.emery Alex Emery
            Reporter: marks.polakovs Marks Polakovs
            Votes: 0
            Watchers: 4

            Dates

              Created:
              Updated:
              Resolved:

              Gerrit Reviews

                There are no open Gerrit changes
