Details
- Bug
- Resolution: Fixed
- Major
- None
- 6 - Kraken Cleanup
- 1
Description
Build 2.4.0-163
couchbase-cluster.yaml, logs.json, and dump.json attached.
When running performance tests for operator 2.4.0 with multiple buckets (RE: CBSE-12584), the operator pod enters CrashLoopBackOff. The logs show the following panic:
{"level":"info","ts":1668782738.4970152,"logger":"kubernetes","msg":"Creating pod","cluster":"default/cb-example-perf","name":"cb-example-perf-0000","image":"registry.gitlab.com/cb-vanilla/server:7.1.0-2556"}
{"level":"info","ts":1668782738.5043304,"msg":"Observed a panic in reconciler: runtime error: invalid memory address or nil pointer dereference","controller":"couchbase-controller","object":{"name":"cb-example-perf","namespace":"default"},"namespace":"default","name":"cb-example-perf","reconcileID":"8c18eb3b-9a1f-41ee-b113-5d2fbf769cdb"}
panic: runtime error: invalid memory address or nil pointer dereference [recovered]
panic: runtime error: invalid memory address or nil pointer dereference
[signal SIGSEGV: segmentation violation code=0x1 addr=0x10 pc=0x150fe59]
goroutine 366 [running]:
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile.func1()
	sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:118 +0x1f4
panic({0x1788180, 0x27fa1a0})
	runtime/panic.go:838 +0x207
github.com/couchbase/couchbase-operator/pkg/util/k8sutil.applyPodScheduling(...)
	github.com/couchbase/couchbase-operator/pkg/util/k8sutil/pod_util.go:863
github.com/couchbase/couchbase-operator/pkg/util/k8sutil.CreateCouchbasePodSpec(0x203000?, {0x1c1ad18?, 0xc000375c80?}, 0xc000227600?, {0x3, {0xc000d379b0, 0x4}, {0xc000910640, 0x1, 0x1}, ...}, ...)
	github.com/couchbase/couchbase-operator/pkg/util/k8sutil/pod_util.go:807 +0xbf9
github.com/couchbase/couchbase-operator/pkg/util/k8sutil.CreateCouchbasePod({0x7?, 0xc000e8ccc0?}, 0x2?, {0x1c15a78, 0xc00000f240}, 0xc000227600, {0x1c1ad18, 0xc000375c80}, {0x3, {0xc000d379b0, ...}, ...})
	github.com/couchbase/couchbase-operator/pkg/util/k8sutil/pod_util.go:90 +0x3ba
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).createPod(0xc000d10000, {0x1c14fb0, 0xc0003749c0}, {0x1c1ad18, 0xc000375c80}, {0x3, {0xc000d379b0, 0x4}, {0xc000910640, 0x1, ...}, ...}, ...)
	github.com/couchbase/couchbase-operator/pkg/cluster/pod.go:37 +0x131
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).createMember(0xc000d10000, {0x3, {0xc000d379b0, 0x4}, {0xc000910640, 0x1, 0x1}, {0x0, 0x0, 0x0}, ...})
	github.com/couchbase/couchbase-operator/pkg/cluster/member.go:168 +0x2c6
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).createInitialMember(0xc000d10000)
	github.com/couchbase/couchbase-operator/pkg/cluster/member.go:317 +0x29b
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).create(0xc000d10000)
	github.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:326 +0x1b6
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).reconcile(0xc000d10000)
	github.com/couchbase/couchbase-operator/pkg/cluster/reconcile.go:150 +0x585
github.com/couchbase/couchbase-operator/pkg/cluster.(*Cluster).runReconcile(0xc000d10000)
	github.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:488 +0x4e9
github.com/couchbase/couchbase-operator/pkg/cluster.New({{0x7ffcef198591?, 0xc0001a6ec0?}, {0x19aaaad?, 0x19b6604?}}, 0xc000227600)
	github.com/couchbase/couchbase-operator/pkg/cluster/cluster.go:181 +0x814
github.com/couchbase/couchbase-operator/pkg/controller.(*CouchbaseClusterReconciler).Reconcile(0xc00091b300, {0xc000700000?, 0xc0001fc780?}, {{{0xc000d37846, 0x7}, {0xc000d37850, 0xf}}})
	github.com/couchbase/couchbase-operator/pkg/controller/controller.go:75 +0x745
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Reconcile(0x1c14f40?, {0x1c14fe8?, 0xc0001fc780?}, {{{0xc000d37846?, 0x18dd7e0?}, {0xc000d37850?, 0x4041f4?}}})
	sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:121 +0xc8
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).reconcileHandler(0xc000643400, {0x1c14f40, 0xc00091aa80}, {0x17fb6c0?, 0xc000528740?})
	sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:320 +0x33c
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).processNextWorkItem(0xc000643400, {0x1c14f40, 0xc00091aa80})
	sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:273 +0x1d9
sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2.2()
	sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:234 +0x85
created by sigs.k8s.io/controller-runtime/pkg/internal/controller.(*Controller).Start.func2
	sigs.k8s.io/controller-runtime@v0.13.1/pkg/internal/controller/controller.go:230 +0x325
This currently only seems to crop up during runs where the number of created buckets is greater than 1. Prior to bumping the build to -163, we were hitting an auto-compaction startTime error:
unexpected status code: request failed POST http://cb-example-perf-0002.cb-example-perf.default.svc:8091/controller/setAutoCompaction 400 Bad Request: {"errors":{"allowedTimePeriod":"Start time must not be the same as end time"}}
After updating to the new build this error no longer occurs, but the panic does.
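The stack trace bottoms out in applyPodScheduling (pod_util.go:863), and the linked ticket K8S-2917 closes a nil reference gap on spec.pod.affinity, so the crash is consistent with dereferencing an optional affinity pointer that is unset when the cluster spec omits it. A minimal Go sketch of that failure mode and the guard; the types and function below are hypothetical simplifications for illustration, not the operator's actual definitions:

```go
package main

import "fmt"

// Affinity is a stand-in for an optional scheduling sub-struct
// (hypothetical; the real CRD types differ).
type Affinity struct {
	NodeSelector string
}

// PodSpec mirrors the shape of a spec.pod block where affinity is
// optional and therefore a pointer that may be nil.
type PodSpec struct {
	Affinity *Affinity // nil when spec.pod.affinity is omitted
}

// applyPodScheduling returns the configured node selector. Without the
// nil checks below, p.Affinity.NodeSelector would panic with the same
// "invalid memory address or nil pointer dereference" seen in the logs
// whenever affinity is unset.
func applyPodScheduling(p *PodSpec) string {
	if p == nil || p.Affinity == nil {
		return "" // no scheduling constraints configured
	}
	return p.Affinity.NodeSelector
}

func main() {
	// Omitted affinity no longer panics, it is simply skipped.
	fmt.Println(applyPodScheduling(&PodSpec{}) == "") // prints "true"
	// Configured affinity is still applied.
	fmt.Println(applyPodScheduling(&PodSpec{Affinity: &Affinity{NodeSelector: "ssd"}})) // prints "ssd"
}
```

This matches the general pattern for such operator panics: optional CRD fields map to pointer-typed Go struct fields, and every consumer must guard against nil before following the pointer.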
Attachments
Issue Links
- relates to K8S-2917 "Close nil reference gap coverage on spec.pod.affinity" - Open