Description

TestPauseOperator fails at operator_test.go:76: after the operator is paused, pod test-couchbase-lhc5n-0000 is killed, and the operator is resumed, the test expects a NewMemberAdded event for a replacement member (test-couchbase-lhc5n-0003), but only the original three NewMemberAdded events and the RebalanceStarted event are recorded. Full output:
$ E2E_TEST=TestPauseOperator make test-indv
go test github.com/couchbase/couchbase-operator/test/e2e -run TestPauseOperator \
-v -timeout 60m --race --kubeconfig /Users/mikewied/.kube/config --operator-image \
couchbase/couchbase-operator:v1 --namespace default --deployment-spec \
/Users/mikewied/go/src/github.com/couchbase/couchbase-operator/example/deployment.yaml
INFO[0030] couchbase operator created successfully
INFO[0030] e2e setup successfully
=== RUN TestPauseOperator
--- FAIL: TestPauseOperator (185.38s)
crd_util.go:41: creating couchbase cluster: test-couchbase-lhc5n
util.go:382: 2018-01-21 14:55:08.072272172 -0800 PST m=+30.618708759 waiting size (3), healthy couchbase members: names ([])
util.go:382: 2018-01-21 14:55:18.119723273 -0800 PST m=+40.666159860 waiting size (3), healthy couchbase members: names ([])
util.go:382: 2018-01-21 14:55:28.073316981 -0800 PST m=+50.619753568 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000])
util.go:382: 2018-01-21 14:55:38.073108758 -0800 PST m=+60.619545345 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000])
util.go:382: 2018-01-21 14:55:48.074505338 -0800 PST m=+70.620941925 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000])
util.go:382: 2018-01-21 14:55:58.074134729 -0800 PST m=+80.620571316 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000])
util.go:382: 2018-01-21 14:56:08.073309524 -0800 PST m=+90.619746111 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000])
util.go:382: 2018-01-21 14:56:18.074539509 -0800 PST m=+100.620976096 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000 test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002])
util.go:382: 2018-01-21 14:56:18.078390155 -0800 PST m=+100.624826742 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000 test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002])
operator_test.go:36: Pausing operator...
operator_test.go:46: Killing pod...
util.go:425: Killing pods: [test-couchbase-lhc5n-0000]
util.go:382: 2018-01-21 14:56:23.243268871 -0800 PST m=+105.789705458 waiting size (2), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:56:23.256275692 -0800 PST m=+105.802712279 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:56:33.252621902 -0800 PST m=+115.799058489 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:56:43.253289835 -0800 PST m=+125.799726422 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:56:53.251387339 -0800 PST m=+135.797823926 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:57:03.250154295 -0800 PST m=+145.796590882 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:57:13.250114507 -0800 PST m=+155.796551094 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:57:23.253237239 -0800 PST m=+165.799673826 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:57:33.252694829 -0800 PST m=+175.799131416 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:57:43.253240859 -0800 PST m=+185.799677446 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:57:53.249320621 -0800 PST m=+195.795757208 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
util.go:382: 2018-01-21 14:58:03.252571193 -0800 PST m=+205.799007780 waiting size (3), couchbase pods: names ([test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002]), nodes ([minikube minikube])
operator_test.go:54: Resuming operator...
operator_test.go:59: Waiting for recovery...
util.go:382: 2018-01-21 14:58:03.333948951 -0800 PST m=+205.880385538 waiting size (3), healthy couchbase members: names ([test-couchbase-lhc5n-0000 test-couchbase-lhc5n-0001 test-couchbase-lhc5n-0002])
util.go:382: 2018-01-21 14:58:03.339066225 -0800 PST m=+205.885502812 Cluster Status Conditions: ([{Available True 2018-01-21T22:55:35Z 2018-01-21T22:55:35Z Cluster available } {Balanced True 2018-01-21T22:56:13Z 2018-01-21T22:56:13Z Cluster is balanced Data is equally distributed across all nodes in the cluster}])
util.go:382: 2018-01-21 14:58:03.339181522 -0800 PST m=+205.885618109 available (true), balanced (true)
operator_test.go:76: Expected events to be:
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0000 added to cluster
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0001 added to cluster
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0002 added to cluster
Type: Normal | Reason: RebalanceStarted | Message: A rebalance has been started to balance data across the cluster
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0003 added to cluster
but got:
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0000 added to cluster
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0001 added to cluster
Type: Normal | Reason: NewMemberAdded | Message: New member test-couchbase-lhc5n-0002 added to cluster
Type: Normal | Reason: RebalanceStarted | Message: A rebalance has been started to balance data across the cluster
crd_util.go:73: deleting couchbase cluster: test-couchbase-lhc5n
FAIL
INFO[0300] e2e teardown successfully
exit status 1
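For context, the failing assertion at operator_test.go:76 compares the events recorded for the cluster against an expected ordered sequence. Below is a minimal, hypothetical sketch of that kind of check using plain client-go against the `default` namespace; the actual test uses its own helpers (and presumably filters events by the CouchbaseCluster object), so the paths, namespace, and event-selection logic here are assumptions, not the operator's code.

```go
// Hypothetical sketch: list events in the namespace and compare their
// (Reason, Message) pairs, in order, against the expected sequence from the log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

type expectedEvent struct {
	Reason  string
	Message string
}

func main() {
	// Assumed kubeconfig path and namespace; substitute your own.
	config, err := clientcmd.BuildConfigFromFlags("", "/path/to/kubeconfig")
	if err != nil {
		panic(err)
	}
	client, err := kubernetes.NewForConfig(config)
	if err != nil {
		panic(err)
	}

	// Expected sequence, taken verbatim from the test output above.
	expected := []expectedEvent{
		{"NewMemberAdded", "New member test-couchbase-lhc5n-0000 added to cluster"},
		{"NewMemberAdded", "New member test-couchbase-lhc5n-0001 added to cluster"},
		{"NewMemberAdded", "New member test-couchbase-lhc5n-0002 added to cluster"},
		{"RebalanceStarted", "A rebalance has been started to balance data across the cluster"},
		{"NewMemberAdded", "New member test-couchbase-lhc5n-0003 added to cluster"},
	}

	// Note: a real test would filter by the involved object and sort by timestamp;
	// List order is not guaranteed to be chronological.
	events, err := client.CoreV1().Events("default").List(context.TODO(), metav1.ListOptions{})
	if err != nil {
		panic(err)
	}

	if len(events.Items) < len(expected) {
		fmt.Printf("expected %d events, got %d\n", len(expected), len(events.Items))
		return
	}
	for i, want := range expected {
		got := events.Items[i]
		if got.Reason != want.Reason || got.Message != want.Message {
			fmt.Printf("event %d mismatch: got %s/%q, want %s/%q\n",
				i, got.Reason, got.Message, want.Reason, want.Message)
		}
	}
}
```

In the run above, the first four expected events are present and only the final NewMemberAdded event for test-couchbase-lhc5n-0003 is missing, which matches the cluster reporting the original member names as healthy again after the operator resumes.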