Details
- Type: Bug
- Resolution: Duplicate
- Priority: Major
- Affects Version/s: 1.0.0
- Fix Version/s: None
- Environment: K8s running on Azure AKS
Description
Before persistent volume (PV) support, the operator's behavior was very consistent: say we have pod0, pod1, and pod2; if we lose pod2, we get a new pod named pod3.
In 1.0 the behavior appears to have changed. If we lose a pod, say cb-op-aks-demo-0001, the operator first tries to create a pod with the same name ("cb-op-aks-demo-0001"), then gives up:
time="2018-09-10T20:28:35Z" level=info msg="An auto-failover has taken place" cluster-name=cb-op-aks-demo module=cluster
time="2018-09-10T20:28:36Z" level=info msg="Creating a pod (cb-op-aks-demo-0001) running Couchbase enterprise-5.5.1" cluster-name=cb-op-aks-demo module=cluster
time="2018-09-10T20:30:36Z" level=error msg="node http://cb-op-aks-demo-0001.cb-op-aks-demo.default.svc:8091 could not be recovered: context deadline exceeded" cluster-name=cb-op-aks-demo module=cluster
Then it tries to create a new one:
time="2018-09-10T20:30:36Z" level=info msg="Creating a pod (cb-op-aks-demo-0005) running Couchbase enterprise-5.5.1" cluster-name=cb-op-aks-demo module=cluster
time="2018-09-10T20:34:00Z" level=info msg="added member (cb-op-aks-demo-0005)" cluster-name=cb-op-aks-demo module=cluster
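The naming pattern seen in the logs above (a lost -0001 is eventually replaced by -0005, not by reusing -0001) can be sketched as "highest index ever used plus one". This is a hypothetical illustration of the observed convention, not the operator's actual code; the pod-name prefix and zero-padding width are taken from the log lines:

```python
def next_pod_name(existing, prefix="cb-op-aks-demo", width=4):
    """Pick the next pod name given the names of the surviving pods.

    Assumes names of the form "<prefix>-NNNN"; lost indices are never reused.
    """
    # Extract the numeric suffix from each existing pod name, e.g. "...-0003" -> 3
    indices = [int(name.rsplit("-", 1)[1]) for name in existing]
    # New pods take the highest index seen plus one, zero-padded to `width` digits
    return f"{prefix}-{max(indices) + 1:0{width}d}"
```

For example, with pods 0000 and 0002 through 0004 surviving after losing 0001, this yields "cb-op-aks-demo-0005", matching the log above.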
Is this behavior expected?
If it is, the initial recovery attempt plus the rebalance takes time, and one has to wait quite a while for the cluster to recover.
I will attach detailed logs next.
Attachments
Issue Links
- duplicates: K8S-614 panic while deploying cluster with tls (Resolved)