Couchbase Server / MB-45053

[System Test] Eventing Rebalance failed


Details

    Description

      System Test:

      Eventing handlers deployment:

      [2021-03-17T16:06:53-07:00, sequoiatools/eventing:7.0:a87aaf] eventing_helper.py -i 172.23.105.183 -u Administrator -p password -s default.event_0.coll0 -m ITEM.event_0.coll0 -d dst_bucket.NEW_ORDER.event_0.coll0.rw -t timers -o create --name timers
      [2021-03-17T16:07:01-07:00, sequoiatools/eventing:7.0:dfee66] eventing_helper.py -i 172.23.105.183 -u Administrator -p password -s default.event_0.coll0 -m ITEM.event_0.coll1 -d dst_bucket.NEW_ORDER.event_0.coll1.rw -t n1ql -o create --name n1ql
      [2021-03-17T16:07:08-07:00, sequoiatools/eventing:7.0:dd9a54] eventing_helper.py -i 172.23.105.183 -u Administrator -p password -s WAREHOUSE.event_0.coll0 -m ITEM.event_0.coll2 -d dst_bucket.WAREHOUSE.event_0.coll0.rw -t sbm -o create --name sbm
      [2021-03-17T16:07:17-07:00, sequoiatools/eventing:7.0:ef5686] eventing_helper.py -i 172.23.105.183 -u Administrator -p password -s WAREHOUSE.event_0.coll0 -m ITEM.event_0.coll3 -d dst_bucket.NEW_ORDER.event_0.coll2.rw -t curl -o create --name curl
      [2021-03-17T16:07:25-07:00, sequoiatools/eventing:7.0:b3959c] eventing_helper.py -i 172.23.105.183 -u Administrator -p password -o deploy
      [2021-03-17T16:07:30-07:00, sequoiatools/eventing:7.0:d2d34d] eventing_helper.py -i 172.23.105.183 -u Administrator -p password -o wait_for_state --state deployed
      

      – At this point in time there should be no data in the collections.

      Current step

      [2021-03-17T21:13:59-07:00, sequoiatools/couchbase-cli:7.0:e63846] server-add -c 172.23.104.232:8091 --server-add https://172.23.104.244 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data
      [2021-03-17T21:14:11-07:00, sequoiatools/couchbase-cli:7.0:d5525c] rebalance -c 172.23.104.232:8091 --server-remove 172.23.105.25 -u Administrator -p password
       
      Error occurred on container - sequoiatools/couchbase-cli:7.0:[rebalance -c 172.23.104.232:8091 --server-remove 172.23.105.25 -u Administrator -p password]
       
      docker logs d5525c
      docker start d5525c
       
      *Unable to display progress bar on this os
      JERROR: Rebalance failed. See logs for detailed reason. You can try again.
      

      Rebalance Failed -

      Rebalance exited with reason {service_rebalance_failed,eventing,
      {worker_died,
      {'EXIT',<0.1247.496>,
      {{badmatch,
      {error,
      {bad_nodes,eventing,prepare_rebalance,
      [{'ns_1@172.23.104.214',
      {error,
      {unknown_error,
      <<"Some apps are deploying or resuming on nodeId: d0c98164b79fbb3e57b4808d7c71ef3b Apps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]">>}}}]}}},
      [{service_rebalancer,rebalance_worker,1,
      [{file,"src/service_rebalancer.erl"},
      {line,164}]},
      {proc_lib,init_p,3,
      [{file,"proc_lib.erl"},{line,234}]}]}}}}.
      Rebalance Operation Id = 6c309e9434208e5789108f241ad43a4b
      

      – All 4 handlers above are undeployed.

      Attachments

        Issue Links


          Activity

            ritam.sharma Ritam Sharma created issue -
            ritam.sharma Ritam Sharma made changes - Description edited
            ritam.sharma Ritam Sharma made changes - Description edited
            jeelan.poola Jeelan Poola made changes -
            Assignee Jeelan Poola [ jeelan.poola ] Ankit Prabhu [ ankit.prabhu ]
            ankit.prabhu Ankit Prabhu made changes -
            Attachment eventing_pprof.log [ 131500 ]
            Attachment goroutine5.out [ 131501 ]

            The timers_0 function is stuck in bootstrapping.
            Looking at the eventing logs, it is trying to open a connection to the metadata bucket using gocb, but it is receiving a timeout from the WaitUntilReady operation.

            1793:2021-03-17T16:08:38.945-07:00 [Error] Consumer::gocbConnectMetaBucketCallback [worker_timers_0_0:1] Failed to connect to metadata bucket ITEM (bucket got deleted?) , err: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000155810,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
            

            Looking at the pprof, it appears to be stuck closing the connection, which blocked deployment of the function.

            1 @ 0x93b320 0x90fb48 0x90fb1e 0x90f80b 0xd5ad86 0xd60d4b 0xdd5f42 0xdd75f4 0x1142c06 0xe412ba 0x11312f6 0xe656e3 0x969351
            #       0xd5ad85        github.com/couchbase/gocbcore/v9.(*Agent).Close+0xc5                            /tmp/workspace/toy-unix-simple/godeps/src/github.com/couchbase/gocbcore/v9/agent.go:495
            #       0xd60d4a        github.com/couchbase/gocbcore/v9.(*AgentGroup).Close+0x11a                      /tmp/workspace/toy-unix-simple/godeps/src/github.com/couchbase/gocbcore/v9/agentgroup.go:115
            #       0xdd5f41        github.com/couchbase/gocb/v2.(*stdConnectionMgr).close+0x141                    /tmp/workspace/toy-unix-simple/godeps/src/github.com/couchbase/gocb/v2/client.go:262
            #       0xdd75f3        github.com/couchbase/gocb/v2.(*Cluster).Close+0xf3                              /tmp/workspace/toy-unix-simple/godeps/src/github.com/couchbase/gocb/v2/cluster.go:379
            #       0x1142c05       github.com/couchbase/eventing/consumer.glob..func2+0x4d5                        /tmp/workspace/toy-unix-simple/goproj/src/github.com/couchbase/eventing/consumer/bucket_ops.go:89
            #       0xe412b9        github.com/couchbase/eventing/util.Retry+0x129                                  /tmp/workspace/toy-unix-simple/goproj/src/github.com/couchbase/eventing/util/retry.go:65
            #       0x11312f5       github.com/couchbase/eventing/consumer.(*Consumer).Serve+0x5d5                  /tmp/workspace/toy-unix-simple/goproj/src/github.com/couchbase/eventing/consumer/v8_consumer.go:196
            #       0xe656e2        github.com/couchbase/eventing/suptree.(*Supervisor).runService.func1+0x72       /tmp/workspace/toy-unix-simple/goproj/src/github.com/couchbase/eventing/suptree/supervisor.go:413
            

            There are no more messages after 16:08:38, so it looks like it has been stuck since then.
            https://github.com/couchbase/gocbcore/blob/e48d03a40861100a01753e1277952abaa0bce343/agent.go#L495
            Could someone from the gocb team take a look at why it is stuck closing the connection, and also at the timeout?
            goroutine dump: eventing_pprof.log goroutine5.out

            ankit.prabhu Ankit Prabhu added a comment -

            Seeing this on another cluster - none of the handlers are deployed. Ankit Prabhu - can you please review this set of logs too?

            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.106.134.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.58.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.73.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.74.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.75.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.77.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.81.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.120.86.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.121.77.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.123.24.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.123.25.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.123.26.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.123.31.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.123.32.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.123.33.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.96.122.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.96.243.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.96.254.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.96.48.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.105.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.110.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.112.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.148.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.149.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.150.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.151.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.241.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053/collectinfo-2021-03-18T070059-ns_1%40172.23.97.74.zip

            ritam.sharma Ritam Sharma added a comment -
            ankit.prabhu Ankit Prabhu made changes -
            Assignee Ankit Prabhu [ ankit.prabhu ] Brett Lawson [ brett19 ]

            It is the same issue. On node 172.23.120.58, eventing received an unambiguous timeout from gocb.

            2021-03-17T22:08:20.327-07:00 [Error] util::IsSyncGatewayEnabled OpenBucket failed for bucket: WAREHOUSE, err: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000193958,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
            2021-03-17T22:08:42.181-07:00 [Error] util::IsSyncGatewayEnabled OpenBucket failed for bucket: WAREHOUSE, err: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000186176,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
            

            And from the goroutine dump, gocb is stuck closing the cluster agent.

            1 @ 0x93b320 0x90fb48 0x90fb1e 0x90f80b 0xd5ae66 0xd60e2b 0xdd6022 0xdd76d4 0xe37d3c 0x11b2015 0x11a7d04 0x11be0dd 0xc00ac4 0xc0299d 0xc03f14 0xbff8b5 0x969351
            #       0xd5ae65        github.com/couchbase/gocbcore/v9.(*Agent).Close+0xc5                                    /home/couchbase/jenkins/workspace/couchbase-server-unix/godeps/src/github.com/couchbase/gocbcore/v9/agent.go:495
            #       0xd60e2a        github.com/couchbase/gocbcore/v9.(*AgentGroup).Close+0x11a                              /home/couchbase/jenkins/workspace/couchbase-server-unix/godeps/src/github.com/couchbase/gocbcore/v9/agentgroup.go:115
            #       0xdd6021        github.com/couchbase/gocb/v2.(*stdConnectionMgr).close+0x141                            /home/couchbase/jenkins/workspace/couchbase-server-unix/godeps/src/github.com/couchbase/gocb/v2/client.go:262
            #       0xdd76d3        github.com/couchbase/gocb/v2.(*Cluster).Close+0xf3                                      /home/couchbase/jenkins/workspace/couchbase-server-unix/godeps/src/github.com/couchbase/gocb/v2/cluster.go:379
            #       0xe37d3b        github.com/couchbase/eventing/util.IsSyncGatewayEnabled+0x74b                           /home/couchbase/jenkins/workspace/couchbase-server-unix/goproj/src/github.com/couchbase/eventing/util/bucket_ops.go:205
            #       0x11b2014       github.com/couchbase/eventing/service_manager.(*ServiceMgr).savePrimaryStore+0x2144     /home/couchbase/jenkins/workspace/couchbase-server-unix/goproj/src/github.com/couchbase/eventing/service_manager/http_handlers.go:1912
            #       0x11a7d03       github.com/couchbase/eventing/service_manager.(*ServiceMgr).setSettings+0x1a13          /home/couchbase/jenkins/workspace/couchbase-server-unix/goproj/src/github.com/couchbase/eventing/service_manager/http_handlers.go:1415
            #       0x11be0dc       github.com/couchbase/eventing/service_manager.(*ServiceMgr).functionsHandler+0x533c     /home/couchbase/jenkins/workspace/couchbase-server-unix/goproj/src/github.com/couchbase/eventing/service_manager/http_handlers.go:2963
            #       0xc00ac3        net/http.HandlerFunc.ServeHTTP+0x43                                                     /home/couchbase/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/net/http/server.go:2007
            #       0xc0299c        net/http.(*ServeMux).ServeHTTP+0x1bc                                                    /home/couchbase/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/net/http/server.go:2387
            #       0xc03f13        net/http.serverHandler.ServeHTTP+0xa3                                                   /home/couchbase/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/net/http/server.go:2802
            #       0xbff8b4        net/http.(*conn).serve+0x874                                                            /home/couchbase/.cbdepscache/exploded/x86_64/go-1.13.7/go/src/net/http/server.go:1890
            

            Recently gocb moved from 2.1.5 to 2.2.2 and gocbcore from 9.0.7 to 9.1.3:
            https://github.com/couchbase/manifest/commit/db19fbd7bd7d389213ff82fd026ff55d62dcdeb9

            ankit.prabhu Ankit Prabhu added a comment -

            It's the same issue. On node 172.23.120.58 eventing received an unambiguous timeout from gocb:

            2021-03-17T22:08:20.327-07:00 [Error] util::IsSyncGatewayEnabled OpenBucket failed for bucket: WAREHOUSE, err: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000193958,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
            2021-03-17T22:08:42.181-07:00 [Error] util::IsSyncGatewayEnabled OpenBucket failed for bucket: WAREHOUSE, err: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000186176,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}

            And from the goroutine dump, gocb is stuck closing the cluster agent (see the stack trace above).
            jeelan.poola Jeelan Poola made changes -
            Component/s clients [ 10042 ]
            Component/s eventing [ 14026 ]

            charles.dixon Charles Dixon added a comment -

            I'm trying to piece together what is happening here. Is eventing seeing the WaitUntilReady call fail and then closing the gocb cluster object? Is the Close call then blocking eventing from making any further progress? It's hard to say why the WaitUntilReady call is timing out, as there aren't any logs showing what gocbcore is doing (unless I'm looking in the wrong place). Would it be possible to run with gocb/gocbcore logging set to debug level?

            Regardless of why WaitUntilReady is timing out, the cluster Close call should not block indefinitely. The goroutine dump helps, but logs would likely be needed to work out what gocbcore is doing.
            ankit.prabhu Ankit Prabhu added a comment -

            Charles Dixon, yes, the WaitUntilReady calls failed, which resulted in eventing closing the cluster object (the open will be retried). But this Close call blocked, which stopped eventing from making any further progress (in this case the eventing function is stuck in the bootstrap state):

            1800:2021-03-17T16:08:41.244-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1801:2021-03-17T16:08:41.261-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1802:2021-03-17T16:08:41.271-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1803:2021-03-17T16:08:41.280-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1804:2021-03-17T16:08:41.289-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1806:2021-03-17T16:08:46.307-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1807:2021-03-17T16:08:46.320-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1808:2021-03-17T16:08:46.328-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1809:2021-03-17T16:08:46.335-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1810:2021-03-17T16:08:46.342-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1814:2021-03-17T16:08:51.361-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1815:2021-03-17T16:08:51.373-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1816:2021-03-17T16:08:51.382-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1817:2021-03-17T16:08:51.390-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1818:2021-03-17T16:08:51.399-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1824:2021-03-17T16:08:56.418-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]
            1825:2021-03-17T16:08:56.431-07:00 [Info] SuperSupervisor::BootstrapAppList [1] bootstrappingApps: map[timers_0:2021-03-17 16:07:27.988668711 -0700 PDT m=+2440.940195591]

            2021-03-17T16:08:27.888-07:00 [Info] [gocb] SDK Version: gocbcore/v9.1.3
            2021-03-17T16:08:27.888-07:00 [Info] [gocb] Creating new agent: &{MemdAddrs:[172.23.104.245:11210 172.23.105.102:11210 172.23.105.93:11210 172.23.104.232:11210 172.23.105.86:11210 172.23.105.25:11210 172.23.105.29:11210 172.23.105.90:11210 172.23.105.112:11210 172.23.105.109:11210 172.23.104.165:11210] HTTPAddrs:[172.23.104.245:8091 172.23.105.102:8091 172.23.105.93:8091 172.23.104.232:8091 172.23.105.86:8091 172.23.105.25:8091 172.23.105.29:8091 172.23.105.90:8091 172.23.105.112:8091 172.23.105.109:8091 172.23.104.165:8091] BucketName:ITEM UserAgent:gocb/v2.2.2 UseTLS:false NetworkType: Auth:0xc009a9d5e0 TLSRootCAProvider:0xe1c190 UseMutationTokens:true UseCompression:false UseDurations:true DisableDecompression:false UseOutOfOrderResponses:true DisableXErrors:false DisableJSONHello:false DisableSyncReplicationHello:false UseCollections:true CompressionMinSize:0 CompressionMinRatio:0 HTTPRedialPeriod:0s HTTPRetryDelay:0s CccpMaxWait:0s CccpPollPeriod:0s ConnectTimeout:10s KVConnectTimeout:7s KvPoolSize:0 MaxQueueSize:0 HTTPMaxIdleConns:0 HTTPMaxIdleConnsPerHost:0 HTTPIdleConnectionTimeout:0s Tracer:0xc009a9d2d0 NoRootTraceSpans:true DefaultRetryStrategy:0xc009a9d2b0 CircuitBreakerConfig:{Enabled:true VolumeThreshold:0 ErrorThresholdPercentage:0 SleepWindow:0s RollingWindow:0s CompletionCallback:<nil> CanaryTimeout:0s} UseZombieLogger:true ZombieLoggerInterval:0s ZombieLoggerSampleSize:0 AuthMechanisms:[]}
            2021-03-17T16:08:32.942-07:00 [Error] Consumer::gocbConnectMetaBucketCallback [worker_timers_0_0:1] Failed to connect to metadata bucket ITEM (bucket got deleted?) , err: unambiguous timeout | {"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000133580,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}
            


            charles.dixon Charles Dixon added a comment -

            Looking at the goroutine dump, it looks like one of the connections failed to auth and reattempted the auth request (probably with a different method), and the connection is now waiting for that request to complete, which in turn is preventing the entire agent from shutting down (though the goroutine dump may simply have caught the SDK at this precise moment). Without more detailed logs I can't say whether the response got lost in gocbcore or on the network (it should have timed out if it was lost on the network anyway).

            The WaitUntilReady timing out with a retry reason of only [NOT_READY], and not [NOT_READY, CONNECTION_ERROR], suggests that at least one connection failed to contact the relevant node or somehow became blocked during bootstrap. If the connection was able to contact the node but failed to complete bootstrap (for example, if the bucket doesn't exist at all), we'd see the CONNECTION_ERROR reason. WaitUntilReady by default waits for every connection to become available.
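The retry metadata being discussed is visible in the serialized errors quoted earlier in this ticket. Parsing one of those log lines with a local mirror struct (illustrative; this is not the SDK's own TimeoutError type):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// timeoutInfo mirrors the JSON emitted in the error logs above
// (a local struct for parsing, not gocb's type).
type timeoutInfo struct {
	OperationID   string   `json:"OperationID"`
	TimeObserved  int64    `json:"TimeObserved"`
	RetryReasons  []string `json:"RetryReasons"`
	RetryAttempts uint32   `json:"RetryAttempts"`
}

// sample is one of the serialized errors from the eventing log.
const sample = `{"InnerError":{"InnerError":{"InnerError":{},"Message":"unambiguous timeout"}},"OperationID":"WaitUntilReady","Opaque":"","TimeObserved":5000193958,"RetryReasons":["NOT_READY"],"RetryAttempts":10,"LastDispatchedTo":"","LastDispatchedFrom":"","LastConnectionID":""}`

func parseTimeout(raw string) (timeoutInfo, error) {
	var ti timeoutInfo
	err := json.Unmarshal([]byte(raw), &ti)
	return ti, err
}

func main() {
	ti, err := parseTimeout(sample)
	if err != nil {
		panic(err)
	}
	// Only NOT_READY, no CONNECTION_ERROR: at least one connection
	// never completed bootstrap, per the analysis above.
	fmt.Println(ti.OperationID, ti.RetryReasons, ti.RetryAttempts)
	// Output: WaitUntilReady [NOT_READY] 10
}
```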

            charles.dixon Charles Dixon added a comment -

            Vikas Chaudhary, could you rerun this with debug logging enabled, please?
            charles.dixon Charles Dixon made changes -
            Assignee Brett Lawson [ brett19 ] Vikas Chaudhary [ vikas.chaudhary ]
            ritam.sharma Ritam Sharma added a comment -

            Charles Dixon - can you please help with debugging steps? This is a system test environment that runs on Docker.
            ritam.sharma Ritam Sharma made changes -
            Assignee Vikas Chaudhary [ vikas.chaudhary ] Charles Dixon [ charles.dixon ]

            charles.dixon Charles Dixon added a comment -

            Ritam Sharma, I'm not aware of how to adjust debug levels of server components. Gocb does not provide an environment option for setting debug levels, so it would have to be done from eventing. Ankit Prabhu, can you assist with how to do this?
            charles.dixon Charles Dixon made changes -
            Assignee Charles Dixon [ charles.dixon ] Ankit Prabhu [ ankit.prabhu ]
            mihir.kamdar Mihir Kamdar (Inactive) made changes -
            Priority Critical [ 2 ] Blocker [ 1 ]
            mihir.kamdar Mihir Kamdar (Inactive) made changes -
            Priority Blocker [ 1 ] Test Blocker [ 6 ]
            mihir.kamdar Mihir Kamdar (Inactive) made changes -
            Labels system-test affects-cc-testing system-test

            mihir.kamdar Mihir Kamdar (Inactive) added a comment -

            Marking this as a test blocker, as it prevents the test from proceeding further without manual intervention.
            ankit.prabhu Ankit Prabhu added a comment -

            Vikas Chaudhary, Ritam Sharma: changing the function log level in the settings will change the log level of the SDK.
            ankit.prabhu Ankit Prabhu made changes -
            Assignee Ankit Prabhu [ ankit.prabhu ] Vikas Chaudhary [ vikas.chaudhary ]
            ritam.sharma Ritam Sharma added a comment -

            Log levels are debug now:

            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.106.134.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.58.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.73.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.74.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.75.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.77.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.81.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.120.86.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.121.77.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.123.24.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.123.25.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.123.26.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.123.31.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.123.32.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.123.33.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.96.122.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.96.14.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.96.243.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.96.254.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.96.48.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.105.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.110.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.112.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.148.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.149.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.150.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.151.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.241.zip
            https://cb-jira.s3.us-east-2.amazonaws.com/logs/MB-45053-01/collectinfo-2021-03-21T021758-ns_1%40172.23.97.74.zip

            ritam.sharma Ritam Sharma made changes -
            Assignee Vikas Chaudhary [ vikas.chaudhary ] Ankit Prabhu [ ankit.prabhu ]

            arunkumar Arunkumar Senthilnathan (Inactive) added a comment -

            Latest logs from 7.0.0-4735 run:

            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.106.134.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.106.136.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.58.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.73.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.74.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.75.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.77.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.81.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.120.86.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.123.24.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.123.25.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.123.26.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.123.31.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.123.32.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.123.33.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.96.254.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.96.48.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.110.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.112.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.148.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.149.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.150.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.151.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.241.zip
            url : https://cb-jira.s3.us-east-2.amazonaws.com/logs/systestmon-1616411900/collectinfo-2021-03-22T111822-ns_1%40172.23.97.74.zip

            arunkumar Arunkumar Senthilnathan (Inactive) added a comment - Latest logs from 7.0.0-4735 run: (collectinfo URLs listed above)

            ingenthr Matt Ingenthron added a comment -

            Ankit Prabhu: did you discover anything in looking at the logs to pass along?

            Charles Dixon: can you have a look at this with some priority? Per your request, they've re-run this with more detailed logging.
            ingenthr Matt Ingenthron made changes -
            Assignee: Ankit Prabhu [ankit.prabhu] → Charles Dixon [charles.dixon]

            ankit.prabhu Ankit Prabhu added a comment -

            Matt Ingenthron, I looked at the logs. The logs rolled over, so there are no logs from when it actually got stuck. I am making some changes to the eventing logger so that the logs won't roll over.

            charles.dixon Charles Dixon added a comment -

            Ankit Prabhu, are you also able to move the gocbcore version being used to the latest gocbcore SHA? (https://github.com/couchbase/gocbcore/commit/341fc70dc8ba195416b4dc8f6d8599e391de686b) There were a couple of bugs introduced in 9.1.3 which are now fixed and may have an impact on this, such as https://issues.couchbase.com/browse/GOCBC-1073.

            vikas.chaudhary Vikas Chaudhary added a comment -

            Seeing the same issue with functional tests as well. The weekly run is impacted due to this.
            vikas.chaudhary Vikas Chaudhary made changes -
            Labels: affects-cc-testing system-test → affects-cc-testing functional-test system-test

            prajwal.kirankumar Prajwal Kiran Kumar (Inactive) added a comment -

            Many perf tests have failed, performance degradation has been observed, and function deployment has hung due to this issue.
            prajwal.kirankumar Prajwal‌ Kiran Kumar‌ (Inactive) made changes -
            Labels: affects-cc-testing functional-test system-test → affects-cc-testing functional-test performance system-test
            charles.dixon Charles Dixon added a comment - - edited

            I have just raised https://issues.couchbase.com/browse/GOCBC-1075, which might lead to this behaviour. I'm not sure of the mechanism by which the gocbcore bug would lead to this issue, but the resulting behaviour of the bug, possibly waiting on a channel indefinitely, does appear to match.

            charles.dixon Charles Dixon added a comment -

            I chatted with Ankit Prabhu and he said that he will create a toy build with the latest gocbcore fixes (including the above) and run the system test.
            charles.dixon Charles Dixon made changes -
            Assignee: Charles Dixon [charles.dixon] → Ankit Prabhu [ankit.prabhu]
            raju Raju Suravarjjala added a comment - Ankit Prabhu Any update? cc Jeelan Poola and Ritam Sharma

            build-team Couchbase Build Team added a comment -

            Build Couchbase Server-7.0.0-4807 contains eventing commit 83aa8bf with commit message:
            MB-45053: Upgrade gocb@2.2.2 + fixes

            build-team Couchbase Build Team added a comment -

            Build Couchbase Server-7.0.0-4807 contains eventing commit 50bf671 with commit message:
            MB-45053: Signal bootstrap finish when scope/collection gets deleted during bootstraping
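The commit message above ("Signal bootstrap finish when scope/collection gets deleted during bootstraping") suggests making the completion signal fire unconditionally. A hypothetical Go sketch of that idea (not the actual eventing code): close a done channel exactly once, from whichever path finishes first, so waiters never block forever:

```go
package main

import (
	"fmt"
	"sync"
)

// bootstrapState is an illustrative sketch of the signalling pattern,
// not the eventing implementation: both the success path and the
// "scope/collection deleted" path call finish, and sync.Once makes
// the duplicate call harmless.
type bootstrapState struct {
	done chan struct{}
	once sync.Once
}

func newBootstrapState() *bootstrapState {
	return &bootstrapState{done: make(chan struct{})}
}

// finish closes the done channel exactly once, no matter how many
// code paths report completion.
func (b *bootstrapState) finish() {
	b.once.Do(func() { close(b.done) })
}

func main() {
	b := newBootstrapState()
	b.finish() // e.g. collection deleted mid-bootstrap
	b.finish() // duplicate signal from the normal path is a no-op
	<-b.done   // waiter unblocks instead of hanging
	fmt.Println("bootstrap finished")
}
```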
            jeelan.poola Jeelan Poola added a comment - - edited

            Temporary fixes (not yet released in any official gocb release) have been merged to master, which should help unblock QE. But we cannot resolve the ticket yet, because we need to move the gocb version used by eventing to a released version. Waiting on the same from the SDK Team. Lowering the severity to Critical; please raise it back to Blocker if the issue is still not fixed with build 4807.
            jeelan.poola Jeelan Poola made changes -
            Priority: Test Blocker [6] → Critical [2]
            jeelan.poola Jeelan Poola made changes -
            Assignee: Ankit Prabhu [ankit.prabhu] → Charles Dixon [charles.dixon]
            vikas.chaudhary Vikas Chaudhary added a comment - Jeelan Poola: No EE builds are available: http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/4807/couchbase-server-enterprise-7.0.0-4807-centos7.x86_64.rpm

            build-team Couchbase Build Team added a comment -

            Build couchbase-server-7.0.0-4813 contains eventing commit 83aa8bf with commit message:
            MB-45053: Upgrade gocb@2.2.2 + fixes

            build-team Couchbase Build Team added a comment -

            Build couchbase-server-7.0.0-4813 contains eventing commit 50bf671 with commit message:
            MB-45053: Signal bootstrap finish when scope/collection gets deleted during bootstraping
            brett19 Brett Lawson made changes -
            Assignee: Charles Dixon [charles.dixon] → Jeelan Poola [jeelan.poola]
            jeelan.poola Jeelan Poola added a comment - - edited

            Brett Lawson: any particular reason why this is assigned to me? Do we have a released version of gocb with the fixes in it? The intent is to leave this ticket assigned to the SDK team until we have a released version of gocb with the necessary fixes. Once we have it, please update the same in the eventing go.mod files and resolve it.
            jeelan.poola Jeelan Poola made changes -
            Assignee: Jeelan Poola [jeelan.poola] → Brett Lawson [brett19]

            brett19 Brett Lawson added a comment -

            Hey Jeelan Poola,

            We are waiting on validation from Ankit Prabhu that the fixes Charlie provided have resolved the issue. I accidentally assigned this to you rather than to him.

            Cheers, Brett
            brett19 Brett Lawson made changes -
            Assignee: Brett Lawson [brett19] → Ankit Prabhu [ankit.prabhu]
            ankit.prabhu Ankit Prabhu added a comment -

            Vikas Chaudhary, could you please confirm whether the issue is seen on the latest build or not?
            ankit.prabhu Ankit Prabhu made changes -
            Assignee: Ankit Prabhu [ankit.prabhu] → Vikas Chaudhary [vikas.chaudhary]
            ritam.sharma Ritam Sharma added a comment -

            Ankit Prabhu - Cannot confirm on the latest build, since there are blocker issues with eventing.
            ritam.sharma Ritam Sharma made changes -
            Assignee: Vikas Chaudhary [vikas.chaudhary] → Ankit Prabhu [ankit.prabhu]

            lynn.straus Lynn Straus made changes -
            Assignee: Ankit Prabhu [ankit.prabhu] → Pablo Silberkasten [JIRAUSER25235]

            arunkumar Arunkumar Senthilnathan (Inactive) added a comment - - edited

            As per discussion with Vikas Chaudhary, this can be closed out if longevity runs with eventing for 2 iterations and does not hit this issue. Currently longevity is blocked on MB-45459.
            arunkumar Arunkumar Senthilnathan (Inactive) made changes -
            Assignee: Pablo Silberkasten [JIRAUSER25235] → Arunkumar Senthilnathan [arunkumar]

            build-team Couchbase Build Team added a comment -

            Build couchbase-server-7.0.0-4881 contains eventing commit 1ea70cd with commit message:
            MB-45053: Revert gocb to 2.1.5
            arunkumar Arunkumar Senthilnathan (Inactive) added a comment - - edited Running with 7.0.0-4910: http://172.23.109.231/job/centos-systest-launcher/2362/console

            arunkumar Arunkumar Senthilnathan (Inactive) added a comment -

            Issue not seen in the run with 7.0.0-4910.
            arunkumar Arunkumar Senthilnathan (Inactive) made changes -
            Resolution: Fixed [1]
            Status: Open [1] → Closed [6]
            james.lee James Lee made changes -
            Link This issue relates to MB-45722 [ MB-45722 ]

            build-team Couchbase Build Team added a comment -

            Build couchbase-server-7.0.0-4964 contains eventing commit efcb129 with commit message:
            Revert "MB-45053: Upgrade gocb@2.2.2 + fixes"
            lynn.straus Lynn Straus made changes -
            Fix Version/s 7.0.0 [ 17233 ]
            lynn.straus Lynn Straus made changes -
            Fix Version/s Cheshire-Cat [ 15915 ]

            People

              arunkumar Arunkumar Senthilnathan (Inactive)
              ritam.sharma Ritam Sharma