Couchbase Server
MB-46067

[Collections] - Rebalance in fails with Rebalance exited with reason {buckets_cleanup_failed,


Details

    • Untriaged
    • Centos 64-bit
    • 1
    • Yes

    Description

      Script to Repro

      guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops.ini rerun=False,get-cbcollect-info=True,quota_percent=95,crash_warning=True,GROUP=rebalance_with_collection_crud_durability_MAJORITY_AND_PERSIST_TO_ACTIVE,rerun=False -t bucket_collections.collections_rebalance.CollectionsRebalance.test_rebalance_cycles,nodes_init=4,nodes_in=2,durability=MAJORITY_AND_PERSIST_TO_ACTIVE,replicas=2,bucket_spec=single_bucket.default,num_items=10000,bulk_api_crud=True,GROUP=rebalance_with_collection_crud_durability_MAJORITY_AND_PERSIST_TO_ACTIVE'
      

      Steps to Repro
      1. Create a 4 node cluster
      2021-05-02 23:56:37,022 | test | INFO | pool-7-thread-6 | [table_view:display:72] Rebalance Overview
      +----------------+----------+-----------------------+---------------+--------------+
      | Nodes          | Services | Version               | CPU           | Status       |
      +----------------+----------+-----------------------+---------------+--------------+
      | 172.23.98.196  | kv       | 7.0.0-5085-enterprise | 6.98260650366 | Cluster node |
      | 172.23.98.195  | None     |                       |               | <--- IN ---  |
      | 172.23.121.10  | None     |                       |               | <--- IN ---  |
      | 172.23.104.186 | None     |                       |               | <--- IN ---  |
      +----------------+----------+-----------------------+---------------+--------------+

      2. Create bucket/scope/collections/data.
      2021-05-02 23:57:55,855 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
      +--------------+-----------+----------+------------+-----+-------+-------------+-----------+-----------+
      | Bucket       | Type      | Replicas | Durability | TTL | Items | RAM Quota   | RAM Used  | Disk Used |
      +--------------+-----------+----------+------------+-----+-------+-------------+-----------+-----------+
      | VG-52-682000 | couchbase | 2        | none       | 0   | 10000 | 10825498624 | 103923744 | 178484619 |
      +--------------+-----------+----------+------------+-----+-------+-------------+-----------+-----------+
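
      For reference, step 2 boils down to calls against the cluster-management REST API. The sketch below is a minimal illustration only: the host, credentials, RAM quota and the scope/collection names are placeholders, not values taken from this run.

      import requests

      HOST = "http://172.23.98.196:8091"   # orchestrator node; port and credentials are placeholders
      AUTH = ("Administrator", "password")

      # Create the bucket (2 replicas, as in the test configuration).
      requests.post(f"{HOST}/pools/default/buckets", auth=AUTH, data={
          "name": "VG-52-682000",
          "bucketType": "couchbase",
          "ramQuotaMB": 1024,
          "replicaNumber": 2,
      }).raise_for_status()

      # Create a scope and a collection inside it.
      requests.post(f"{HOST}/pools/default/buckets/VG-52-682000/scopes",
                    auth=AUTH, data={"name": "scope_1"}).raise_for_status()
      requests.post(f"{HOST}/pools/default/buckets/VG-52-682000/scopes/scope_1/collections",
                    auth=AUTH, data={"name": "collection_1"}).raise_for_status()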

      3. Start CRUD on collections + durability data load.

      4. Start a rebalance in
      2021-05-02 23:58:10,523 | test | INFO | pool-7-thread-17 | [table_view:display:72] Rebalance Overview
      +----------------+----------+-----------------------+---------------+--------------+
      | Nodes          | Services | Version               | CPU           | Status       |
      +----------------+----------+-----------------------+---------------+--------------+
      | 172.23.98.196  | kv       | 7.0.0-5085-enterprise | 9.64402928553 | Cluster node |
      | 172.23.98.195  | kv       | 7.0.0-5085-enterprise | 17.5447441391 | Cluster node |
      | 172.23.104.186 | kv       | 7.0.0-5085-enterprise | 10.2467270896 | Cluster node |
      | 172.23.121.10  | kv       | 7.0.0-5085-enterprise | 11.4379913771 | Cluster node |
      | 172.23.120.206 | None     |                       |               | <--- IN ---  |
      +----------------+----------+-----------------------+---------------+--------------+
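
      For reference, the rebalance-in in step 4 maps onto two REST calls: add the incoming node, then start the rebalance with all known nodes kept. This is a minimal sketch; the host, credentials and service list are placeholders, not values captured from this run.

      import requests

      HOST = "http://172.23.98.196:8091"
      AUTH = ("Administrator", "password")

      # Add the incoming node (172.23.120.206) to the cluster.
      requests.post(f"{HOST}/controller/addNode", auth=AUTH, data={
          "hostname": "172.23.120.206",
          "user": "Administrator",
          "password": "password",
          "services": "kv",
      }).raise_for_status()

      # Start the rebalance, keeping every known node and ejecting none.
      otp_nodes = [n["otpNode"] for n in
                   requests.get(f"{HOST}/pools/default", auth=AUTH).json()["nodes"]]
      requests.post(f"{HOST}/controller/rebalance", auth=AUTH, data={
          "knownNodes": ",".join(otp_nodes),
          "ejectedNodes": "",
      }).raise_for_status()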

      Rebalance in fails as shown below.

      2021-05-02 23:58:30,848 | test  | ERROR   | pool-7-thread-17 | [rest_client:_rebalance_status_and_progress:1510] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'072936ba1c3c193c7d8610c56757bedb', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=1ae5320e1248720433ca8b05e521ff96', u'status': u'notRunning'} - rebalance failed
      2021-05-02 23:58:31,163 | test  | INFO    | pool-7-thread-17 | [rest_client:print_UI_logs:2611] Latest logs from UI on 172.23.98.196:
      2021-05-02 23:58:31,164 | test  | ERROR   | pool-7-thread-17 | [rest_client:print_UI_logs:2613] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.98.196', u'tstamp': 1620025105878L, u'shortText': u'message', u'serverTime': u'2021-05-02T23:58:25.878Z', u'text': u"Rebalance exited with reason {buckets_cleanup_failed,['ns_1@172.23.104.186']}.\nRebalance Operation Id = 0c7edaeac725974d21c90107c8baf059"}
      

      This is not consistently reproducible; I have tried running it many times with no luck reproducing it so far.
      This was not seen on the last weekly run we had on 7.0.0-5017.

      cbcollect_info attached.
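
      For anyone monitoring a reproduction attempt: the failure status quoted above (status, errorMessage, lastReportURI) matches the rebalance task entry returned by /pools/default/tasks. A minimal polling sketch follows; the host and credentials are placeholders.

      import time
      import requests

      HOST = "http://172.23.98.196:8091"
      AUTH = ("Administrator", "password")

      while True:
          tasks = requests.get(f"{HOST}/pools/default/tasks", auth=AUTH).json()
          rebalance = next(t for t in tasks if t["type"] == "rebalance")
          if rebalance["status"] == "running":
              print("progress:", rebalance.get("progress"))
              time.sleep(5)
              continue
          # Once the rebalance stops, errorMessage and lastReportURI carry the
          # details shown in the rest_client log above.
          print(rebalance.get("errorMessage"), rebalance.get("lastReportURI"))
          break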


          Activity

            dfinlay Dave Finlay added a comment -

            Dupe of MB-46099. Will keep that ticket as it's the most recent, and also interesting as it's being seen on the 130 node cluster.

            owend Daniel Owen added a comment -

            Looks to be a duplicate of MB-45594.

            Will let ns_server confirm.

            owend Daniel Owen added a comment -

            ns_server does not appear to be deleting the vbucket DBs (like it did earlier):

            [ns_server:info,2021-05-02T22:03:00.094-07:00,ns_1@172.23.104.186:ns_memcached-bucket1<0.1215.2>:ns_memcached:delete_bucket:838]Deleting bucket "bucket1" from memcached (force = true)
            [ns_server:debug,2021-05-02T22:03:00.136-07:00,ns_1@172.23.104.186:ns_memcached-bucket1<0.1215.2>:ns_memcached:delete_bucket:849]Proceeding into vbuckets dbs deletions
            [ns_server:info,2021-05-02T22:03:02.035-07:00,ns_1@172.23.104.186:ns_memcached-bucket2<0.20069.2>:ns_memcached:delete_bucket:838]Deleting bucket "bucket2" from memcached (force = true)
            [ns_server:debug,2021-05-02T22:03:02.104-07:00,ns_1@172.23.104.186:ns_memcached-bucket2<0.20069.2>:ns_memcached:delete_bucket:849]Proceeding into vbuckets dbs deletions
            [ns_server:info,2021-05-02T22:03:03.926-07:00,ns_1@172.23.104.186:ns_memcached-default<0.6424.3>:ns_memcached:delete_bucket:838]Deleting bucket "default" from memcached (force = true)
            [ns_server:debug,2021-05-02T22:03:03.974-07:00,ns_1@172.23.104.186:ns_memcached-default<0.6424.3>:ns_memcached:delete_bucket:849]Proceeding into vbuckets dbs deletions
            [ns_server:info,2021-05-02T22:23:34.322-07:00,ns_1@172.23.104.186:ns_memcached-bucket1<0.5937.0>:ns_memcached:delete_bucket:838]Deleting bucket "bucket1" from memcached (force = true)
            [ns_server:info,2021-05-02T22:24:57.189-07:00,ns_1@172.23.104.186:ns_memcached-bucket2<0.5211.0>:ns_memcached:delete_bucket:838]Deleting bucket "bucket2" from memcached (force = true)
            [ns_server:info,2021-05-02T22:26:25.273-07:00,ns_1@172.23.104.186:ns_memcached-default<0.6875.0>:ns_memcached:delete_bucket:838]Deleting bucket "default" from memcached (force = true)
            [ns_server:info,2021-05-02T23:07:15.524-07:00,ns_1@172.23.104.186:ns_memcached-bucket1<0.5296.0>:ns_memcached:delete_bucket:838]Deleting bucket "bucket1" from memcached (force = true)
            [ns_server:info,2021-05-02T23:08:27.468-07:00,ns_1@172.23.104.186:ns_memcached-bucket2<0.4763.0>:ns_memcached:delete_bucket:838]Deleting bucket "bucket2" from memcached (force = true)
            [ns_server:info,2021-05-02T23:09:46.088-07:00,ns_1@172.23.104.186:ns_memcached-default<0.6301.0>:ns_memcached:delete_bucket:838]Deleting bucket "default" from memcached (force = true)
            [ns_server:info,2021-05-02T23:38:49.935-07:00,ns_1@172.23.104.186:ns_memcached-bucket1<0.7110.0>:ns_memcached:delete_bucket:838]Deleting bucket "bucket1" from memcached (force = true)
            [ns_server:info,2021-05-02T23:39:32.331-07:00,ns_1@172.23.104.186:ns_memcached-bucket2<0.4978.0>:ns_memcached:delete_bucket:838]Deleting bucket "bucket2" from memcached (force = true)
            [ns_server:info,2021-05-02T23:40:20.027-07:00,ns_1@172.23.104.186:ns_memcached-default<0.5606.0>:ns_memcached:delete_bucket:838]Deleting bucket "default" from memcached (force = true)
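
            A quick way to check this across the full ns_server.debug.log is to pair each "Deleting bucket ... from memcached" entry with a matching "Proceeding into vbuckets dbs deletions" entry from the same ns_memcached process. A minimal sketch (the log path is a placeholder):

            import re

            DELETING = re.compile(r'ns_memcached-(\S+?)<([^>]+)>.*Deleting bucket')
            PROCEEDING = re.compile(r'ns_memcached-(\S+?)<([^>]+)>.*Proceeding into vbuckets dbs deletions')

            pending = {}
            with open("ns_server.debug.log") as log:
                for line in log:
                    m = DELETING.search(line)
                    if m:
                        pending[(m.group(1), m.group(2))] = line.strip()
                        continue
                    m = PROCEEDING.search(line)
                    if m:
                        pending.pop((m.group(1), m.group(2)), None)

            # Whatever is left never reached the vbucket-db deletion step.
            for entry in pending.values():
                print("no vbucket deletion follow-up for:", entry)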
            
            

            owend Daniel Owen added a comment -

            On node 172.23.104.186, in the memcached.log, we see bucket1, bucket2 and default all get deleted successfully:

            2021-05-02T23:38:49.936165-07:00 INFO 68: Delete bucket [bucket1]. Notifying engine
            2021-05-02T23:38:49.937503-07:00 INFO 68: Delete bucket [bucket1]. Engine ready for shutdown
            2021-05-02T23:38:49.937535-07:00 INFO 68: Delete bucket [bucket1]. Wait for 254 clients to disconnect
            2021-05-02T23:38:49.952349-07:00 INFO 68: Delete bucket [bucket1]. Shut down the bucket
            2021-05-02T23:38:50.003467-07:00 INFO (bucket1) Deleted KvBucket.
            2021-05-02T23:38:50.004662-07:00 INFO (No Engine) Deleted dcpConnMap_
            2021-05-02T23:38:50.005022-07:00 INFO 68: Delete bucket [bucket1]. Clean up allocated resources 
            2021-05-02T23:38:50.005103-07:00 INFO 68: Delete bucket [bucket1] complete
            2021-05-02T23:39:32.335783-07:00 INFO 47: Delete bucket [bucket2]. Notifying engine
            2021-05-02T23:39:32.342287-07:00 INFO 47: Delete bucket [bucket2]. Engine ready for shutdown
            2021-05-02T23:39:32.342329-07:00 INFO 47: Delete bucket [bucket2]. Wait for 254 clients to disconnect
            2021-05-02T23:39:32.360813-07:00 INFO 47: Delete bucket [bucket2]. Shut down the bucket
            2021-05-02T23:39:32.372822-07:00 INFO (bucket2) Deleted KvBucket.
            2021-05-02T23:39:32.373596-07:00 INFO (No Engine) Deleted dcpConnMap_
            2021-05-02T23:39:32.373986-07:00 INFO 47: Delete bucket [bucket2]. Clean up allocated resources 
            2021-05-02T23:39:32.374037-07:00 INFO 47: Delete bucket [bucket2] complete
            2021-05-02T23:40:20.028033-07:00 INFO 56: Delete bucket [default]. Notifying engine
            2021-05-02T23:40:20.028488-07:00 INFO 56: Delete bucket [default]. Engine ready for shutdown
            2021-05-02T23:40:20.028504-07:00 INFO 56: Delete bucket [default]. Wait for 254 clients to disconnect
            2021-05-02T23:40:20.041362-07:00 INFO 56: Delete bucket [default]. Shut down the bucket
            2021-05-02T23:40:20.088584-07:00 INFO (default) Deleted KvBucket.
            2021-05-02T23:40:20.089495-07:00 INFO (No Engine) Deleted dcpConnMap_
            2021-05-02T23:40:20.095335-07:00 INFO 56: Delete bucket [default]. Clean up allocated resources 
            2021-05-02T23:40:20.095429-07:00 INFO 56: Delete bucket [default] complete
            

            However, in the ns_server.debug.log we see the following error reported:

            [ns_server:info,2021-05-02T23:58:25.879-07:00,ns_1@172.23.104.186:rebalance_agent<0.4030.0>:rebalance_agent:handle_down:290]Rebalancer process <22113.15060.0> died (reason {buckets_cleanup_failed,
                                                             ['ns_1@172.23.104.186']}).
            

            On 172.23.98.196 we also see

            [rebalance:error,2021-05-02T23:58:25.876-07:00,ns_1@172.23.98.196:<0.15060.0>:ns_rebalancer:maybe_cleanup_old_buckets:941]Failed to cleanup old buckets on node 'ns_1@172.23.104.186': {badrpc,
                                                                          {'EXIT',
                                                                           timeout}}
            

            owend Daniel Owen added a comment -

            On node 172.23.98.196 we see the rebalance start and fail

            [user:info,2021-05-02T23:58:10.455-07:00,ns_1@172.23.98.196:<0.806.0>:ns_orchestrator:idle:773]Starting rebalance, KeepNodes = ['ns_1@172.23.98.196','ns_1@172.23.98.195',
                                             'ns_1@172.23.104.186','ns_1@172.23.120.206',
                                             'ns_1@172.23.121.10'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 0c7edaeac725974d21c90107c8baf059
            ..
            ..
            ..
            [user:error,2021-05-02T23:58:25.878-07:00,ns_1@172.23.98.196:<0.806.0>:ns_orchestrator:log_rebalance_completion:1405]Rebalance exited with reason {buckets_cleanup_failed,['ns_1@172.23.104.186']}.
            Rebalance Operation Id = 0c7edaeac725974d21c90107c8baf059
            [ns_server:debug,2021-05-02T23:58:25.879-07:00,ns_1@172.23.98.196:<0.806.0>:auto_rebalance:retry_rebalance:58]Retry rebalance is not enabled. Failed Rebalance with Id 0c7edaeac725974d21c90107c8baf059 will not be retried.
            

            ritam.sharma Ritam Sharma added a comment - edited

            At the time of the failure the test was mostly deleting the collections:

            2021-05-02 23:58:10,539 | test  | INFO    | pool-7-thread-17 | [task:check:355] Rebalance - status: running, progress: 0.0
            *2021-05-02 23:58:11,003 | test  | INFO    | MainThread | [common_lib:sleep:22] Sleep 10 seconds. Reason: wait before dropping collections using bulk api*
            2021-05-02 23:58:15,638 | test  | INFO    | pool-7-thread-17 | [task:check:355] Rebalance - status: running, progress: 0.0
            2021-05-02 23:58:20,710 | test  | INFO    | pool-7-thread-17 | [task:check:355] Rebalance - status: running, progress: 0.0
            2021-05-02 23:58:25,783 | test  | INFO    | pool-7-thread-17 | [task:check:355] Rebalance - status: running, progress: 0.0
            2021-05-02 23:58:30,848 | test  | ERROR   | pool-7-thread-17 | [rest_client:_rebalance_status_and_progress:1510] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'072936ba1c3c193c7d8610c56757bedb', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=1ae5320e1248720433ca8b05e521ff96', u'status': u'notRunning'} - rebalance failed
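
            For context, a collection drop can be issued against the collections-management REST endpoint on the server; the test's bulk API presumably does something equivalent, many times over, while the rebalance is running. A minimal sketch of a single drop (host, credentials and names are placeholders):

            import requests

            HOST = "http://172.23.98.196:8091"
            AUTH = ("Administrator", "password")

            def drop_collection(bucket, scope, collection):
                url = f"{HOST}/pools/default/buckets/{bucket}/scopes/{scope}/collections/{collection}"
                requests.delete(url, auth=AUTH).raise_for_status()

            drop_collection("VG-52-682000", "scope_1", "collection_1")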
            


            People

              dfinlay Dave Finlay
              Balakumaran.Gopal Balakumaran Gopal