Couchbase Server / MB-48215

[ARM] backup service related rebalance failure observed in 4 node sanity


    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Not a Bug
    • Affects Version/s: Neo
    • Fix Version/s: Neo
    • Component/s: test-execution
    • Labels:
    • Triage: Untriaged
    • Story Points: 1
    • Is this a Regression?: No

      Description

      7.1.0-1190

      Test:
      ./testrunner -i test_sanity.ini -p use_hostnames=true,get-cbcollect-info=True -t ent_backup_restore.enterprise_backup_restore_test.EnterpriseBackupRestoreTest.test_backup_restore_sanity,items=1000,reset_services=True

      diag.log:

      2021-08-27T16:24:20.936Z, ns_orchestrator:0:critical:message(ns_1@172.31.30.96) - Rebalance exited with reason {service_rebalance_failed,backup,
                                    {agent_died,<30280.5746.5>,
                                     {linked_process_died,<30280.7613.5>,
                                      {'ns_1@ec2-34-221-216-74.us-west-2.compute.amazonaws.com',
                                       {{badmatch,
                                         {false,
                                          {topology,[],
                                           [<<"66dd46e037188b3db715618d7a28d33b">>,
                                            <<"7f1435314ca7bde12749503dd0f06fd2">>],
                                           true,[]},
                                          {topology,[],
                                           [<<"66dd46e037188b3db715618d7a28d33b">>],
                                           true,[]}}},
                                        [{service_agent,long_poll_worker_loop,5,
                                          [{file,"src/service_agent.erl"},
                                           {line,654}]},
                                         {proc_lib,init_p,3,
                                          [{file,"proc_lib.erl"},{line,234}]}]}}}}}.
      Rebalance Operation Id = 93f13a15e36925d125627dc2b3dcc372
      

      backup service log:

      2021-08-27T16:23:50.929Z INFO (Rebalance) Starting rebalance {"change": {"id":"3b7e8e31dd08bef5e2e8c8b6cfa2e32b","currentTopologyRev":null,"type":"topology-change-rebalance","keepNodes":[{"nodeInfo":{"nodeId":"66dd46e037188b3db715618d7a28d33b","priority":2,"opaque":{"grpc_port":9124,"host":"172.31.30.96","http_port":8097}},"recoveryType":"recovery-full"}],"ejectNodes":[{"nodeId":"7f1435314ca7bde12749503dd0f06fd2","priority":1,"opaque":{"grpc_port":9124,"host":"ec2-34-221-216-74.us-west-2.compute.amazonaws.com","http_port":8097}}]}}
      2021-08-27T16:23:50.931Z INFO (Rebalance) Got old leader {"leader": "66dd46e037188b3db715618d7a28d33b"}
      2021-08-27T16:23:50.933Z INFO (Rebalance) Got current nodes {"#nodes": 2}
      2021-08-27T16:23:50.933Z INFO (Rebalance) Setting self as leader
      2021-08-27T16:23:50.933Z INFO (Rebalance) Did the failover nodes will do eject node now {"eject nodes": [{"nodeId":"7f1435314ca7bde12749503dd0f06fd2","priority":1,"opaque":{"grpc_port":9124,"host":"ec2-34-221-216-74.us-west-2.compute.amazonaws.com","http_port":8097}}]}
      2021-08-27T16:23:50.934Z INFO (Rebalance) Removing node from service {"nodeID": "7f1435314ca7bde12749503dd0f06fd2"}
      2021-08-27T16:23:50.934Z DEBUG (Leader Manager) Received store event {"eventType": 2}
      2021-08-27T16:24:10.939Z DEBUG (Rebalance) Failed to establish connection with remove node {"nodeID": "7f1435314ca7bde12749503dd0f06fd2", "err": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 172.31.34.137:9124: i/o timeout\""}
      2021-08-27T16:24:20.938Z INFO (Service Manager) Cancel task {"id": "rebalance/3b7e8e31dd08bef5e2e8c8b6cfa2e32b"}
      2021-08-27T16:24:20.938Z INFO (Rebalance) Cancelling rebalance
      2021-08-27T16:24:31.940Z DEBUG (Rebalance) Failed to establish connection with remove node {"nodeID": "7f1435314ca7bde12749503dd0f06fd2", "err": "rpc error: code = Unavailable desc = connection error: desc = \"transport: Error while dialing dial tcp 172.31.34.137:9124: i/o timeout\""}
      2021-08-27T16:24:31.940Z ERROR (Rebalance) Could not confirm node was removed {"nodeID": "7f1435314ca7bde12749503dd0f06fd2", "err": "could not remove node '7f1435314ca7bde12749503dd0f06fd2': operation was cancelled"}
      2021-08-27T16:27:34.780Z INFO (Stats) Start repositories data size collection
      2021-08-27T16:27:34.781Z INFO (Stats) Stop repositories data size collection
      2021-08-27T16:32:34.783Z INFO (Stats) Start repositories data size collection
      2021-08-27T16:32:34.784Z INFO (Stats) Stop repositories data size collection
      


            Activity

            James Lee added a comment -

            This is an interesting issue; I'm not 100% sure what's going on yet, but I'll highlight a few things worth noting.

            The first node (172.31.30.96) has been restarted a few times:

            Starting node

            2021-08-27T16:21:41.064Z INFO (Main) Running node version backup-7.1.0-1190- with options: -http-port=8097 -grpc-port=9124 -https-port=18097 -cert-path=/opt/couchbase/var/lib/couchbase/config/legacy_cert.pem -key-path=/opt/couchbase/var/lib/couchbase/config/pkey.pem -ipv4=required -ipv6=optional -cbm=/opt/couchbase/bin/cbbackupmgr -node-uuid=40683928fc70bfab73f51dc74cc31af6 -public-address=ec2-54-68-236-52.us-west-2.compute.amazonaws.com -admin-port=8091 -log-file=none -log-level=debug -integrated-mode -integrated-mode-host=http://127.0.0.1:8091 -secure-integrated-mode-host=https://127.0.0.1:18091 -integrated-mode-user=@backup -default-collect-logs-path=/opt/couchbase/var/lib/couchbase/tmp -cbauth-host=127.0.0.1:8091
            ...
            2021-08-27T16:22:14.847Z INFO (Main) Running node version backup-7.1.0-1190- with options: -http-port=8097 -grpc-port=9124 -https-port=18097 -cert-path=/opt/couchbase/var/lib/couchbase/config/legacy_cert.pem -key-path=/opt/couchbase/var/lib/couchbase/config/pkey.pem -ipv4=required -ipv6=optional -cbm=/opt/couchbase/bin/cbbackupmgr -node-uuid=66dd46e037188b3db715618d7a28d33b -public-address=ec2-54-68-236-52.us-west-2.compute.amazonaws.com -admin-port=8091 -log-file=none -log-level=debug -integrated-mode -integrated-mode-host=http://127.0.0.1:8091 -secure-integrated-mode-host=https://127.0.0.1:18091 -integrated-mode-user=@backup -default-collect-logs-path=/opt/couchbase/var/lib/couchbase/tmp -cbauth-host=127.0.0.1:8091
            ...
            2021-08-27T16:22:34.750Z INFO (Main) Running node version backup-7.1.0-1190- with options: -http-port=8097 -grpc-port=9124 -https-port=18097 -cert-path=/opt/couchbase/var/lib/couchbase/config/legacy_cert.pem -key-path=/opt/couchbase/var/lib/couchbase/config/pkey.pem -ipv4=required -ipv6=optional -cbm=/opt/couchbase/bin/cbbackupmgr -node-uuid=66dd46e037188b3db715618d7a28d33b -public-address=172.31.30.96 -admin-port=8091 -log-file=none -log-level=debug -integrated-mode -integrated-mode-host=http://127.0.0.1:8091 -secure-integrated-mode-host=https://127.0.0.1:18091 -integrated-mode-user=@backup -default-collect-logs-path=/opt/couchbase/var/lib/couchbase/tmp -cbauth-host=127.0.0.1:8091
            

            Interestingly:
            1) The node's public address changes the last time the node is started ('ec2-54-68-236-52.us-west-2.compute.amazonaws.com' -> '172.31.30.96')
            2) The node-uuid changes from the first to the second run ('40683928fc70bfab73f51dc74cc31af6' -> '66dd46e037188b3db715618d7a28d33b')

            At the time of the failure(s), this node ('66dd46e037188b3db715618d7a28d33b') appears to be the leader.

            Leader status

            2021-08-27T16:22:34.779Z INFO (Leader Manager) Stepped up as leader
            

            To give a basic timeline, it looks like:
            1) The node is (re)started
            2) Node steps up as the leader
            3) Node gets a topology change indicating the addition of a new node ('7f1435314ca7bde12749503dd0f06fd2')
            4) The leader times out when connecting to the new node
            5a) The leader begins retrying (this attempt also fails due to a timeout - which by default is 20s)
            5b) The rebalance is cancelled
            6) Rebalance is retriggered (this time we're removing the other node)
            7) The leader fails to connect to the other node (again due to a timeout)

            I'm not sure this is necessarily an issue with the rebalance logic; it's perhaps more about how it behaves when the leader can't connect to the nodes it's told to use (by 'ns_server').
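
            gRPC dial timeout (illustrative sketch)

            To make that failure mode concrete, here is a minimal Go sketch of a blocking gRPC dial with a ~20s deadline (the per-attempt timeout mentioned in the timeline above). This is not the 'cbbs' implementation; the target address and port are simply taken from the rebalance log, and everything else is illustrative.

            package main

            import (
                "context"
                "log"
                "time"

                "google.golang.org/grpc"
                "google.golang.org/grpc/credentials/insecure"
            )

            func main() {
                // The node the leader is trying to reach, from the rebalance log above.
                target := "172.31.34.137:9124"

                // Roughly mirror the 20s per-attempt timeout mentioned in the timeline.
                ctx, cancel := context.WithTimeout(context.Background(), 20*time.Second)
                defer cancel()

                // grpc.WithBlock makes DialContext wait until the connection is
                // established, so an unreachable or blocked port surfaces here as a
                // deadline error rather than failing lazily on the first RPC.
                conn, err := grpc.DialContext(ctx, target,
                    grpc.WithTransportCredentials(insecure.NewCredentials()),
                    grpc.WithBlock(),
                )
                if err != nil {
                    // With a blocked port this is where a "transport: Error while
                    // dialing dial tcp ...: i/o timeout"-style failure shows up.
                    log.Fatalf("could not connect to %s: %v", target, err)
                }
                defer conn.Close()
                log.Printf("connected to %s", target)
            }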

            Unfortunately, the logs for the second backup node aren't useful at all because they've been rotated; this is due to the storage observer spamming the logs with errors.

            MetaKV error

            2021-08-27T16:34:45.243Z WARN (Storage) MetaKV observer stopped {"err": "Get \"http://127.0.0.1:8091/_metakv/cbbs/?feed=continuous\": CBAuth database is stale: last reason: dial tcp 127.0.0.1:8091: connect: connection refused"}
            2021-08-27T16:34:45.243Z INFO (Storage) Starting MetaKV observer
            

            It's quite possible that this has something to do with the timeouts we're seeing; however, at a cursory glance, I don't think that should be the case: the GRPC manager should be started before the leader component (i.e. the node should still be responsive to GRPC requests).
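
            MetaKV observer loop (illustrative sketch)

            For context, a continuous metakv feed is just a long-lived HTTP GET that the observer re-establishes whenever it drops. The rough observe/restart pattern implied by the two log lines above looks something like the Go sketch below; this is not the actual 'cbbs' Storage component (which goes through cbauth), only an illustration of why an unreachable ns_server turns into a steady stream of "stopped"/"Starting" messages.

            package main

            import (
                "bufio"
                "log"
                "net/http"
                "time"
            )

            // observeMetaKV opens the continuous feed, streams change notifications
            // line by line, and restarts the feed whenever the connection drops. If
            // nothing is listening on 127.0.0.1:8091 (as on the second node here),
            // every attempt fails straight away and the restart messages pile up.
            func observeMetaKV(url string) {
                for {
                    log.Println("Starting MetaKV observer")
                    resp, err := http.Get(url)
                    if err != nil {
                        log.Printf("MetaKV observer stopped: %v", err)
                        time.Sleep(time.Second) // back off before retrying
                        continue
                    }

                    scanner := bufio.NewScanner(resp.Body)
                    for scanner.Scan() {
                        log.Printf("metakv change: %s", scanner.Text())
                    }
                    resp.Body.Close()
                    log.Printf("MetaKV observer stopped: %v", scanner.Err())
                }
            }

            func main() {
                observeMetaKV("http://127.0.0.1:8091/_metakv/cbbs/?feed=continuous")
            }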

            James Lee added a comment -

            Looking at the (cluster) logs from around the time of the failure, I see the following:

            ns_server crash(s)

            [error_logger:error,2021-08-27T16:22:38.398Z,ns_1@ec2-34-221-216-74.us-west-2.compute.amazonaws.com:memcached_config_mgr<0.3556.5>:ale_error_logger_handler:do_log:101]
            =========================CRASH REPORT=========================
              crasher:
                initial call: memcached_config_mgr:init/1
                pid: <0.3556.5>
                registered_name: memcached_config_mgr
                exception error: no match of right hand side value missing
                  in function  memcached_config_mgr:read_current_memcached_config/1 (src/memcached_config_mgr.erl, line 280)
                  in call from memcached_config_mgr:init/1 (src/memcached_config_mgr.erl, line 51)
                ancestors: [ns_server_sup,ns_server_nodes_sup,<0.3210.5>,
                              ns_server_cluster_sup,root_sup,<0.140.0>]
                message_queue_len: 0
                messages: []
                links: [<0.3327.5>]
                dictionary: []
                trap_exit: false
                status: running
                heap_size: 6772
                stack_size: 27
                reductions: 45503
              neighbours:
            

            This appears to happen a fair few times; in each case it looks like 'ns_server' is attempting to perform 'memcached'-related initialisation.

            I also see the following statement repeatedly in the logs (I'm unsure whether this is related; my local cluster also logs this information).

            ns_couchdb not ready

            [ns_server:debug,2021-08-27T16:34:58.767Z,ns_1@ec2-34-221-216-74.us-west-2.compute.amazonaws.com:<0.6272.6>:ns_server_nodes_sup:do_wait_link_to_couchdb_node:165]ns_couchdb is not ready: {badrpc,nodedown}
            

            During log collection, there also seems to be an issue with this node; we fail to hit the 'diag' endpoint with a 'connection refused' error:

            Connection refused

            ==============================================================================
            couchbase diags
            curl -sS --proxy  -K- http://127.0.0.1:8091/diag
            ==============================================================================
            curl: (7) Failed to connect to 127.0.0.1 port 8091: Connection refused
            

            James Lee added a comment -

            Furthermore, looking at the logs on the "good" node, I see lots of failures to receive (chronicle related) votes from the other node, resulting in election failures.

            Example failure

            [chronicle:debug,2021-08-27T16:28:07.036Z,ns_1@172.31.30.96:<0.16608.12>:chronicle_leader:election_worker_loop:909]Failed to get leader vote from 'ns_1@ec2-34-221-216-74.us-west-2.compute.amazonaws.com': {error,
                                                                                                      {have_leader,
                                                                                                       {leader,
                                                                                                        #{history_id =>
                                                                                                           <<"73795f6e1984df3a9907695fdfdfed3d">>,
                                                                                                          leader =>
                                                                                                           'ns_1@ec2-34-221-216-74.us-west-2.compute.amazonaws.com',
                                                                                                          status =>
                                                                                                           established,
                                                                                                          term =>
                                                                                                           {2,
                                                                                                            'ns_1@ec2-34-221-216-74.us-west-2.compute.amazonaws.com'}}}}}
            [chronicle:info,2021-08-27T16:28:07.036Z,ns_1@172.31.30.96:chronicle_leader<0.32013.11>:chronicle_leader:handle_election_result:661]Election failed: {error,{no_quorum,['ns_1@172.31.30.96'],
                                               {4,'ns_1@172.31.30.96'}}}
            

            I suspect this probably isn't an issue with 'cbbs' but more likely an issue with 'ns_server' (or perhaps an environmental issue).
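
            Quorum arithmetic (illustrative sketch)

            For reference, the 'no_quorum' result is just majority arithmetic: with two voting nodes a candidate needs both votes, so a single unreachable (or already-committed) peer is enough to fail the election. A minimal Go sketch of that check (not chronicle's implementation):

            package main

            import "fmt"

            // hasQuorum reports whether the votes gathered form a strict majority of
            // the voting cluster (the usual Raft-style rule).
            func hasQuorum(votes, clusterSize int) bool {
                return votes >= clusterSize/2+1
            }

            func main() {
                // Two-node cluster, only our own vote: 1 < 2, so the election fails
                // with no_quorum, much like the chronicle log above.
                fmt.Println(hasQuorum(1, 2)) // false
                fmt.Println(hasQuorum(2, 2)) // true
            }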

            James Lee added a comment -

            Looking at the 'test.log' file provided, I see the following:

            connection refused

            [2021-08-27 09:30:43,189] - [rest_client] [139718135183104] - ERROR - socket error while connecting to http://ec2-34-221-216-74.us-west-2.compute.amazonaws.com:8091/nodes/self error [Errno 111] Connection refused 
            [2021-08-27 09:30:46,224] - [rest_client] [139718135183104] - ERROR - socket error while connecting to http://ec2-34-221-216-74.us-west-2.compute.amazonaws.com:8091/nodes/self error [Errno 111] Connection refused 
            [2021-08-27 09:30:52,262] - [rest_client] [139718135183104] - ERROR - socket error while connecting to http://ec2-34-221-216-74.us-west-2.compute.amazonaws.com:8091/nodes/self error [Errno 111] Connection refused 
            [2021-08-27 09:33:52,502] - [rest_client] [139718135183104] - ERROR - Giving up due to [Errno 111] Connection refused! Tried http://ec2-34-221-216-74.us-west-2.compute.amazonaws.com:8091/nodes/self connect 7 times.
            

            I think there's enough evidence here to suggest this isn't a specific issue with 'cbbs'; it's simply trying (and failing) to connect to the nodes that it's been told about by 'ns_server' (I believe it's behaving as defined). Assigning back to Arunkumar Senthilnathan; I think we should first try to rule out a testware issue.

            Thanks,
            James

            Jake Rawsthorne added a comment (edited) -

            Thanks for looking into this, James; it turns out it was due to blocked ports.
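
            Port reachability check (illustrative sketch)

            For anyone hitting this again: a quick way to rule out blocked ports before a run is a plain TCP dial against the service ports involved. A small, hypothetical pre-flight check (the host and port list are taken from this ticket) could look like:

            package main

            import (
                "fmt"
                "net"
                "time"
            )

            func main() {
                host := "ec2-34-221-216-74.us-west-2.compute.amazonaws.com"
                // Ports seen in this ticket: REST (8091), backup HTTP (8097), backup gRPC (9124).
                for _, port := range []int{8091, 8097, 9124} {
                    addr := fmt.Sprintf("%s:%d", host, port)
                    conn, err := net.DialTimeout("tcp", addr, 5*time.Second)
                    if err != nil {
                        fmt.Printf("blocked/unreachable: %s (%v)\n", addr, err)
                        continue
                    }
                    conn.Close()
                    fmt.Printf("open: %s\n", addr)
                }
            }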


              People

              Assignee: Arunkumar Senthilnathan
              Reporter: Arunkumar Senthilnathan
              Votes: 0
              Watchers: 3

                Dates

                Created:
                Updated:
                Resolved:

                  Gerrit Reviews

                  There are no open Gerrit changes
