Couchbase Server / MB-47048

on_update_failure seen in 6.6.3 runs for timer tests


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 6.6.3
    • Fix Version/s: None
    • Component/s: couchbase-bucket
    • 6.6.3-9757
    • Untriaged
    • 1
    • Unknown

    Description

      After resizing a timer noop performance test to be 100% resident on all three buckets (source, metadata, and destination), we still see the LCB_ETIMEDOUT errors below on 6.6.3 runs of the test. Since we resized the previously DGM test to fit completely in memory, we can safely assume that the resident ratio (RR) is not what is causing the timeouts in this test.

      Resized Test Run : http://perf.jenkins.couchbase.com/job/themis/10918/consoleFull 

      Cbmonitor : http://cbmonitor.sc.couchbase.com/reports/html/?snapshot=themis_663-9757_process_timer_events_97e1 - Attached a chart of the RR on the source bucket

      Original Test Run : http://perf.jenkins.couchbase.com/job/themis/10773/consoleFull 

      Cbmonitor : http://cbmonitor.sc.couchbase.com/reports/html/?snapshot=themis_663-9744_process_timer_events_3c87 

      Please let us know if this is still a sizing issue. Also, the test does not do any bucket ops (reads or writes to the source or destination buckets) in the timer callback function; it is a noop test. Ideally we should not be seeing KV timeouts. Please correct me if I'm wrong.
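
      For reference, a minimal sketch of what a noop timer handler of this shape could look like (the actual perf-test1.js is not attached to this ticket, so the names and timer delay below are illustrative assumptions, not the real test code):

      // Hypothetical noop timer handler - illustrative only, not the actual perf-test1.js.
      // The timer callback deliberately performs no reads or writes to the source or
      // destination buckets.
      function NoopCallback(context) {
          // noop: no bucket operations here
      }

      function OnUpdate(doc, meta) {
          // Schedule a timer to fire shortly after the mutation, keyed off the document id.
          var fireAt = new Date(Date.now() + 30 * 1000);
          createTimer(NoopCallback, fireAt, meta.id, {"id": meta.id});
      }

      Note that the failing commands in the log below target the "eventing" metadata bucket ("b":"eventing" in the error context), so they presumably come from the timer bookkeeping the Eventing framework performs on the handler's behalf rather than from the handler body itself, which would explain why a noop handler can still surface LCB_ETIMEDOUT.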

      Eventing log 

      2021-06-18T05:08:17.898-07:00 [Info] eventing-consumer [worker_perf-test1_14:/tmp/127.0.0.1:8091_14_1135348043.sock:100888] [lcb,server L:646 I:3908674612] Failing command with error LCB_ETIMEDOUT (0x17): {"b":"eventing","i":"00000000e8f9a434/5e06e315d7db9bfc/106f43","l":"172.23.97.177:53098","r":"172.23.96.16:11210","s":"kv:get_cluster_config","t":2500000}
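      (The "t" field is the operation timeout in microseconds, so 2500000 means these requests exceeded a 2.5 s KV timeout budget.)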
      

      on_update_failure stats from the test log:

      18:32:32 "on_delete_failure": 0,
      18:32:32 "on_delete_success": 0,
      18:32:32 "on_update_failure": 7,
      18:32:32 "on_update_success": 99999993,
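
      That is 7 failures against 99,999,993 successes, i.e. roughly 7 in 100 million mutations (a failure rate of about 0.000007%), so the timeouts are rare rather than systemic.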
      

      Attachments

        Issue Links


          Activity

            vinayaka.kamath Vinayaka Kamath (Inactive) added a comment -

            There are thousands of slow operations already listed by the tracers; interestingly, most pertain to KV node 172.23.96.20:11210 or 172.23.96.23:11210.

            2021-06-18T05:04:58.202-07:00 [Info] eventing-consumer [worker_perf-test1_9:/tmp/127.0.0.1:8091_9_1135348043.sock:100782] [lcb,tracer L:158 I:913830256] Operations over threshold: {"count":2,"service":"kv","top":[{"last_local_address":"172.23.97.177:60420","last_local_id":"000000003677f170/c0cace665ed92008","last_operation_id":"upsert:0x93c86","last_remote_address":"172.23.96.20:11210","server_us":15,"total_us":520510},{"last_local_address":"172.23.97.177:60420","last_local_id":"000000003677f170/c0cace665ed92008","last_operation_id":"counter:0x957b3","last_remote_address":"172.23.96.20:11210","server_us":15,"total_us":511066}]}
            2021-06-18T05:04:58.205-07:00 [Info] eventing-consumer [worker_perf-test1_10:/tmp/127.0.0.1:8091_10_1135348043.sock:100817] [lcb,tracer L:158 I:32393160] Operations over threshold: {"count":2,"service":"kv","top":[{"last_local_address":"172.23.97.177:60426","last_local_id":"0000000001ee47c8/e25832bcbe16d73a","last_operation_id":"counter:0x9109f","last_remote_address":"172.23.96.20:11210","server_us":19,"total_us":993801},{"last_local_address":"172.23.97.177:60426","last_local_id":"0000000001ee47c8/e25832bcbe16d73a","last_operation_id":"upsert:0x92388","last_remote_address":"172.23.96.20:11210","server_us":11,"total_us":530348}]}
            2021-06-18T05:04:58.212-07:00 [Info] eventing-consumer [worker_perf-test1_10:/tmp/127.0.0.1:8091_10_1135348043.sock:100817] [lcb,tracer L:158 I:987529584] Operations over threshold: {"count":1,"service":"kv","top":[{"last_local_address":"172.23.97.177:49248","last_local_id":"000000003adc8170/6e374c3d6be49123","last_operation_id":"counter:0x9c885","last_remote_address":"172.23.96.23:11210","server_us":32,"total_us":507802}]}
            2021-06-18T05:04:58.263-07:00 [Info] eventing-consumer [worker_perf-test1_13:/tmp/127.0.0.1:8091_13_1135348043.sock:100863] [lcb,tracer L:158 I:621370650] Operations over threshold: {"count":2,"service":"kv","top":[{"last_local_address":"172.23.97.177:60256","last_local_id":"0000000025095d1a/dd91d013f7964430","last_operation_id":"upsert:0x92987","last_remote_address":"172.23.96.20:11210","server_us":27,"total_us":521313},{"last_local_address":"172.23.97.177:60256","last_local_id":"0000000025095d1a/dd91d013f7964430","last_operation_id":"upsert:0x94465","last_remote_address":"172.23.96.20:11210","server_us":19,"total_us":509984}]}
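
            Note that in each of these traces server_us is only 10-30 µs while total_us is roughly 0.5-1 s, i.e. almost all of the time is spent queued or waiting on the connection rather than executing on the KV engine.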
            

            ➜  cbcollect_info_ns_1@172.23.97.177_20210618-130235 cat ns_server.eventing.log | grep "Operations over threshold:" -c    
            1618
            

            I see 7 timeout errors, all in callbacks, at different times:

            ➜  cbcollect_info_ns_1@172.23.97.177_20210618-130235 cat ns_server.eventing.log | grep "\^Error"
            2021-06-18T05:08:55.962-07:00 [Info] eventing-consumer [worker_perf-test1_19:/tmp/127.0.0.1:8091_19_1135348043.sock:100974] perf-test1.js:5    ^Error    at OnUpdate (perf-test1.js:5:5)
            2021-06-18T05:09:03.334-07:00 [Info] eventing-consumer [worker_perf-test1_8:/tmp/127.0.0.1:8091_8_1135348043.sock:100774] perf-test1.js:5    ^Error    at OnUpdate (perf-test1.js:5:5)
            2021-06-18T05:09:53.953-07:00 [Info] eventing-consumer [worker_perf-test1_17:/tmp/127.0.0.1:8091_17_1135348043.sock:100957] perf-test1.js:5    ^Error    at OnUpdate (perf-test1.js:5:5)
            2021-06-18T05:11:03.835-07:00 [Info] eventing-consumer [worker_perf-test1_18:/tmp/127.0.0.1:8091_18_1135348043.sock:100961] perf-test1.js:5p    ^Error    at OnUpdate (perf-test1.js:5:5)
            2021-06-18T05:11:03.836-07:00 [Info] eventing-consumer [worker_perf-test1_22:/tmp/127.0.0.1:8091_22_1135348043.sock:101034] perf-test1.js:5    ^Error    at OnUpdate (perf-test1.js:5:5)
            2021-06-18T05:11:29.992-07:00 [Info] eventing-consumer [worker_perf-test1_0:/tmp/127.0.0.1:8091_0_1135348043.sock:100647] perf-test1.js:5    ^Error    at OnUpdate (perf-test1.js:5:5)
            2021-06-18T05:11:30.262-07:00 [Info] eventing-consumer [worker_perf-test1_0:/tmp/127.0.0.1:8091_0_1135348043.sock:100647] perf-test1.js:50    ^Error    at OnUpdate (perf-test1.js:5:5)
            

            Taking a deeper look at one of the timeouts

            2021-06-18T05:08:55.962-07:00 [Info] eventing-consumer [worker_perf-test1_19:/tmp/127.0.0.1:8091_19_1135348043.sock:100974] [lcb,server L:646 I:1644684396] Failing command with error LCB_ETIMEDOUT (0x17): {"b":"eventing","i":"000000006207e46c/b76c3c16a31e95bf/178ad9","l":"172.23.97.177:49126","r":"172.23.96.23:11210","s":"kv:incr","t":2500000}
            2021-06-18T05:08:55.962-07:00 [Info] eventing-consumer [worker_perf-test1_19:/tmp/127.0.0.1:8091_19_1135348043.sock:100974] perf-test1.js:5    ^Error    at OnUpdate (perf-test1.js:5:5)
            

            From the memcached logs on 172.23.96.23, I don't see any reported slow operations for the kv:incr or kv:set requests, but I do see slow runtimes reported for workers and STAT operations.

            2021-06-18T05:08:52.531332-07:00 WARNING 152: Slow operation. {"cid":"{"ip":"127.0.0.1","port":47702}/0","duration":"1220 ms","trace":"request=3767327668163457:1220985","command":"STAT","peer":"{"ip":"127.0.0.1","port":47702}","bucket":"bucket-1","packet":{"bodylen":8,"cas":0,"datatype":"raw","extlen":0,"key":"<ud>dcpagg.:</ud>","keylen":8,"magic":"ClientRequest","opaque":0,"opcode":"STAT","vbucket":0}}
            2021-06-18T05:08:52.980986-07:00 WARNING (eventing) Slow runtime for 'Running a flusher loop: shard 5' on thread writer_worker_1: 2106 ms
            2021-06-18T05:08:52.982318-07:00 WARNING (eventing) Slow runtime for 'Running a flusher loop: shard 13' on thread writer_worker_3: 2052 ms
            2021-06-18T05:08:53.042758-07:00 WARNING (bucket-1) Slow runtime for 'Running a flusher loop: shard 14' on thread writer_worker_0: 2595 ms
            2021-06-18T05:08:53.097292-07:00 WARNING (bucket-1) Slow runtime for 'Checkpoint Remover on vb:975' on thread nonIO_worker_5: 54 ms
            2021-06-18T05:08:53.233332-07:00 WARNING (bucket-1) Slow runtime for 'Running a flusher loop: shard 17' on thread writer_worker_2: 1671 ms
            2021-06-18T05:08:54.256921-07:00 WARNING (eventing) Slow runtime for 'Running a flusher loop: shard 21' on thread writer_worker_3: 1274 ms
            2021-06-18T05:08:55.225007-07:00 WARNING (eventing) Slow runtime for 'Running a flusher loop: shard 12' on thread writer_worker_2: 1991 ms
            2021-06-18T05:08:55.278707-07:00 WARNING (bucket-1) Slow runtime for 'Running a flusher loop: shard 21' on thread writer_worker_1: 2297 ms
            2021-06-18T05:08:56.651828-07:00 WARNING 154: Slow operation. {"cid":"{"ip":"127.0.0.1","port":45923}/0","duration":"3193 ms","trace":"request=3767329815652389:3193998","command":"STAT","peer":"{"ip":"127.0.0.1","port":45923}","bucket":"bucket-1","packet":{"bodylen":8,"cas":0,"datatype":"raw","extlen":0,"key":"<ud>dcpagg.:</ud>","keylen":8,"magic":"ClientRequest","opaque":0,"opcode":"STAT","vbucket":0}}
            2021-06-18T05:08:56.651887-07:00 WARNING (bucket-1) Slow runtime for 'Connection Manager' on thread nonIO_worker_3: 3120 ms
            2021-06-18T05:08:57.746438-07:00 WARNING (bucket-1) Slow runtime for 'Updating stat snapshot on disk' on thread writer_worker_0: 4694 ms
            2021-06-18T05:08:57.807960-07:00 WARNING (eventing) Slow runtime for 'Running a flusher loop: shard 15' on thread writer_worker_2: 2582 ms
            2021-06-18T05:08:57.879176-07:00 WARNING (bucket-1) Slow runtime for 'Running a flusher loop: shard 9' on thread writer_worker_3: 2660 ms
            2021-06-18T05:08:57.881930-07:00 WARNING (bucket-1) Slow runtime for 'Running a flusher loop: shard 13' on thread writer_worker_1: 2603 ms
            2021-06-18T05:08:58.040434-07:00 WARNING (bucket-1) Slow runtime for 'Checkpoint Remover on vb:782' on thread nonIO_worker_0: 57 ms
            2021-06-18T05:08:58.236809-07:00 WARNING (bucket-1) Slow runtime for 'Checkpoint Remover on vb:898' on thread nonIO_worker_0: 61 ms
            2021-06-18T05:08:58.591443-07:00 WARNING 151: Slow operation. {"cid":"{"ip":"127.0.0.1","port":58852}/0","duration":"1280 ms","trace":"request=3767333668703493:1280562","command":"STAT","peer":"{"ip":"127.0.0.1","port":58852}","bucket":"bucket-1","packet":{"bodylen":8,"cas":0,"datatype":"raw","extlen":0,"key":"<ud>dcpagg.:</ud>","keylen":8,"magic":"ClientRequest","opaque":0,"opcode":"STAT","vbucket":0}}
            2021-06-18T05:09:00.324647-07:00 WARNING (eventing) Slow runtime for 'Running a flusher loop: shard 7' on thread writer_worker_0: 2578 ms
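
            Given the 2.5 s timeout budget, the kv:incr that failed at 05:08:55.962 would have been dispatched around 05:08:53.4, which overlaps the window above in which 172.23.96.23 was reporting multi-second flusher loops and a 3193 ms STAT (dcpagg) call that completed at 05:08:56.651.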
            

            Surprisingly, I don't see any logs that indicate a slow increment operation as reported by the SDK (is this due to the slow runtime of the workers?).


            vinayaka.kamath Vinayaka Kamath (Inactive) added a comment -

            Daniel Owen Can someone from the KV team please take a look at this? Please let us know whether this seems like a sizing issue or not.

            owend Daniel Owen added a comment -

            Hi Vinayaka Kamath I see the slow STAT with key dcpagg, so I think you are suffering from MB-38978.
            Operations can get stuck behind these slow stats calls, which block a front-end thread.

            We are planning to resolve this in 7.0.1.

            jeelan.poola Jeelan Poola added a comment -

            Daniel Owen Should we resolve it as a Dup of MB-38978?

            owend Daniel Owen added a comment -

            Hi Jeelan Poola, yes I think that makes sense - thanks


            ashwin.govindarajulu Ashwin Govindarajulu added a comment -

            Closing duplicate bugs.


            People

              vikas.chaudhary Vikas Chaudhary
              prajwal.kirankumar Prajwal Kiran Kumar (Inactive)
              Votes:
              0 Vote for this issue
              Watchers:
              7 Start watching this issue

              Dates

                Created:
                Updated:
                Resolved:

                Gerrit Reviews

                  There are no open Gerrit changes
