Couchbase Server

MB-37105: [System test]: index rebalance failed with linked_process_died

Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Won't Do
    • Affects Version: 6.5.0
    • Fix Version: 6.5.0
    • Component: secondary-index

    Description

      Build: 6.5.0-4908 (issue not seen on 4890)

      Test: MH longevity with durability

      Cycle: 1st

      Day: 1st

      Test Step:

      Adding 1 KV node, failing over 2 KV nodes

      [2019-11-29T05:39:11-08:00, sequoiatools/couchbase-cli:6.5:6f3223] server-add -c 172.23.108.103:8091 --server-add https://172.23.106.100 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data
      [2019-11-29T05:44:34-08:00, sequoiatools/couchbase-cli:6.5:9bd07d] failover -c 172.23.108.103:8091 --server-failover 172.23.96.148:8091 -u Administrator -p password
      [2019-11-29T05:54:36-08:00, sequoiatools/couchbase-cli:6.5:49a079] failover -c 172.23.108.103:8091 --server-failover 172.23.97.239:8091 -u Administrator -p password --force
      [2019-11-29T05:55:12-08:00, sequoiatools/couchbase-cli:6.5:a64cf0] rebalance -c 172.23.108.103:8091 -u Administrator -p password
      Error occurred on container - sequoiatools/couchbase-cli:6.5:[rebalance -c 172.23.108.103:8091 -u Administrator -p password]

      docker logs a64cf0
      docker start a64cf0

      Unable to display progress bar on this os
      ERROR: Rebalance failed. See logs for detailed reason. You can try again.
      [2019-11-29T06:07:29-08:00, sequoiatools/cmd:1b4508] 60

      Rebalance failed

      [user:error,2019-11-29T06:07:13.393-08:00,ns_1@172.23.108.103:<0.12064.0>:ns_orchestrator:log_rebalance_completion:1445]Rebalance exited with reason {service_rebalance_failed,index,
                                    {agent_died,<25872.13651.49>,
                                     {linked_process_died,<25872.27917.65>,
                                      {timeout,
                                       {gen_server,call,
                                        [<25872.14398.49>,
                                         {call,"ServiceAPI.GetTaskList",
                                          #Fun<json_rpc_connection.0.102434519>},
                                         60000]}}}}}.
      Rebalance Operation Id = d95b685622ed7bc5b24e520d79354c33 


          Activity

            jeelan.poola Jeelan Poola added a comment -

            No changes related to rebalance in GSI between the mentioned builds.
            CHANGELOG for indexing:

            • Commit: 5dde6ffd190794a501661f9f26741d27b2e6f3cc in build: 6.5.0-4894
              Merge remote-tracking branch 'couchbase/unstable' into HEAD

            http://ci2i-unstable.northscale.in/gsi-26.11.2019-16.01.pass.html

            Change-Id: I3d26db6e7f1ae8f5d3b8e5887eddc8945eb1f38a

            • Commit: 1232169688e6a5e98b39d54447974a9ca68a9658 in build: 6.5.0-4894
              MB-36964: Fix recoverableCreateIndex in case of planner error

            If planner fails to generate a solution that satisfies the
            constraints, a round robin approach is used to allow index
            creation. But the layout generated by the round robin approach
            is not used during commit phase.

            The fix ensures that the layout generated by the round robin
            approach is used during commit phase.

            Change-Id: Ie108946cb75c0d81e3c0d92a5742c7e062866531

            varun.velamuri Varun Velamuri added a comment - - edited

            ns_server made a PrepareTopologyChange request at 2019-11-29T06:06:27.319-08:00

            [json_rpc:debug,2019-11-29T06:06:27.319-08:00,ns_1@172.23.99.11:json_rpc_connection-index-service_api<0.14398.49>:json_rpc_connection:handle_call:158]sending jsonrpc call:{[{jsonrpc,<<"2.0">>},
                                   {id,455},
                                   {method,<<"ServiceAPI.PrepareTopologyChange">>},
                                   {params,
                                    [{[{id,<<"214c37347b241b0fb84989e088a4a6c6">>},
                                       {currentTopologyRev,null},
                                       {type,<<"topology-change-rebalance">>},
                                       {keepNodes,
                                            {[{nodeInfo,
                                            {[{nodeId,
                                               <<"b0b21e4fe6c3ecb53a8ba9363ae2021e">>},
                                              {priority,4},
                                              {opaque,null}]}},
                                           {recoveryType,<<"recovery-full">>}]}]},
                                       {ejectNodes,[]}]}]}]}
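
            For reference, here is a small, purely illustrative Go sketch of how such a payload can be modeled on the receiving side. The struct and field names below simply mirror the JSON keys in the logged request; they are an assumption for illustration, not the actual cbauth/service types used by the indexer.

            package main

            import (
                "encoding/json"
                "fmt"
            )

            // Hypothetical types mirroring the PrepareTopologyChange params shown above.
            type NodeInfo struct {
                NodeID   string      `json:"nodeId"`
                Priority int         `json:"priority"`
                Opaque   interface{} `json:"opaque"`
            }

            type KeepNode struct {
                NodeInfo     NodeInfo `json:"nodeInfo"`
                RecoveryType string   `json:"recoveryType"`
            }

            type TopologyChange struct {
                ID                 string     `json:"id"`
                CurrentTopologyRev []byte     `json:"currentTopologyRev"`
                Type               string     `json:"type"`
                KeepNodes          []KeepNode `json:"keepNodes"`
                EjectNodes         []NodeInfo `json:"ejectNodes"`
            }

            func main() {
                // One keepNodes entry from the logged request, reproduced as JSON.
                payload := `{
                  "id": "214c37347b241b0fb84989e088a4a6c6",
                  "currentTopologyRev": null,
                  "type": "topology-change-rebalance",
                  "keepNodes": [
                    {"nodeInfo": {"nodeId": "b0b21e4fe6c3ecb53a8ba9363ae2021e", "priority": 4, "opaque": null},
                     "recoveryType": "recovery-full"}
                  ],
                  "ejectNodes": []
                }`

                var tc TopologyChange
                if err := json.Unmarshal([]byte(payload), &tc); err != nil {
                    panic(err)
                }
                fmt.Printf("change %s keeps %d node(s), ejects %d\n", tc.ID, len(tc.KeepNodes), len(tc.EjectNodes))
            }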
            

            On node 172.23.99.11 (UUID: b0b21e4fe6c3ecb53a8ba9363ae2021e), indexer got a topology change request at 2019-11-29T06:06:27.320

            2019-11-29T06:06:27.320-08:00 [Info] ServiceMgr::PrepareTopologyChange {214c37347b241b0fb84989e088a4a6c6 [] topology-change-rebalance [{{07443af6d03155a6f99d124dea1c97ea 4 <nil>} recovery-full} {{2460550db29f3c2e12c3413400b056c7 4 <nil>} recovery-full} {{2aed3e2db80e725dde813151e9cdc693 4 <nil>} recovery-full} {{68e031b4ec2bf7040f6585537f93d273 4 <nil>} recovery-full} {{b0b21e4fe6c3ecb53a8ba9363ae2021e 4 <nil>} recovery-full}] []}

            The DDL-running check completed 6 seconds later:

            2019-11-29T06:06:33.427-08:00 [Info] ServiceMgr::prepareRebalance Found DDL Running []
            2019-11-29T06:06:33.427-08:00 [Info] ServiceMgr::prepareRebalance Init Prepare Phase
            

            The notification sent to the cluster manager was processed almost 47 seconds later.

            2019-11-29T06:07:20.822-08:00 [Info] ClustMgr:handleSetLocalValue Key RebalanceRunning Value
            2019-11-29T06:07:20.823-08:00 [Info] ClustMgr:handleRebalanceRunning &{61 [] {false false false false false false false} 0 false
            

            These delays are possibly due to the stream repair going on at the indexer side. If there are multiple queued requests at the indexer side, the indexer has to process all of them before processing the rebalance request.

            Rebalance timed out at: 2019-11-29T06:07:13.393

            2019-11-29T06:07:13.393-08:00, ns_orchestrator:0:critical:message(ns_1@172.23.108.103) - Rebalance exited with reason {service_rebalance_failed,index,
                                          {agent_died,<25872.13651.49>,
                                           {linked_process_died,<25872.27917.65>,
                                            {timeout,
                                             {gen_server,call,
                                              [<25872.14398.49>,
                                               {call,"ServiceAPI.GetTaskList",
                                                #Fun<json_rpc_connection.0.102434519>},
                                               60000]}}}}}.
            

            The PrepareTopologyChange request was made at 2019-11-29T06:06:27.319-08:00 and rebalance timed out at 2019-11-29T06:07:13.393-08:00 (i.e. after 46 seconds), while the timeout seems to be 60 seconds. Not sure if this is expected. Requesting the ns_server team to take a look at this issue and confirm whether the rebalance timeout is as expected.

            Aliaksey Artamonau Aliaksey Artamonau added a comment - - edited

            This is the call that times out:

            [json_rpc:debug,2019-11-29T06:06:13.356-08:00,ns_1@172.23.99.11:json_rpc_connection-index-service_api<0.14398.49>:json_rpc_connection:handle_call:158]sending jsonrpc call:{[{jsonrpc,<<"2.0">>},
                                   {id,452},
                                   {method,<<"ServiceAPI.GetTaskList">>},
                                   {params,[{[{rev,<<"AAAAAAAAAAQ=">>},
                                              {timeout,30000}]}]}]}
            

            We ask the indexer to return the task list within 30 seconds; it fails to do so even in 60 seconds. Hence the failure.
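
            To make the two timeouts concrete, here is a minimal, purely illustrative Go sketch (hypothetical helper names, not the actual ns_server or indexer code): the request carries a 30-second long-poll timeout for the service to honour, while the caller enforces its own 60-second hard deadline on the whole call, and it is the latter that fires in this ticket.

            package main

            import (
                "errors"
                "fmt"
                "time"
            )

            // callGetTaskList is a hypothetical stand-in for issuing the
            // ServiceAPI.GetTaskList JSON-RPC call. pollTimeoutMs travels inside the
            // request params (30000 in the logged call) and tells the service how long
            // it may hold the long poll; hardDeadline is the caller-side limit on the
            // whole call (the 60000 ms gen_server:call timeout in the logged failure).
            func callGetTaskList(pollTimeoutMs int, hardDeadline time.Duration,
                service func(pollTimeoutMs int) string) (string, error) {

                reply := make(chan string, 1)
                go func() { reply <- service(pollTimeoutMs) }()

                select {
                case r := <-reply:
                    return r, nil
                case <-time.After(hardDeadline):
                    // This is the branch hit here: no reply even after 60 seconds.
                    return "", errors.New("timeout waiting for ServiceAPI.GetTaskList")
                }
            }

            func main() {
                // A service whose main loop is blocked (e.g. on bucket rollbacks) and
                // so cannot honour even the 30-second poll timeout.
                busyService := func(pollTimeoutMs int) string {
                    time.Sleep(90 * time.Second)
                    return "task list"
                }

                if _, err := callGetTaskList(30000, 60*time.Second, busyService); err != nil {
                    fmt.Println("rebalance aborted:", err)
                }
            }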


            varun.velamuri Varun Velamuri added a comment -

            Aliaksey Artamonau, thanks for the update. I was clearly looking at the wrong call. I now understand why rebalance timed out at 2019-11-29T06:07:13.393-08:00.

            I will continue my investigation on the indexer side to see why the indexer could not respond within 30 seconds.

            varun.velamuri Varun Velamuri added a comment - - edited

            When multiple buckets are rolling back, there is one code path on the indexer side that can block the indexer main loop. The following sequence of events can trigger it (see the sketch after this list):

            1. Timekeeper asks the indexer to initiate recovery.
            2. The indexer initiates recovery and tries to process the rollback.
            3. In processRollback(), the indexer sends a message to the storage manager and waits for its response. The storage manager responds to the indexer immediately and starts rolling back all indexes.
            4. Now, if there is another rollback request belonging to a different bucket, the indexer has to wait until the first rollback request is completed.
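
            A minimal, purely illustrative Go sketch of this pattern (hypothetical names, not the actual indexer code): a single main loop handles messages strictly in arrival order, so a slow rollback for one bucket delays every message queued behind it, including the rebalance-related requests.

            package main

            import (
                "fmt"
                "time"
            )

            type msg struct {
                kind   string // "rollback", "checkDDL", "setRebalanceRunning", ...
                bucket string
            }

            // mainLoop is a hypothetical stand-in for the indexer main loop: messages
            // are handled one at a time, so a long-running rollback blocks everything
            // queued behind it.
            func mainLoop(start time.Time, inbox <-chan msg, done chan<- struct{}) {
                for m := range inbox {
                    if m.kind == "rollback" {
                        // Stand-in for waiting on the storage manager to finish the
                        // rollback for this bucket (can take many seconds).
                        time.Sleep(10 * time.Second)
                    }
                    // Cheap requests (DDL-running check, setting RebalanceRunning) do
                    // no real work here, yet they still wait their turn.
                    fmt.Printf("%-20s bucket=%-10s handled %v after start\n",
                        m.kind, m.bucket, time.Since(start).Round(time.Second))
                }
                close(done)
            }

            func main() {
                inbox := make(chan msg, 16)
                done := make(chan struct{})
                go mainLoop(time.Now(), inbox, done)

                // Two bucket rollbacks arrive first, then the rebalance-related
                // requests; the latter are delayed until both rollbacks complete.
                inbox <- msg{"rollback", "CUSTOMER"}
                inbox <- msg{"rollback", "WAREHOUSE"}
                inbox <- msg{"checkDDL", "-"}
                inbox <- msg{"setRebalanceRunning", "-"}
                close(inbox)

                <-done // wait for the loop to drain
            }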

            For the indexer on node 172.23.99.11, consider the following logs:

            2019-11-29T06:06:11.110-08:00 [Info] Indexer::handleInitRecovery StreamId MAINT_STREAM Bucket CUSTOMER STREAM_RECOVERY
            2019-11-29T06:06:11.110-08:00 [Info] StorageMgr::handleRollback rollbackTs is bucket: CUSTOMER, vbuckets: 1024 Crc64: 0 snapType NO_SNAP -
            2019-11-29T06:06:11.115-08:00 [Info] Indexer::handleInitRecovery StreamId MAINT_STREAM Bucket WAREHOUSE STREAM_RECOVERY

            The request to handleInitRecovery for bucket WAREHOUSE arrived at 2019-11-29T06:06:11.115 but could be processed only after the first rollback request (i.e. for bucket CUSTOMER) had finished. The next request the indexer could process from its main loop was the prepare-done message; it looks like this message was processed after the rollback for bucket CUSTOMER finished:

            2019-11-29T06:06:22.080-08:00 [Info] Indexer::handlePrepareDone StreamId MAINT_STREAM Bucket STOCK
            2019-11-29T06:06:22.080-08:00 [Info] StorageMgr::handleRollback rollbackTs is bucket: WAREHOUSE, vbuckets: 1024 Crc64: 0 snapType NO_SNAP -


            varun.velamuri Varun Velamuri added a comment -

            Applying the same logic to the timeframe in which rebalance failed:

            Timekeeper sent an initiateRecovery for bucket STOCK:

            2019-11-29T06:06:22.081-08:00 [Info] Timekeeper::initiateRecovery StreamId MAINT_STREAM Bucket STOCK SessionId 1 RestartTs bucket: STOCK, vbuckets:1024 Crc64: 14463349105642428561 snapType INMEM_SNAP -
            

            However, the indexer was stuck waiting for the rollback of bucket WAREHOUSE to be processed (index 13355470894422758037 belongs to bucket WAREHOUSE):

            2019-11-29T06:06:22.123-08:00 [Info] StorageMgr::handleRollback Rollback Index: 13355470894422758037 PartitionId: 0 SliceId: 0 To Snapshot SnapshotInfo: count:3 committed:false 
            

            The indexer got the prepare topology change at 2019-11-29T06:06:27.320-08:00:

            2019-11-29T06:06:27.320-08:00 [Info] ServiceMgr::PrepareTopologyChange
            

            As a part of prepareTopologyChange, the rebalance service manager sends a message to the indexer to check whether DDL is running. However, it could be processed only after the storage manager finished the rollback for bucket WAREHOUSE and the indexer processed initiateRecovery for bucket STOCK. In the meantime, there were multiple initiateRecovery requests from the timekeeper:

            2019-11-29T06:06:32.427-08:00 [Info] Timekeeper::initiateRecovery StreamId MAINT_STREAM Bucket NEW_ORDER SessionId 3 RestartTs bucket: NEW_ORDER, vbuckets: 1024 Crc64: 15483437431460244926 snapType INMEM_SNAP -
            2019-11-29T06:06:32.429-08:00 [Info] Timekeeper::initiateRecovery StreamId MAINT_STREAM Bucket ITEM SessionId 3 RestartTs bucket: ITEM, vbuckets: 1024 Crc64: 11607813187078538988 snapType INMEM_SNAP -
            2019-11-29T06:06:32.430-08:00 [Info] Timekeeper::initiateRecovery StreamId MAINT_STREAM Bucket HISTORY SessionId 1 RestartTs bucket: HISTORY, vbuckets: 1024 Crc64: 17135845864963193340 snapType INMEM_SNAP -
            2019-11-29T06:06:32.432-08:00 [Info] Timekeeper::initiateRecovery StreamId MAINT_STREAM Bucket ORDERS SessionId 1 RestartTs bucket: ORDERS, vbuckets: 1024 Crc64: 7597314664672643693 snapType INMEM_SNAP -
            

            The indexer could process the initiateRecovery request for bucket STOCK only at 2019-11-29T06:06:33.406-08:00:

            2019-11-29T06:06:33.406-08:00 [Info] Indexer::handleInitRecovery StreamId MAINT_STREAM Bucket STOCK STREAM_RECOVERY
            

            Immediately after this, the DDL-running check was processed by the indexer:

            2019-11-29T06:06:33.427-08:00 [Info] ServiceMgr::prepareRebalance Found DDL Running []
            2019-11-29T06:06:33.427-08:00 [Info] ServiceMgr::prepareRebalance Init Prepare Phase
            

            As a part of prepare topology change, the rebalance manager sends another request to the cluster manager via the indexer. This request could be processed only after all the initiateRecovery requests for the different buckets were processed, which happened at 2019-11-29T06:07:20.822-08:00. By this time the rebalance had timed out:

            2019-11-29T06:07:20.822-08:00 [Info] ClustMgr:handleSetLocalValue Key RebalanceRunning Value
            2019-11-29T06:07:20.823-08:00 [Info] ClustMgr:handleRebalanceRunning &{61 [] {false false false false false false false} 0 false
            

            varun.velamuri Varun Velamuri added a comment - - edited

            Filed improvement MB-37132 for CC to make the storage manager handle rollback requests from multiple buckets concurrently (a rough sketch of the idea follows).

            Closing this issue as Won't Do for MH.
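
            A minimal, purely illustrative Go sketch of that improvement (hypothetical names, not the MB-37132 implementation): the storage manager dispatches each bucket's rollback to its own worker, so rollbacks for different buckets no longer serialize behind one another and the main loop is freed up sooner.

            package main

            import (
                "fmt"
                "sync"
                "time"
            )

            // rollbackBucket is a hypothetical stand-in for rolling back all indexes of
            // one bucket; with one worker per bucket, a second bucket's rollback no
            // longer waits for the first to finish.
            func rollbackBucket(bucket string) {
                time.Sleep(10 * time.Second) // simulated per-bucket rollback work
                fmt.Println("rollback finished for bucket", bucket)
            }

            func main() {
                buckets := []string{"CUSTOMER", "WAREHOUSE", "STOCK"}

                start := time.Now()
                var wg sync.WaitGroup
                for _, b := range buckets {
                    wg.Add(1)
                    go func(bucket string) { // one worker per bucket
                        defer wg.Done()
                        rollbackBucket(bucket)
                    }(b)
                }
                wg.Wait()

                // Concurrent workers finish in roughly the time of the slowest single
                // rollback (~10s here) instead of the sum of all of them (~30s).
                fmt.Println("all rollbacks done in", time.Since(start).Round(time.Second))
            }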


            People

              Assignee: varun.velamuri Varun Velamuri
              Reporter: vikas.chaudhary Vikas Chaudhary
              Votes: 0
              Watchers: 6
