Couchbase Server / MB-53186

[6.6.5 build 10104] - Multiple primary Indexes rollback to zero after KV node auto failover


Details

    • Type: Bug
    • Status: Closed
    • Priority: Major
    • Resolution: Duplicate
    • Affects Version/s: 6.6.5
    • Fix Version/s: 6.6.5, 7.1.2
    • Component/s: secondary-index
    • Labels: None
    • Environment: Enterprise Edition 6.6.5 build 10104
    • Triage: Untriaged
    • Operating System: Centos 64-bit
    • 1
    • Is this a Regression?: No

    Description

      Steps to Repro
      1. Create a 6-node cluster with 3 KV, 2 index, and 1 N1QL nodes.
      2. Create buckets/data/indexes, push the buckets into DGM, and ensure the indexes are in DGM as well. Start running queries in the background at request_plus consistency (a hedged SDK sketch follows the script below).
      3. Run the following script (used to validate MB-53057), which kills memcached (on 172.23.100.34), waits for auto failover to kick in, performs a full recovery, and then rebalances, in an infinite loop.

      #!/bin/bash
      # Crash/recover loop: kill memcached on this KV node, wait for auto
      # failover to kick in, then fully recover the node and rebalance it back.
      while :
      do
          echo "Killing memcached..."
          kill -9 $(pidof memcached)
          echo "Waiting for auto failover to kick in..."
          sleep 180
          echo "Listing node status post auto failover..."
          /opt/couchbase/bin/couchbase-cli server-list -c localhost:8091 --username Administrator --password password
          sleep 30
          echo "Starting full recovery..."
          /opt/couchbase/bin/couchbase-cli recovery -c localhost:8091 --username Administrator --password password --server-recovery 172.23.100.34:8091 --recovery-type full
          sleep 30
          echo "Starting rebalance after recovering the failed-over node..."
          /opt/couchbase/bin/couchbase-cli rebalance -c localhost:8091 --username Administrator --password password
          sleep 4000
          echo "Listing rebalance status..."
          /opt/couchbase/bin/couchbase-cli rebalance-status -c localhost:8091 --username Administrator --password password
          sleep 30
          echo "Listing node status post rebalance..."
          /opt/couchbase/bin/couchbase-cli server-list -c localhost:8091 --username Administrator --password password
          sleep 300
      done
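
      For reference, the background queries from step 2 at request_plus consistency could look roughly like this with the Go SDK (a hedged sketch using gocb v2; the connection string, credentials, and statement here are placeholders, not taken from the actual test harness):

      package main

      import (
          "fmt"
          "time"

          gocb "github.com/couchbase/gocb/v2"
      )

      func main() {
          // Placeholder endpoint/credentials; the real test harness differs.
          cluster, err := gocb.Connect("couchbase://172.23.100.34", gocb.ClusterOptions{
              Authenticator: gocb.PasswordAuthenticator{Username: "Administrator", Password: "password"},
          })
          if err != nil {
              panic(err)
          }
          for {
              // request_plus: the index must catch up to the query's start time
              // before results are returned.
              rows, err := cluster.Query("SELECT COUNT(*) FROM `test4`", &gocb.QueryOptions{
                  ScanConsistency: gocb.QueryScanConsistencyRequestPlus,
              })
              if err != nil {
                  fmt.Println("query failed:", err)
              } else {
                  var row interface{}
                  for rows.Next() {
                      _ = rows.Row(&row)
                  }
                  _ = rows.Close()
              }
              time.Sleep(time.Second)
          }
      }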
      

      This is exactly the same test as in MB-53180; however, in this case it appears that two primary indexes rolled back to zero.

      172.23.106.159 : index

      /opt/couchbase/var/lib/couchbase/logs/indexer.log:2022-07-29T05:07:37.832-07:00 [Info] StorageMgr::handleRollback Rollback Index: 10943164515644793993 PartitionId: 0 SliceId: 0 To Zero 
      /opt/couchbase/var/lib/couchbase/logs/indexer.log:2022-07-29T05:07:37.832-07:00 [Info] StorageMgr::rollbackAllToZero MAINT_STREAM test4
      /opt/couchbase/var/lib/couchbase/logs/indexer.log:2022-07-29T05:07:37.940-07:00 [Info] StorageMgr::handleRollback Rollback Index: 10943164515644793993 PartitionId: 0 SliceId: 0 To Zero 
      

      172.23.106.163 : index

      /opt/couchbase/var/lib/couchbase/logs/indexer.log:2022-07-29T05:09:02.679-07:00 [Info] StorageMgr::handleRollback Rollback Index: 3721238277937800766 PartitionId: 0 SliceId: 0 To Zero 
      /opt/couchbase/var/lib/couchbase/logs/indexer.log:2022-07-29T05:09:02.679-07:00 [Info] StorageMgr::rollbackAllToZero MAINT_STREAM test2
      /opt/couchbase/var/lib/couchbase/logs/indexer.log:2022-07-29T05:09:02.784-07:00 [Info] StorageMgr::handleRollback Rollback Index: 3721238277937800766 PartitionId: 0 SliceId: 0 To Zero 
      

      cbcollect_info attached.

      Attachments

        Issue Links


          Activity

            Varun Velamuri added a comment:

            Considering the rollback on node 106.159

            a. The indexer received a StreamEnd, which triggered stream repair

            2022-07-29T05:05:46.012-07:00 [Info] TK StreamEnd MAINT_STREAM test4 95 238793007962108 337127
            2022-07-29T05:05:46.012-07:00 [Info] Timekeeper::handleStreamEnd RepairStream due to StreamEnd. StreamId MAINT_STREAM MutationMeta Bucket: test4 Vbucket: 95 Vbuuid: 238793007962108 Seqno: 337127 FirstSnap: false
            

             This was due to the memcached exit

            2022-07-29T05:05:46.203-07:00, ns_log:0:info:message(ns_1@172.23.100.34) - Service 'memcached' exited with status 137. Restarting. Messages:
            

            b. The indexer triggered repair with whatever snapshot it had in memory

            2022-07-29T05:05:50.914-07:00 [Info] Timekeeper::sendRestartMsg Received KV Repair Msg For Stream MAINT_STREAM Bucket test4. Attempting Stream Repair.
            ...
            2022-07-29T05:05:50.914-07:00 [Info] Indexer::startBucketStream Stream: MAINT_STREAM Bucket: test4 SessionId 11 RestartTS bucket: test4, vbuckets: 1024 Crc64: 15976588854664318510 snapType FORCE_COMMIT -
                {vbno, vbuuid, seqno, snapshot-start, snapshot-end}
                {    0     c653a5265be9     363572     363572     363572}
                {    1     7b222d71ff95     444385     444385     444385}
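
            Each row of that RestartTS dump is a per-vbucket restart point. As a hedged sketch (illustrative Go with assumed field names, not indexer source), the tuple carries:

            // Illustrative only: one row of the RestartTS dump above. The indexer
            // asks DCP to resume each vbucket from this point; the vbuuid names the
            // history branch, and the snapshot range brackets the last snapshot applied.
            type restartPoint struct {
                VBNo      uint16 // vbno
                VBUUID    uint64 // vbuuid
                Seqno     uint64 // seqno to resume from
                SnapStart uint64 // snapshot-start
                SnapEnd   uint64 // snapshot-end
            }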
            

            c. The indexer was asked to roll back to a non-zero seqno

            2022-07-29T05:06:01.248-07:00 [Info] TK StreamBegin MAINT_STREAM test4 0 218062555339753 363569 11
            2022-07-29T05:06:01.248-07:00 [Warn] Timekeeper::handleStreamBegin StreamBegin rollback for StreamId MAINT_STREAM Bucket test4 vbucket 0. Rollback (363569,     c653a5265be9).
            

            This was because memcached branched at a different point

            2022-07-29T05:06:01.221867-07:00 WARNING 218: (test4) DCP (Producer) eq_dcpq:secidx:proj-test4-MAINT_STREAM_TOPIC_34731e3b41effefed0e0b0f46bd7e5b0-2371691353212279640/0 - (vb:0) Stream request requires rollback to seqno:363569 because consumer ahead of producer - producer upper at 363569. Client requested seqnos:\{363572,18446744073709551615} snapshot:\{363572,363572} uuid:218062555339753
            

            2022-07-29T05:05:49.999468-07:00 INFO (test4) VBucket: created vb:0 with state:active initialState:active lastSeqno:363569 persistedRange:\{363569,363569} max_cas:1659096341942894592 uuid:218062555339753 topology:[["ns_1@172.23.100.34","ns_1@172.23.105.37"]]
            2022-07-29T05:05:49.999512-07:00 INFO (test4) Warmup::createVBuckets: vb:0 created new failover entry with uuid:147751465700978 and seqno:363569 due to unclean shutdown
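
            Taking the two failure modes in this ticket together (consumer ahead of producer here, and the missing-vbuuid case hit later by vb:280), the producer-side decision can be approximated as follows (a minimal sketch with hypothetical names, not the actual kv_engine logic):

            // Hedged sketch, not kv_engine source: the two rollback cases seen in
            // this ticket, driven by the vbucket's failover table.
            type failoverEntry struct {
                UUID  uint64 // branch id (vbuuid)
                Seqno uint64 // seqno at which the branch was created
            }

            // rollbackPoint returns (seqno, true) when the consumer must roll back.
            func rollbackPoint(table []failoverEntry, reqUUID, startSeqno, producerHigh uint64) (uint64, bool) {
                for _, e := range table {
                    if e.UUID == reqUUID {
                        // Known branch, but the consumer can be ahead of the producer,
                        // as with vb:0 here (requested 363572, producer upper at 363569).
                        if startSeqno > producerHigh {
                            return producerHigh, true
                        }
                        return 0, false
                    }
                }
                // UUID absent from the failover table: no common history, so the
                // stream must restart from zero (vb:280 later hits this case).
                return 0, true
            }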
            

            d. On seeing the rollback, the indexer triggered repair using a disk snapshot

            2022-07-29T05:06:32.268-07:00 [Info] Timekeeper::repairStream need rollback for MAINT_STREAM test4 11. Sending Init Prepare.
            ...
            2022-07-29T05:06:34.830-07:00 [Info] StorageMgr::openSnapshot IndexInst:10943164515644793993 Partition:0 Attempting to open snapshot (SnapshotInfo: count:2762877 committed:false)
            2022-07-29T05:06:35.447-07:00 [Info] Indexer::startBucketStream Stream: MAINT_STREAM Bucket: test4 SessionId 12 RestartTS bucket: test4, vbuckets: 1024 Crc64: 17143015797484481331 snapType FORCE_COMMIT -
                {vbno, vbuuid, seqno, snapshot-start, snapshot-end}
                {    0     c653a5265be9     362284     362284     362284}
                {    1     7b222d71ff95     442728     442725     442728}
                {    2     f0f9a32bc345     278961     278961     278961}
            

            e. This restart succeeded

            2022-07-29T05:06:39.052-07:00 [Info] Timekeeper::repairStream Nothing to repair for Stream MAINT_STREAM and Bucket test4
            

            f. Later, the indexer received a connection error

            2022-07-29T05:06:54.985-07:00 [Info] Timekeeper::handlePoolChange streamId: MAINT_STREAM, bucket:test4, vbList: 
            [0 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33 34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49 50 51 52 53 54 55 56 57 58 59 60 61 62 63 64 65 66 67 68 69 70 71 72 73 74 75 76 77 78 79 80 81 82 83 84 85 86 87 88 89 90 91 92 93 94 95 96 97 98 99 100 101 102 103 104 105 106 107 108 109 110 111 112 113 114 115 116 117 118 119 120 121 122 123 124 125 126 127 128 129 130 131 132 133 134 135 136 137 138 139 140 141 142 143 144 145 146 147 148 149 150 151 152 153 154 155 156 157 158 159 160 161 162 163 164 165 166 167 168 169 170 171 172 173 174 175 176 177 178 179 180 181 182 183 184 185 186 187 188 189 190 191 192 193 194 195 196 197 198 199 200 201 202 203 204 205 206 207 208 209 210 211 212 213 214 215 216 217 218 219 220 221 222 223 224 225 226 227 228 229 230 231 232 233 234 235 236 237 238 239 240 241 242 243 244 245 246 247 248 249 250 251 252 253 254 255 256 257 258 259 260 261 262 263 264 265 266 267 268 269 270 271 272 273 274 275 276 277 278 279 280 281 282 283 284 285 286 287 288 289 290 291 292 293 294 295 296 297 298 299 300 301 302 303 304 305 306 307 308 309 310 311 312 313 314 315 316 317 318 319 320 321 322 323 324 325 326 327 328 329 330 331 332 333 334 335 336 337 338 339 340 341]
            

            This was because of a failover

            2022-07-29T05:06:54.490-07:00, failover:0:info:message(ns_1@172.23.100.34) - Failed over ['ns_1@172.23.100.34']: ok
            2022-07-29T05:06:54.618-07:00, failover:0:info:message(ns_1@172.23.100.34) - Deactivating failed over nodes ['ns_1@172.23.100.34']
            2022-07-29T05:06:54.678-07:00, ns_orchestrator:0:info:message(ns_1@172.23.100.34) - Failover completed successfully.
            

            g. The indexer tried to roll back using disk snapshots again

            2022-07-29T05:07:05.218-07:00 [Info] StorageMgr::handleRollback 10943164515644793993 latestSnapInfo SnapshotInfo: count:2762877 committed:false lastRollbackTs <nil>. Use latest snapshot.
            2022-07-29T05:07:05.249-07:00 [Info] StorageMgr::handleRollback Rollback Index: 10943164515644793993 PartitionId: 0 SliceId: 0 To Snapshot SnapshotInfo: count:2762877 committed:false
            2022-07-29T05:07:05.254-07:00 [Info] StorageMgr::openSnapshot IndexInst:10943164515644793993 Partition:0 Attempting to open snapshot (SnapshotInfo: count:2762877
            

            h. The indexer got a rollback

            2022-07-29T05:07:11.277-07:00 [Warn] Timekeeper::handleStreamBegin StreamBegin rollback for StreamId MAINT_STREAM Bucket test4 vbucket 280. Rollback (0,     55caddf927ae).
            2022-07-29T05:07:11.277-07:00 [Info] Timekeeper::handleStreamBegin start repairStream. StreamId MAINT_STREAM MutationMeta Bucket: test4 Vbucket: 280 Vbuuid: 94329795848110 Seqno: 0 FirstSnap: false
            ...
            2022-07-29T05:07:13.553-07:00 [Info] Timekeeper::repairStream need rollback for MAINT_STREAM test4 13. Sending Init Prepare.
            

            i. The indexer tried to roll back using the second disk snapshot

            2022-07-29T05:07:24.183-07:00 [Info] StorageMgr::handleRollback 10943164515644793993 Discarding Already Used Snapshot SnapshotInfo: count:2762877 committed:false. Using Next snapshot SnapshotInfo: count:2762877 committed:false
            2022-07-29T05:07:24.213-07:00 [Info] StorageMgr::handleRollback Rollback Index: 10943164515644793993 PartitionId: 0 SliceId: 0 To Snapshot SnapshotInfo: count:2762877 committed:false
            2022-07-29T05:07:24.215-07:00 [Info] StorageMgr::openSnapshot IndexInst:10943164515644793993 Partition:0 Attempting to open snapshot (SnapshotInfo: count:2762877 committed:false)
            

            j. The indexer got a rollback again

            2022-07-29T05:07:35.307-07:00 [Warn] Timekeeper::handleStreamBegin StreamBegin rollback for StreamId MAINT_STREAM Bucket test4 vbucket 280. Rollback (0,     55caddf927ae).
            2022-07-29T05:07:35.307-07:00 [Info] Timekeeper::handleStreamBegin start repairStream. StreamId MAINT_STREAM MutationMeta Bucket: test4 Vbucket: 280 Vbuuid: 94329795848110 Seqno: 0 FirstSnap: false
            ...
            2022-07-29T05:07:13.553-07:00 [Info] Timekeeper::repairStream need rollback for MAINT_STREAM test4 13. Sending Init Prepare.
            

            k. Having used up both disk snapshots, the indexer eventually rolled back to zero

            2022-07-29T05:07:37.490-07:00 [Info] StorageMgr::handleRollback 10943164515644793993 Unable to find a snapshot older than last used Snapshot SnapshotInfo: count:2762877 committed:false. Use nil snapshot.
            2022-07-29T05:07:37.750-07:00 [Info] timekeeper.repairMissingStreamBegin stream MAINT_STREAM done
            2022-07-29T05:07:37.830-07:00 [Info] test4/#primary/Mainstore#10943164515644793993:0 Plasma: Disable page eviction before reaching quota.
            2022-07-29T05:07:37.832-07:00 [Info] StorageMgr::handleRollback Rollback Index: 10943164515644793993 PartitionId: 0 SliceId: 0 To Zero
            2022-07-29T05:07:37.832-07:00 [Info] StorageMgr::rollbackAllToZero MAINT_STREAM test4
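
            In other words, the indexer walks a bounded list of disk snapshots, newest first; each DCP rejection burns one, and when none remain the only restart point left is zero. A minimal sketch of that fallback (assumed names, not the actual StorageMgr code):

            // snapshotInfo mirrors the "SnapshotInfo: count:... committed:..." lines.
            type snapshotInfo struct {
                Count     int64
                Committed bool
            }

            // nextRollbackSnapshot returns the next untried snapshot, or nil once all
            // have been rejected ("Unable to find a snapshot older than last used"),
            // which forces a rollback to zero.
            func nextRollbackSnapshot(snaps []snapshotInfo, used int) *snapshotInfo {
                if used < len(snaps) { // snaps is ordered newest-first
                    return &snaps[used]
                }
                return nil
            }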
            

            l. The reason for the rollback is that vb:280 branched at seqno 0 and the consumer's vbuuid could not be found in the failover table

            2022-07-29T05:07:00.341292-07:00 WARNING 165: (test4) DCP (Producer) eq_dcpq:secidx:proj-test4-MAINT_STREAM_TOPIC_34731e3b41effefed0e0b0f46bd7e5b0-4337353054999753021/2 - (vb:280) Stream request requires rollback to seqno:0 because vBucket UUID not found in failover table, consumer and producer have no common history. Client requested seqnos:{337955,18446744073709551615} snapshot:{337955,337955} uuid:198492124179162
            

            2022-07-29T05:06:52.439431-07:00 INFO (test4) VBucket::setState: transitioning vb:280 with high seqno:337949 from:replica to:active meta:\{"topology":[["ns_1@172.23.106.156",null]]}
            2022-07-29T05:06:52.439463-07:00 INFO (test4) KVBucket::setVBucketState: vb:280 created new failover entry with uuid:255442049597763 and seqno:0
            

            The failover table of vb:280 is as follows:

            vb_280:0:id:                         255442049597763
            vb_280:0:seq:                        0
            vb_280:1:id:                         190810441234889
            vb_280:1:seq:                        0
            vb_280:num_entries:                  2
            vb_280:num_erroneous_entries_erased: 0
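
            Plugging that table into the rollbackPoint sketch from step c makes the outcome concrete (illustrative, self-contained Go; the uuid and seqnos are taken from the vb:280 stream request above):

            package main

            import "fmt"

            // failoverEntry as in the step-c sketch above.
            type failoverEntry struct {
                UUID  uint64
                Seqno uint64
            }

            func main() {
                // The vb:280 failover table from the stats dump above.
                table := []failoverEntry{
                    {UUID: 255442049597763, Seqno: 0}, // vb_280:0
                    {UUID: 190810441234889, Seqno: 0}, // vb_280:1
                }
                reqUUID := uint64(198492124179162) // uuid the consumer presented
                common := false
                for _, e := range table {
                    if e.UUID == reqUUID {
                        common = true
                    }
                }
                fmt.Println("common history:", common) // false -> rollback to seqno 0
            }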
            

            A similar pattern is seen on node 106.163, where bucket test2 rolled back to zero. These patterns match what was seen in MB-53172. Closing this as a duplicate.


            Varun Velamuri added a comment:

            Duplicate of MB-53172


            People

              Assignee: Balakumaran Gopal
              Reporter: Balakumaran Gopal
              Votes: 0
              Watchers: 5

              Dates

                Created:
                Updated:
                Resolved:

                Gerrit Reviews

                  There are no open Gerrit changes
