Details
- Type: Bug
- Resolution: Duplicate
- Priority: Major
- Fix Version/s: Cheshire-Cat
- Affects Version: Couchbase EE 7.0.0-4007
- Triage: Triaged
- Operating System: Centos 64-bit
- 1
- Is this a Regression?: No
Description
Script to Repro
./testrunner -i /tmp/durability_volume.ini rerun=False -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out,nodes_init=3,nodes_failover=1,override_spec_params=durability;replicas,durability=MAJORITY,replicas=2,bucket_spec=dgm.buckets_for_rebalance_tests,data_load_stage=during,dgm_test=True,dgm=45,skip_validations=False,GROUP=durability_majority_dgm
Steps to Repro
1. Create a 3 node cluster
2020-12-15 05:00:10,670 | test | INFO | pool-8-thread-7 | [table_view:display:72] Rebalance Overview
+----------------+----------+--------------+
| Nodes          | Services | Status       |
+----------------+----------+--------------+
| 172.23.105.215 | kv       | Cluster node |
| 172.23.105.217 | None     | <--- IN ---  |
| 172.23.105.219 | None     | <--- IN ---  |
+----------------+----------+--------------+
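For reference, this step can be reproduced by hand against the cluster-manager REST API. A minimal sketch, assuming Administrator/password credentials and the node IPs from the table above (illustrative only, not the testrunner's own code):

import requests

AUTH = ("Administrator", "password")          # assumed credentials
MASTER = "http://172.23.105.215:8091"

# Add the two incoming kv (data) nodes to the cluster.
for node in ("172.23.105.217", "172.23.105.219"):
    requests.post(f"{MASTER}/controller/addNode", auth=AUTH,
                  data={"hostname": node, "user": AUTH[0],
                        "password": AUTH[1],
                        "services": "kv"}).raise_for_status()

# Rebalance so the new nodes take vbuckets. knownNodes must list
# every node by its otpNode name (ns_1@<ip>).
nodes = requests.get(f"{MASTER}/pools/default", auth=AUTH).json()["nodes"]
requests.post(f"{MASTER}/controller/rebalance", auth=AUTH,
              data={"knownNodes": ",".join(n["otpNode"] for n in nodes),
                    "ejectedNodes": ""}).raise_for_status()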
2. Create bucket and initial data load
2020-12-15 05:04:52,933 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+----------+------------+-----+--------+-----------+-----------+-----------+
| Bucket  | Type      | Replicas | Durability | TTL | Items  | RAM Quota | RAM Used  | Disk Used |
+---------+-----------+----------+------------+-----+--------+-----------+-----------+-----------+
| default | couchbase | 2        | none       | 0   | 600000 | 943718400 | 405535312 | 822582812 |
+---------+-----------+----------+------------+-----+--------+-----------+-----------+-----------+
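A sketch of the bucket creation via REST, under the same assumed credentials; the 943718400-byte quota above is exactly 900 MB. Note that the bucket-level durability stays "none": the MAJORITY durability from the test config is applied per write operation, not on the bucket.

import requests

AUTH = ("Administrator", "password")          # assumed credentials

# replicas=2 comes from the test's override_spec_params; the bucket's
# durabilityMinLevel is left at its default ("none", as the table shows).
requests.post("http://172.23.105.215:8091/pools/default/buckets", auth=AUTH,
              data={"name": "default",
                    "bucketType": "couchbase",
                    "ramQuotaMB": 900,        # 943718400 bytes
                    "replicaNumber": 2}).raise_for_status()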
3. Load bucket into dgm
2020-12-15 05:06:13,308 | test | INFO | pool-8-thread-2 | [task:_load_bucket_into_dgm:1981] Active DGM 91.2001263007% Replica DGM 44.0989590839% achieved for 'default'. Loaded docs: 920000
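A rough sketch of what "loading into DGM" means here, assuming the Python SDK 4.x API and the ns_server bucket-stats endpoint: keep upserting documents at MAJORITY durability until the resident-items ratio falls to the dgm=45 target. Per the log line above it is the replica ratio that reaches ~44% (the active ratio stays near 91% with two replicas). Document keys, values, and batch size are made up for illustration.

import requests
from couchbase.auth import PasswordAuthenticator
from couchbase.cluster import Cluster
from couchbase.options import ClusterOptions, UpsertOptions
from couchbase.durability import ServerDurability, Durability

AUTH = ("Administrator", "password")          # assumed credentials
cluster = Cluster("couchbase://172.23.105.215",
                  ClusterOptions(PasswordAuthenticator(*AUTH)))
coll = cluster.bucket("default").default_collection()

def replica_resident_ratio():
    # Latest sample of the bucket's replica resident-items ratio (%).
    stats = requests.get("http://172.23.105.215:8091"
                         "/pools/default/buckets/default/stats",
                         auth=AUTH).json()
    return stats["op"]["samples"]["vb_replica_resident_items_ratio"][-1]

opts = UpsertOptions(durability=ServerDurability(Durability.MAJORITY))
i = 600000                                    # continue past the initial load
while replica_resident_ratio() > 45:          # dgm=45 target
    for _ in range(10000):                    # re-check ratio every 10k docs
        coll.upsert(f"dgm_doc-{i}", {"pad": "x" * 512}, opts)
        i += 1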
4. Graceful failover and rebalance-out a node
+----------------+----------+--------------+
| Nodes          | Services | Status       |
+----------------+----------+--------------+
| 172.23.105.215 | kv       | Cluster node |
| 172.23.105.217 | kv       | Cluster node |
| 172.23.105.219 | [u'kv']  | --- OUT ---> |
+----------------+----------+--------------+
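Step 4 by hand, again via REST with the assumed credentials: gracefully fail the node over, wait for the failover task to finish, then rebalance it out.

import requests

AUTH = ("Administrator", "password")          # assumed credentials
MASTER = "http://172.23.105.215:8091"

nodes = requests.get(f"{MASTER}/pools/default", auth=AUTH).json()["nodes"]
otp = {n["hostname"].split(":")[0]: n["otpNode"] for n in nodes}

# Graceful failover promotes replicas only once they are fully in sync.
requests.post(f"{MASTER}/controller/startGracefulFailover", auth=AUTH,
              data={"otpNode": otp["172.23.105.219"]}).raise_for_status()

# ... poll /pools/default/tasks until the failover task completes ...

# Rebalance the failed-over node out of the cluster.
requests.post(f"{MASTER}/controller/rebalance", auth=AUTH,
              data={"knownNodes": ",".join(otp.values()),
                    "ejectedNodes": otp["172.23.105.219"]}).raise_for_status()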
The failover and rebalance-out complete as expected, but the test then fails because these CRITICAL messages appear in memcached.log:
2020-12-15 05:09:57,618 | test | CRITICAL | MainThread | [basetestcase:check_coredump_exist:784] 172.23.105.215: Found 'CRITICAL' logs - 2020-12-15T05:09:53.908869-08:00 CRITICAL (default) CouchKVStore::maybePatchOnDiskPrepares(): According to _local/vbstate for vb:45 there should be 0 prepares, but we just purged 571
2020-12-15T05:09:53.966756-08:00 CRITICAL (default) CouchKVStore::maybePatchOnDiskPrepares(): According to _local/vbstate for vb:43 there should be 0 prepares, but we just purged 582
2020-12-15T05:09:53.991108-08:00 CRITICAL (default) CouchKVStore::maybePatchOnDiskPrepares(): According to _local/vbstate for vb:48 there should be 0 prepares, but we just purged 636
2020-12-15T05:09:54.039472-08:00 CRITICAL (default) CouchKVStore::maybePatchOnDiskPrepares(): According to _local/vbstate for vb:55 there should be 0 prepares, but we just purged 570
2020-12-15T05:09:55.190911-08:00 CRITICAL (default) CouchKVStore::maybePatchOnDiskPrepares(): According to _local/vbstate for vb:401 there should be 0 prepares, but we just purged 494
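The failure itself comes from the framework's post-test log scan (basetestcase:check_coredump_exist). A minimal local equivalent, assuming the default Linux log directory and allowing for rotated log files:

import glob
import re

LOG_GLOB = "/opt/couchbase/var/lib/couchbase/logs/memcached.log*"

def critical_lines():
    # Collect every CRITICAL entry across current and rotated logs.
    hits = []
    for path in sorted(glob.glob(LOG_GLOB)):
        with open(path, errors="replace") as f:
            hits.extend(l.rstrip() for l in f
                        if re.search(r"\bCRITICAL\b", l))
    return hits

for line in critical_lines():
    print(line)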
Issue Links
- duplicates MB-43403: Mismatch in on-disk prepares before / after compaction (Closed)