Details
- Type: Bug
- Resolution: Duplicate
- Priority: Major
- Affects Version: 7.1.0
- Build: 7.1.0-1029-enterprise
- Triage: Untriaged
- Operating System: Centos 64-bit
- 1
- Is this a Regression?: No
- Sprint: Magma-July-19-2021
Description
Script to Repro
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops.ini rerun=False,get-cbcollect-info=True,quota_percent=95,crash_warning=True,bucket_storage=magma,enable_dp=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery,nodes_init=5,nodes_failover=2,recovery_type=full,bucket_spec=multi_bucket.buckets_for_rebalance_tests,data_load_stage=during,skip_validations=False,GROUP=P0_failover_and_recovery'
Steps to Repro
1. Create a 5 node cluster.
2021-06-20 12:53:17,558 | test | INFO | pool-14-thread-6 | [table_view:display:72] Rebalance Overview
------------------------------------------------------------------------------------------
| Nodes          | Services | Version               | CPU           | Status       |
------------------------------------------------------------------------------------------
| 172.23.98.196  | kv       | 7.1.0-1029-enterprise | 1.03378719112 | Cluster node |
| 172.23.98.195  | None     |                       |               | <--- IN ---  |
| 172.23.121.10  | None     |                       |               | <--- IN ---  |
| 172.23.104.186 | None     |                       |               | <--- IN ---  |
| 172.23.120.201 | None     |                       |               | <--- IN ---  |
------------------------------------------------------------------------------------------
2. Create bucket/scope/collections/data
2021-06-20 12:57:28,921 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
-------------------------------------------------------------------------------------
Bucket | Type | Replicas | Durability | TTL | Items | RAM Quota | RAM Used | Disk Used |
-------------------------------------------------------------------------------------
TgO1z-27-919000 | couchbase | 2 | none | 0 | 3000000 | 10485760000 | 2579109024 | 8037956876 |
-------------------------------------------------------------------------------------
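The bucket above (couchbase type, 2 replicas, durability "none", magma backend per `bucket_storage=magma` in the repro command) corresponds to Couchbase's bucket-creation REST endpoint. A minimal sketch, assuming the public `POST /pools/default/buckets` form parameters; the helper function itself is hypothetical, not part of the test framework:

```python
# Illustrative sketch: build the form body for Couchbase's bucket-creation
# REST endpoint (POST /pools/default/buckets), mirroring the bucket in step 2.
# Parameter names follow the public REST API; the helper is an assumption.

def build_bucket_payload(name, ram_quota_bytes, replicas=2, storage="magma"):
    """Return form parameters for creating a couchbase-type bucket."""
    return {
        "name": name,
        "bucketType": "couchbase",
        "storageBackend": storage,           # magma, per bucket_storage=magma
        "replicaNumber": replicas,           # 2 replicas, as in the table above
        "ramQuotaMB": ram_quota_bytes // (1024 * 1024),  # REST takes MB, table shows bytes
        "durabilityMinLevel": "none",        # Durability column shows "none"
    }

payload = build_bucket_payload("TgO1z-27-919000", 10485760000)
print(payload["ramQuotaMB"])  # 10000
```

The 10485760000-byte RAM quota from the table converts to the 10000 MB value the REST API expects.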
3. Do a graceful failover of 2 nodes (172.23.104.186 and 172.23.120.201).
2021-06-20 12:57:36,743 | test | INFO | MainThread | [collections_rebalance:rebalance_operation:388] Starting rebalance operation of type : graceful_failover_recovery
2021-06-20 12:57:36,743 | test | INFO | MainThread | [collections_rebalance:rebalance_operation:679] failing over nodes [ip:172.23.104.186 port:8091 ssh_username:root, ip:172.23.120.201 port:8091 ssh_username:root]
2021-06-20 13:03:49,029 | test | WARNING | MainThread | [rest_client:get_nodes:1756] 172.23.104.186 - Node not part of cluster inactiveFailed
2021-06-20 13:03:49,029 | test | WARNING | MainThread | [rest_client:get_nodes:1756] 172.23.120.201 - Node not part of cluster inactiveFailed
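The failover in step 3 maps onto Couchbase's `POST /controller/startGracefulFailover` REST endpoint, which identifies nodes by their `ns_1@<ip>` otpNode names (visible in the UI logs below). A minimal sketch; sending the two nodes as repeated `otpNode` form fields is an assumption, and the helper is illustrative rather than the test framework's code:

```python
# Illustrative sketch: request parameters for a graceful failover via
# Couchbase's REST API (POST /controller/startGracefulFailover).
# otpNode names take the ns_1@<ip> form seen in ns_server logs.

def graceful_failover_request(ips):
    return {
        "path": "/controller/startGracefulFailover",
        # assumption: multiple nodes are sent as repeated otpNode form fields
        "params": [("otpNode", "ns_1@%s" % ip) for ip in ips],
    }

req = graceful_failover_request(["172.23.104.186", "172.23.120.201"])
print(req["path"])  # /controller/startGracefulFailover
```

After the failover completes, both nodes show up as `inactiveFailed`, which is what the two warnings below are reporting.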
4. Do a full recovery of both the nodes + rebalance.
2021-06-20 13:04:50,586 | test | INFO | pool-14-thread-12 | [table_view:display:72] Rebalance Overview
-----------------------------------------------------------------------
Nodes | Services | Version | CPU | Status |
-----------------------------------------------------------------------
172.23.98.196 | kv | 7.1.0-1029-enterprise | 9.11630269172 | Cluster node |
172.23.98.195 | kv | 7.1.0-1029-enterprise | 8.6032388664 | Cluster node |
172.23.104.186 | kv | 7.1.0-1029-enterprise | 0.978670012547 | Cluster node |
172.23.120.201 | kv | 7.1.0-1029-enterprise | 0.652446675031 | Cluster node |
172.23.121.10 | kv | 7.1.0-1029-enterprise | 68.0910240202 | Cluster node |
-----------------------------------------------------------------------
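Step 4 corresponds to two REST calls per Couchbase's public API: mark each failed-over node for full recovery (`POST /controller/setRecoveryType` with `recoveryType=full`), then start the rebalance (`POST /controller/rebalance`, which requires the full `knownNodes` list). A minimal sketch under those assumptions; the helper is hypothetical:

```python
# Illustrative sketch of step 4 as a sequence of Couchbase REST calls:
# full recovery of the failed-over nodes, then a rebalance over all nodes.

def recovery_and_rebalance(failed_ips, all_ips):
    otp = lambda ip: "ns_1@%s" % ip
    # one setRecoveryType call per failed-over node
    recover = [
        {"path": "/controller/setRecoveryType",
         "params": {"otpNode": otp(ip), "recoveryType": "full"}}
        for ip in failed_ips
    ]
    # rebalance takes comma-separated otpNode lists; nothing is ejected here
    rebalance = {
        "path": "/controller/rebalance",
        "params": {"knownNodes": ",".join(otp(ip) for ip in all_ips),
                   "ejectedNodes": ""},
    }
    return recover + [rebalance]

calls = recovery_and_rebalance(
    ["172.23.104.186", "172.23.120.201"],
    ["172.23.98.196", "172.23.98.195", "172.23.121.10",
     "172.23.104.186", "172.23.120.201"])
print(len(calls))  # 3
```

It is this final rebalance call that fails below with `bad_replicas`.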
Rebalance fails as shown below:
2021-06-20 13:08:02,947 | test | ERROR | pool-14-thread-12 | [rest_client:_rebalance_status_and_progress:1548] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'b6cfe0428ab0e839c75d6bfdb66b9db0', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=7215b92b5b18997d7995d0ec3c471f11', u'status': u'notRunning'} - rebalance failed
2021-06-20 13:08:02,996 | test | INFO | pool-14-thread-12 | [rest_client:print_UI_logs:2694] Latest logs from UI on 172.23.98.196:
2021-06-20 13:08:02,996 | test | ERROR | pool-14-thread-12 | [rest_client:print_UI_logs:2696] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.98.196', u'tstamp': 1624219677478L, u'shortText': u'message', u'serverTime': u'2021-06-20T13:07:57.478Z', u'text': u'Rebalance exited with reason bad_replicas.\nRebalance Operation Id = 876afb4ba3c08274d0c4c6786036d1e2'}
2021-06-20 13:08:02,997 | test | ERROR | pool-14-thread-12 | [rest_client:print_UI_logs:2696] {u'code': 2, u'module': u'ns_rebalancer', u'type': u'info', u'node': u'ns_1@172.23.98.196', u'tstamp': 1624219677475L, u'shortText': u'message', u'serverTime': u'2021-06-20T13:07:57.475Z', u'text': u"Bad replicators after rebalance:\nMissing = [{'ns_1@172.23.104.186','ns_1@172.23.98.196',92}]\nExtras = []"}
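The "Bad replicators" message above lists replication streams ns_server expected but did not find after the rebalance; each `{...}` entry appears to be an Erlang tuple of {source node, destination node, vbucket id}, so the missing stream here is vbucket 92 from 172.23.104.186 to 172.23.98.196. A small, illustrative parser for that message format (the tuple interpretation is an assumption from the log, not a documented schema):

```python
import re

# Illustrative sketch: extract the missing replication streams from an
# ns_rebalancer "Bad replicators" message. Each {'src','dst',vb} entry is
# read as (source node, destination node, vbucket id) - an assumption.

def parse_missing_replicators(text):
    streams = []
    for src, dst, vb in re.findall(r"\{'([^']+)','([^']+)',(\d+)\}", text):
        streams.append((src, dst, int(vb)))
    return streams

msg = ("Bad replicators after rebalance:\n"
       "Missing = [{'ns_1@172.23.104.186','ns_1@172.23.98.196',92}]\n"
       "Extras = []")
print(parse_missing_replicators(msg))
# [('ns_1@172.23.104.186', 'ns_1@172.23.98.196', 92)]
```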
cbcollect_info is attached. The same failure was observed on 4-5 tests in the weekly run.
Attachments
Issue Links
- is caused by: MB-47106 [Magma] - Non-negative counter exception in setBackfillRemaining_UNLOCKED (Closed)