Details
- Type: Bug
- Resolution: Duplicate
- Priority: Critical
- Fix Version: Cheshire-Cat
- 6.6.2-9588 → 7.0.0-4979
- Triage: Untriaged
- Environment: Centos 64-bit
- 1
- No
Description
Steps to Repro
1. Start a 6.6.2 system test longevity run.
2. The cluster has the following setup:
   * 9 data nodes
   * 3 analytics nodes
   * 3 eventing nodes
   * 4 indexing nodes
   * 3 search nodes
   * 3 query nodes
3. It has 10 buckets, fts indexes, analytics datasets, 2i indexes, and eventing functions.
4. Do a swap rebalance of 6 nodes (1 data, 1 index, 1 analytics, 1 fts, 1 query, 1 eventing) running 6.6.2-9588, replacing them with 7.0.0-4979 nodes. This works fine.
5. Fail over one 6.6.2-9588 fts node - 172.23.106.207.
6. Fail over one 6.6.2-9588 n1ql node - 172.23.106.191.
7. Now try to gracefully fail over one more 6.6.2-9588 node - 172.23.105.90.
This fails with the following error. See the rebalance report - rebalanceReport.json
172.23.104.244 - 7:32:48 AM 19 Apr, 2021
Graceful failover exited with reason {mover_crashed,
    {unexpected_exit,
        {'EXIT',<0.16337.225>,
            {failed_to_update_vbucket_map,
                "WAREHOUSE",890,
                {error,
                    [{'ns_1@172.23.106.225',
                        timeout}]}}}}}.
Rebalance Operation Id = b32512cbb7f83c6d604f696530dc38fa
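The key facts are buried in the nested Erlang term: which bucket, which vbucket, and which node timed out. A minimal Python sketch that pulls those out of the log text (the regexes here are illustrative helpers, not anything ns_server provides):

```python
import re

# Error text as reported in the rebalance failure above.
log = """Graceful failover exited with reason {mover_crashed,
 {unexpected_exit,
  {'EXIT',<0.16337.225>,
   {failed_to_update_vbucket_map,
    "WAREHOUSE",890,
    {error,
     [{'ns_1@172.23.106.225',
       timeout}]}}}}}."""

# Bucket name and vbucket id follow the failed_to_update_vbucket_map atom.
m = re.search(r'failed_to_update_vbucket_map,\s*"([^"]+)",(\d+)', log)
bucket, vbucket = m.group(1), int(m.group(2))

# The node whose vbucket-map update timed out.
node = re.search(r"\{'(ns_1@[\d.]+)',\s*timeout\}", log).group(1)

print(bucket, vbucket, node)
```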
cbcollect_info attached. This is the first time we are running this test; it is essentially an upgrade of the system test cluster.
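For reference, the failover sequence in steps 5-7 can be driven through ns_server's REST API (`/controller/failOver` for a hard failover, `/controller/startGracefulFailover` for a graceful one). A dry-run sketch that only prints the curl invocations; the credentials and the choice of hard failover for steps 5-6 are assumptions:

```shell
failover_cmds() {
  CLUSTER="172.23.104.244:8091"   # orchestrator node; placeholder
  AUTH="Administrator:password"   # placeholder credentials

  # Steps 5-6: fail over the fts and n1ql nodes (assumed hard failover).
  for NODE in 172.23.106.207 172.23.106.191; do
    echo "curl -u $AUTH -X POST http://$CLUSTER/controller/failOver -d otpNode=ns_1@$NODE"
  done

  # Step 7: graceful failover of 172.23.105.90, the step that fails here.
  echo "curl -u $AUTH -X POST http://$CLUSTER/controller/startGracefulFailover -d otpNode=ns_1@172.23.105.90"
}

failover_cmds
```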
Attachments
Issue Links
- is duplicated by
  - MB-45769: Rebalance repeatedly fails during upgrade with Rebalance exited with reason {pre_rebalance_janitor_run_failed,"DISTRICT", {error, {config_sync_failed,push, (Closed)