cluster details
172.23.136.136 - data, backup
172.23.136.187 - data, index
172.23.136.189 - data, query
172.23.136.190 - data, search
steps
1. create a 4 node cluster
2. enable autofailover with a 30-second timeout and maxCount=1
3. stop the couchbase-server service on 3 nodes: 172.23.136.136, 172.23.136.189, 172.23.136.190
4. wait for the timeout period, then bring the nodes back up
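Step 2 corresponds to the /settings/autoFailover REST endpoint. A minimal sketch of building that request body (the parameter names enabled/timeout/maxCount follow the Couchbase REST API; the host and credentials in the comment are placeholders, not taken from this report):

```python
from urllib.parse import urlencode

def autofailover_body(timeout_s: int, max_count: int) -> str:
    """Form body for POST /settings/autoFailover.

    Parameter names (enabled, timeout, maxCount) are per the Couchbase
    REST API; timeout is in seconds.
    """
    return urlencode({
        "enabled": "true",
        "timeout": timeout_s,
        "maxCount": max_count,
    })

body = autofailover_body(30, 1)
# Equivalent curl (credentials/host are placeholders):
#   curl -u Administrator:password -X POST \
#        http://172.23.136.187:8091/settings/autoFailover \
#        -d 'enabled=true&timeout=30&maxCount=1'
```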
log timeline
[ns_server:info,2024-02-06T21:52:18.461-08:00,ns_1@172.23.136.187:<0.28357.1>:leader_lease_acquire_worker:handle_fresh_lease_acquired:296]Acquired lease from node 'ns_1@172.23.136.136' (lease uuid: <<"1012cdcf11726c7014912a9cabb41314">>)

[ns_server:warn,2024-02-06T21:52:19.826-08:00,ns_1@172.23.136.187:<0.24825.0>:tombstone_purger:check:66]Tombstone purge failed {error,no_quorum}

[user:info,2024-02-06T21:52:20.030-08:00,ns_1@172.23.136.187:<0.28272.1>:failover:orchestrate:163]Starting failing over ['ns_1@172.23.136.136']

[ns_server:info,2024-02-06T21:52:20.030-08:00,ns_1@172.23.136.187:<0.28272.1>:failover:pre_failover_config_sync:210]Going to sync with chronicle quorum

[user:info,2024-02-06T21:52:20.030-08:00,ns_1@172.23.136.187:<0.24820.0>:ns_orchestrator:handle_start_failover:1861]Starting failover of nodes ['ns_1@172.23.136.136'] AllowUnsafe = false Operation Id = 6ff817a2e651ab432bb913974cf37630

[ns_server:info,2024-02-06T21:52:20.030-08:00,ns_1@172.23.136.187:mb_master<0.6190.0>:mb_master:master:436]Surrendering mastership to 'ns_1@172.23.136.189'
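The entries above share one layout: [facility:level,timestamp,node:context]message. A small sketch that pulls the failover timeline out of such lines (the regex and field names are my own, derived only from the sample entries above, not from any ns_server specification):

```python
import re

# Matches e.g. [user:info,2024-02-06T21:52:20.030-08:00,ns_1@172.23.136.187:<0.28272.1>:failover:orchestrate:163]message
LOG_RE = re.compile(
    r"\[(?P<facility>\w+):(?P<level>\w+),"  # user:info / ns_server:warn
    r"(?P<ts>[^,]+),"                        # ISO-8601 timestamp with offset
    r"(?P<node>[^:]+):"                      # reporting node, ns_1@<ip>
    r"(?P<context>[^\]]+)\]"                 # pid:module:function:line
    r"(?P<msg>.*)"                           # free-form message
)

line = ("[user:info,2024-02-06T21:52:20.030-08:00,"
        "ns_1@172.23.136.187:<0.28272.1>:failover:orchestrate:163]"
        "Starting failing over ['ns_1@172.23.136.136']")

m = LOG_RE.match(line)
if m:
    # e.g. print only failover-related events from a log stream
    print(m.group("ts"), m.group("node"), m.group("msg"))
```

Filtering a full ns_server log through this regex for messages containing "failover" reproduces the timeline above.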