guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /data/workspace/debian-p0-durability-vset00-00-auto_fail_over_6.5_P0_persist_active/testexec.874.ini durability=MAJORITY_AND_PERSIST_TO_ACTIVE,num_items=50000,GROUP=P0;durability,get-cbcollect-info=True,upgrade_version=7.6.0-1525,sirius_url=http://172.23.120.103:4000 -t failover.AutoFailoverTests.AutoFailoverTests.test_autofailover,timeout=5,num_node_failures=1,nodes_init=4,failover_action=restart_network,nodes_init=3,replicas=2,GROUP=P0;durability;luks'

Test Input params: {'conf_file': 'conf/failover/py-autofailover.conf', 'upgrade_version': '7.6.0-1525', 'timeout': '5', 'spec': 'py-autofailover', 'num_nodes': 7, 'rerun': False, 'GROUP': 'P0;durability', 'case_number': 6, 'cluster_name': 'testexec.874', 'ini': '/data/workspace/debian-p0-durability-vset00-00-auto_fail_over_6.5_P0_persist_active/testexec.874.ini', 'get-cbcollect-info': 'True', 'replicas': '2', 'durability': 'MAJORITY_AND_PERSIST_TO_ACTIVE', 'failover_action': 'restart_network', 'logs_folder': '/data/workspace/debian-p0-durability-vset00-00-auto_fail_over_6.5_P0_persist_active/logs/testrunner-23-Sep-20_00-26-33/test_6', 'nodes_init': '3', 'num_items': '50000', 'sirius_url': 'http://172.23.120.103:4000', 'num_node_failures': '1'}

test_autofailover (failover.AutoFailoverTests.AutoFailoverTests) ...
2023-09-20 01:46:47,282 | test | INFO | MainThread | [cb_basetest:log_setup_status:212] ========= AutoFailoverTests setup started for test #6 test_autofailover =========
2023-09-20 01:46:50,437 | test | INFO | MainThread | [cb_basetest:log_setup_status:212] ========= OnPremBaseTest setup started for test #6 test_autofailover =========
2023-09-20 01:46:50,437 | test | INFO | MainThread | [onPrem_basetestcase:setUp:245] Delete all buckets and rebalance out other nodes from 'C1'
2023-09-20 01:47:07,016 | test | INFO | MainThread | [rest_client:monitorRebalance:1339] Rebalance done. Taken 3.33999991417 seconds to complete
2023-09-20 01:47:07,016 | test | INFO | MainThread | [common_lib:sleep:20] Sleep 10 seconds. Reason: Wait after rebalance complete
2023-09-20 01:47:17,832 | test | INFO | MainThread | [onPrem_basetestcase:initialize_cluster:396] Initializing cluster : C1
2023-09-20 01:47:48,157 | infra | CRITICAL | pool-11-thread-29 | [task:call:7193] mcdMemoryReserved reported in nodes/self is: 3118
2023-09-20 01:47:48,177 | test | CRITICAL | pool-11-thread-29 | [table_view:display:72] Memory quota allocated:
+---------------------+---------+
| Service             | RAM MiB |
+---------------------+---------+
| eventingMemoryQuota | 256     |
| cbasMemoryQuota     | 1024    |
| indexMemoryQuota    | 256     |
| ftsMemoryQuota      | 256     |
| memoryQuota         | 3018    |
+---------------------+---------+
2023-09-20 01:47:49,744 | test | WARNING | MainThread | [onPrem_basetestcase:_initialize_nodes:707] RAM quota was defined less than 100 MB:
2023-09-20 01:48:01,108 | test | INFO | MainThread | [onPrem_basetestcase:initialize_cluster:452] Cluster C1 initialized
2023-09-20 01:48:03,418 | test | INFO | MainThread | [onPrem_basetestcase:enable_tls_on_nodes:510] Validating if services obey tls only on servers [ip:172.23.97.200 port:8091 ssh_username:root, ip:172.23.97.199 port:8091 ssh_username:root, ip:172.23.121.117 port:8091 ssh_username:root, ip:172.23.104.231 port:8091 ssh_username:root, ip:172.23.105.168 port:8091 ssh_username:root, ip:172.23.107.43 port:8091 ssh_username:root, ip:172.23.121.255 port:8091 ssh_username:root]
2023-09-20 01:48:03,835 | test | INFO | MainThread | [common_lib:sleep:20] Sleep 120 seconds. Reason: waiting after enabling TLS
2023-09-20 01:50:04,190 | test | INFO | MainThread | [cb_basetest:log_setup_status:212] ========= OnPremBaseTest setup finished for test #6 test_autofailover =========
2023-09-20 01:50:04,477 | test | INFO | MainThread | [cb_basetest:log_setup_status:212] ========= OnPremBaseTest setup finished for test #6 test_autofailover =========
2023-09-20 01:50:04,479 | test | INFO | MainThread | [cb_basetest:log_setup_status:212] ========= ClusterSetup setup started for test #6 test_autofailover =========
2023-09-20 01:50:21,890 | test | CRITICAL | pool-11-thread-7 | [cluster_ready_functions:validate_orchestrator_selection:292] Orchestrator: 172.23.97.200
2023-09-20 01:50:22,244 | test | INFO | pool-11-thread-7 | [table_view:display:72] Rebalance Overview
+----------------+---------+----------+---------------------------------+----------------+--------------+-----------------------+
| Nodes          | Zone    | Services | Version / Config                | CPU            | Status       | Membership / Recovery |
+----------------+---------+----------+---------------------------------+----------------+--------------+-----------------------+
| 172.23.97.200  | Group 1 | kv       | 7.6.0-1525-enterprise / default | 0.478227038883 | Cluster node | active / none         |
| 172.23.97.199  | None    | kv       |                                 |                | <--- IN ---  |                       |
| 172.23.121.117 | None    | kv       |                                 |                | <--- IN ---  |                       |
+----------------+---------+----------+---------------------------------+----------------+--------------+-----------------------+
2023-09-20 01:50:27,674 | test | INFO | pool-11-thread-7 | [task:check:832] Rebalance - status: none, progress: 100
2023-09-20 01:50:27,822 | test | INFO | pool-11-thread-7 | [task:check:891] Rebalance completed with progress: 100% in 5.58299994469 sec
2023-09-20 01:50:28,417 | test | CRITICAL | pool-11-thread-7 | [cluster_ready_functions:validate_orchestrator_selection:292] Orchestrator: 172.23.97.200
2023-09-20 01:50:28,417 | infra | CRITICAL | pool-11-thread-7 | [task:print_nodes:622] Cluster nodes..: ['172.23.121.117:18091', '172.23.97.200:18091', '172.23.97.199:18091']
2023-09-20 01:50:28,417 | infra | CRITICAL | pool-11-thread-7 | [task:print_nodes:622] KV............: ['172.23.121.117:18091', '172.23.97.200:18091', '172.23.97.199:18091']
2023-09-20 01:50:29,838 | test | INFO | MainThread | [table_view:display:72] Cluster statistics
+----------------+---------+----------+--------+-----------+----------+------------------------+-------------------+---------------------------------+
| Nodes          | Zone    | Services | CPU    | Mem_total | Mem_free | Swap_mem_used          | Active / Replica  | Version / Config                |
+----------------+---------+----------+--------+-----------+----------+------------------------+-------------------+---------------------------------+
| 172.23.97.199  | Group 1 | kv       | 0.4483 | 3.81 GiB  | 3.01 GiB | 13.25 MiB / 976.00 MiB | 0 / 0             | 7.6.0-1525-enterprise / default |
| 172.23.97.200  | Group 1 | kv       | 3.6319 | 3.81 GiB  | 2.96 GiB | 21.13 MiB / 976.00 MiB | 0 / 0             | 7.6.0-1525-enterprise / default |
| 172.23.121.117 | Group 1 | kv       | 0.4005 | 3.81 GiB  | 3.03 GiB | 13.50 MiB / 976.00 MiB | 0 / 0             | 7.6.0-1525-enterprise / default |
+----------------+---------+----------+--------+-----------+----------+------------------------+-------------------+---------------------------------+
2023-09-20 01:50:29,839 | test | INFO | MainThread | [cb_basetest:log_setup_status:212] ========= ClusterSetup setup complete for test #6 test_autofailover =========
2023-09-20 01:50:34,783 | test | INFO | MainThread | [common_lib:sleep:20] Sleep 5 seconds. Reason: Wait for bucket to accept SDK connections
2023-09-20 01:52:22,374 | test | INFO | pool-11-thread-3 | [table_view:display:72] Ops trend for bucket 'default'
+-------+-----------------------------------------------------------------------------------------------------------------------+----------+
| Min   | Trend                                                                                                                 | Max      |
+-------+-----------------------------------------------------------------------------------------------------------------------+----------+
| 0.000 | *..******.............................................................**...............................******......*X | 2295.500 |
+-------+-----------------------------------------------------------------------------------------------------------------------+----------+
2023-09-20 01:52:23,696 | test | INFO | MainThread | [table_view:display:72] Cluster statistics
+----------------+---------+----------+--------+-----------+----------+------------------------+-------------------+---------------------------------+
| Nodes          | Zone    | Services | CPU    | Mem_total | Mem_free | Swap_mem_used          | Active / Replica  | Version / Config                |
+----------------+---------+----------+--------+-----------+----------+------------------------+-------------------+---------------------------------+
| 172.23.97.199  | Group 1 | kv       | 55.057 | 3.81 GiB  | 2.76 GiB | 13.25 MiB / 976.00 MiB | 14451 / 28901     | 7.6.0-1525-enterprise / default |
| 172.23.97.200  | Group 1 | kv       | 56.081 | 3.81 GiB  | 2.71 GiB | 21.13 MiB / 976.00 MiB | 8401 / 16831      | 7.6.0-1525-enterprise / default |
| 172.23.121.117 | Group 1 | kv       | 44.068 | 3.81 GiB  | 2.79 GiB | 13.50 MiB / 976.00 MiB | 14514 / 29055     | 7.6.0-1525-enterprise / default |
+----------------+---------+----------+--------+-----------+----------+------------------------+-------------------+---------------------------------+
2023-09-20 01:52:24,424 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
| Bucket  | Type      | Storage | Replicas | Durability | TTL | Items | Vbuckets | RAM Quota | RAM Used   | Disk Used  | ARR |
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
| default | couchbase | magma   | 2        | none       | 0   | 37366 | 1024     | 8.84 GiB  | 449.49 MiB | 392.96 MiB | 100 |
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
2023-09-20 01:52:29,726 | test | INFO | MainThread | [common_lib:sleep:20] Sleep 5 seconds. Reason: Wait after enabling auto-failover/auto-reprovision
2023-09-20 01:53:54,980 | test | INFO | MainThread | [AutoFailoverTests:test_autofailover:125] Inducing failure restart_network on nodes [ip:172.23.97.199 port:18091 ssh_username:root]
2023-09-20 01:54:26,513 | test | ERROR | pool-11-thread-10 | [task:call:2211] Failed to load 13 docs from 50000 to 56250
2023-09-20 01:54:26,513 | test | ERROR | pool-11-thread-10 | [task:call:2211] Failed to load 9 docs from 68750 to 75000
2023-09-20 01:54:26,697 | test | ERROR | pool-11-thread-10 | [task:call:2211] Failed to load 9 docs from 75000 to 81250
2023-09-20 01:54:26,698 | test | ERROR | pool-11-thread-10 | [task:call:2211] Failed to load 14 docs from 93750 to 100000
2023-09-20 01:54:27,302 | test | INFO | pool-11-thread-18 | [table_view:display:72] Ops trend for bucket 'default'
+-------+-----------------------------------------------------------------------------------------------------------------------+----------+
| Min   | Trend                                                                                                                 | Max      |
+-------+-----------------------------------------------------------------------------------------------------------------------+----------+
| 0.000 | ***......................**................******...................................**...........................***X | 2196.100 |
+-------+-----------------------------------------------------------------------------------------------------------------------+----------+
2023-09-20 01:54:45,700 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Killed old client process
2023-09-20 01:54:45,700 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Internet Systems Consortium DHCP Client 4.4.1
2023-09-20 01:54:45,701 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Copyright 2004-2018 Internet Systems Consortium.
2023-09-20 01:54:45,703 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] All rights reserved.
2023-09-20 01:54:45,703 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] For info, please visit https://www.isc.org/software/dhcp/
2023-09-20 01:54:45,703 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957]
2023-09-20 01:54:45,703 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Listening on LPF/eth0/d2:9a:d6:01:72:40
2023-09-20 01:54:45,703 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Sending on LPF/eth0/d2:9a:d6:01:72:40
2023-09-20 01:54:45,704 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Sending on Socket/fallback
2023-09-20 01:54:45,704 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] DHCPRELEASE of 172.23.97.199 on eth0 to 172.23.201.5 port 67
2023-09-20 01:54:45,706 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Internet Systems Consortium DHCP Client 4.4.1
2023-09-20 01:54:45,706 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Copyright 2004-2018 Internet Systems Consortium.
2023-09-20 01:54:45,707 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] All rights reserved.
2023-09-20 01:54:45,707 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] For info, please visit https://www.isc.org/software/dhcp/
2023-09-20 01:54:45,707 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957]
2023-09-20 01:54:45,707 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Listening on LPF/eth0/d2:9a:d6:01:72:40
2023-09-20 01:54:45,707 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Sending on LPF/eth0/d2:9a:d6:01:72:40
2023-09-20 01:54:45,709 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] Sending on Socket/fallback
2023-09-20 01:54:45,709 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] DHCPDISCOVER on eth0 to 255.255.255.255 port 67 interval 4
2023-09-20 01:54:45,709 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] DHCPOFFER of 172.23.97.199 from 172.23.96.2
2023-09-20 01:54:45,710 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] DHCPREQUEST for 172.23.97.199 on eth0 to 255.255.255.255 port 67
2023-09-20 01:54:45,710 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] DHCPACK of 172.23.97.199 from 172.23.96.2
2023-09-20 01:54:45,710 | infra | ERROR | pool-11-thread-13 | [remote_util:log_command_output:2957] bound to 172.23.97.199 -- renewal in 310050 seconds.
2023-09-20 01:54:46,145 | test | INFO | pool-11-thread-13 | [rest_client:print_UI_logs:2602] Latest logs from UI on 172.23.97.200:
2023-09-20 01:54:46,147 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'auto_failover', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695200048408L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:54:08.408Z', u'text': u"Node ('ns_1@172.23.97.199') was automatically failed over. Reason: All monitors report node is unhealthy."}
2023-09-20 01:54:46,148 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695200048358L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:54:08.358Z', u'text': u'Failover completed successfully.\nRebalance Operation Id = 42de3e8bf56f99899545dc82db92b0b8'}
2023-09-20 01:54:46,148 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'failover', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695200048232L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:54:08.232Z', u'text': u"Deactivating failed over nodes ['ns_1@172.23.97.199']"}
2023-09-20 01:54:46,148 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'failover', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695200046228L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:54:06.228Z', u'text': u"Failed over ['ns_1@172.23.97.199']: ok"}
2023-09-20 01:54:46,148 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695200045677L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:54:05.677Z', u'text': u"Starting failover of nodes ['ns_1@172.23.97.199'] AllowUnsafe = false Operation Id = 42de3e8bf56f99899545dc82db92b0b8"}
2023-09-20 01:54:46,150 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'failover', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695200045677L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:54:05.677Z', u'text': u"Starting failing over ['ns_1@172.23.97.199']"}
2023-09-20 01:54:46,150 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'auto_failover', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695199944567L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:52:24.567Z', u'text': u'Enabled auto-failover with timeout 5 and max count 1'}
2023-09-20 01:54:46,150 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 4, u'module': u'ns_node_disco', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695199892717L, u'shortText': u'node up', u'serverTime': u'2023-09-20T01:51:32.717Z', u'text': u"Node 'ns_1@172.23.97.200' saw that node 'ns_1@172.23.121.117' came up. Tags: [] (repeated 1 times, last seen 73.73992 secs ago)"}
2023-09-20 01:54:46,151 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 0, u'module': u'memcached_config_mgr', u'type': u'info', u'node': u'ns_1@172.23.97.199', u'tstamp': 1695199871135L, u'shortText': u'message', u'serverTime': u'2023-09-20T01:51:11.135Z', u'text': u'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>] (repeated 1 times, last seen 52.96311 secs ago)'}
2023-09-20 01:54:46,151 | test | ERROR | pool-11-thread-13 | [rest_client:print_UI_logs:2604] {u'code': 4, u'module': u'ns_node_disco', u'type': u'info', u'node': u'ns_1@172.23.97.200', u'tstamp': 1695199832716L, u'shortText': u'node up', u'serverTime': u'2023-09-20T01:50:32.716Z', u'text': u"Node 'ns_1@172.23.97.200' saw that node 'ns_1@172.23.97.199' came up. Tags: [] (repeated 1 times, last seen 23.747585 secs ago)"}
2023-09-20 01:54:46,151 | test | ERROR | pool-11-thread-13 | [task:check:5577] Autofailover of node 172.23.97.199 was not initiated after the expected timeout period of 5
2023-09-20 01:54:46,153 | test | INFO | MainThread | [common_lib:sleep:20] Sleep 300 seconds. Reason: Wait after inducing failure
2023-09-20 01:59:47,446 | test | WARNING | pool-11-thread-22 | [rest_client:get_nodes:1667] 172.23.97.199 - Node not part of cluster inactiveFailed
2023-09-20 01:59:47,447 | test | CRITICAL | pool-11-thread-22 | [cluster_ready_functions:validate_orchestrator_selection:292] Orchestrator: 172.23.97.199
2023-09-20 01:59:47,447 | test | CRITICAL | pool-11-thread-22 | [cluster_ready_functions:validate_orchestrator_selection:294] Unexpected orchestrator. Expected orchestrators: ['172.23.121.117', '172.23.97.200']
FAIL
2023-09-20 01:59:47,463 | test | INFO | MainThread | [AutoFailoverTests:tearDown:36] Printing bucket stats before teardown
2023-09-20 01:59:48,243 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
| Bucket  | Type      | Storage | Replicas | Durability | TTL | Items | Vbuckets | RAM Quota | RAM Used   | Disk Used  | ARR |
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
| default | couchbase | magma   | 2        | none       | 0   | 99963 | 1024     | 5.89 GiB  | 273.05 MiB | 181.15 MiB | 100 |
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
2023-09-20 01:59:48,244 | test | INFO | MainThread | [AutoFailoverBaseTest:tearDown:145] ============AutoFailoverBaseTest teardown============
2023-09-20 01:59:48,984 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
| Bucket  | Type      | Storage | Replicas | Durability | TTL | Items | Vbuckets | RAM Quota | RAM Used   | Disk Used  | ARR |
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
| default | couchbase | magma   | 2        | none       | 0   | 99963 | 1024     | 5.89 GiB  | 273.05 MiB | 181.15 MiB | 100 |
+---------+-----------+---------+----------+------------+-----+-------+----------+-----------+------------+------------+-----+
2023-09-20 02:00:00,036 | infra | ERROR | pool-11-thread-25 | [remote_util:log_command_output:2957] bash: line 1: /sbin/iptables: No such file or directory
2023-09-20 02:00:00,040 | infra | ERROR | pool-11-thread-25 | [remote_util:log_command_output:2957] bash: line 1: /sbin/iptables: No such file or directory
2023-09-20 02:00:00,229 | infra | ERROR | pool-11-thread-25 | [remote_util:log_command_output:2957] bash: line 1: /sbin/iptables: No such file or directory
2023-09-20 02:00:00,233 | infra | ERROR | pool-11-thread-25 | [remote_util:log_command_output:2957] bash: line 1: /sbin/iptables: No such file or directory
2023-09-20 02:00:00,801 | infra | ERROR | pool-11-thread-25 | [remote_util:log_command_output:2957] bash: line 1: /sbin/iptables: No such file or directory
2023-09-20 02:00:00,808 | infra | ERROR | pool-11-thread-25 | [remote_util:log_command_output:2957] bash: line 1: /sbin/iptables: No such file or directory
2023-09-20 02:00:18,332 | test | INFO | MainThread | [cluster_ready_functions:trigger_cb_collect_on_cluster:1421] Running cbcollect on node ns_1@172.23.121.117,ns_1@172.23.97.199,ns_1@172.23.97.200
2023-09-20 02:00:28,378 | test | INFO | MainThread | [cluster_ready_functions:trigger_cb_collect_on_cluster:1424] ns_1@172.23.121.117,ns_1@172.23.97.199,ns_1@172.23.97.200 - cbcollect status: True
2023-09-20 02:00:28,380 | test | INFO | MainThread | [cluster_ready_functions:wait_for_cb_collect_to_complete:1429] Polling active_tasks to check cbcollect status
2023-09-20 02:03:14,295 | test | INFO | MainThread | [cluster_ready_functions:copy_cb_collect_logs:1463] ns_1@172.23.121.117: Copying cbcollect ZIP file to Client
2023-09-20 02:03:14,864 | test | INFO | MainThread | [cluster_ready_functions:copy_cb_collect_logs:1463] ns_1@172.23.97.199: Copying cbcollect ZIP file to Client
2023-09-20 02:03:15,772 | test | INFO | MainThread | [cluster_ready_functions:copy_cb_collect_logs:1463] ns_1@172.23.97.200: Copying cbcollect ZIP file to Client
2023-09-20 02:03:16,345 | test | CRITICAL | MainThread | [onPrem_basetestcase:tearDownEverything:625] Skipping get_trace !!
2023-09-20 02:03:16,469 | test | WARNING | MainThread | [onPrem_basetestcase:tearDownEverything:630] Alerts found: [{u'msg': u"Unable to listen on 'ns_1@172.23.97.199'. (POSIX error code: 'eaddrnotavail')", u'disableUIPopUp': False, u'serverTime': u'2023-09-20T01:54:11.000Z'}]

======================================================================
FAIL: test_autofailover (failover.AutoFailoverTests.AutoFailoverTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/failover/AutoFailoverTests.py", line 141, in test_autofailover
    self.assertTrue(rebalance.result, "Rebalance Failed")
AssertionError: Rebalance Failed

----------------------------------------------------------------------
Ran 1 test in 989.483s

During the test,
Remote Connections: 71, Disconnections: 68
SDK Connections: 16, Disconnections: 16
!!!!!! CRITICAL :: Shell disconnection mismatch !!!!!

FAILED (failures=1)

summary so far suite failover.AutoFailoverTests.AutoFailoverTests , pass 5 , fail 1
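
For reference, the policy behind the UI event "Enabled auto-failover with timeout 5 and max count 1" corresponds to Couchbase Server's documented /settings/autoFailover REST endpoint. A minimal sketch of the same configuration using the requests library, assuming the non-TLS admin port is still reachable and with placeholder credentials (the test itself goes through the framework's own rest_client):

    # Sketch only: enable the auto-failover policy recorded in the UI log above
    # (timeout 5 s, max count 1). Host, port 8091 and credentials are assumptions.
    import requests

    BASE = "http://172.23.97.200:8091"
    AUTH = ("Administrator", "password")  # placeholder credentials

    resp = requests.post(BASE + "/settings/autoFailover",
                         auth=AUTH,
                         data={"enabled": "true", "timeout": 5, "maxCount": 1})
    resp.raise_for_status()

    # Read the setting back to confirm what the cluster is actually using
    print(requests.get(BASE + "/settings/autoFailover", auth=AUTH).json())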
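
The dhclient output captured above (a DHCPRELEASE on eth0 followed by a fresh DISCOVER/OFFER/REQUEST/ACK that re-binds 172.23.97.199) suggests the restart_network action amounts to a DHCP release/renew cycle on the failed node. The actual helper lives in the framework's remote_util and may differ in detail; the following is only a rough sketch of an equivalent manual action, run as root on the node itself, with the interface name and down-time as assumptions:

    # Rough equivalent of the observed failure action (not the framework's code).
    import subprocess
    import time

    subprocess.run(["dhclient", "-r", "eth0"], check=False)  # DHCPRELEASE: node drops its address
    time.sleep(30)                                           # stay unreachable long enough to exceed the 5 s auto-failover timeout
    subprocess.run(["dhclient", "eth0"], check=False)        # DISCOVER/OFFER/REQUEST/ACK: re-bind the same address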
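
The "Latest logs from UI" records printed by rest_client:print_UI_logs are the entries returned by the server's GET /logs endpoint, so the failover trail can also be pulled directly. A small sketch, again with placeholder credentials:

    # Sketch: fetch the UI event log and print the failover-related entries.
    import requests

    r = requests.get("http://172.23.97.200:8091/logs",
                     auth=("Administrator", "password"))
    for entry in r.json()["list"]:
        if entry["module"] in ("auto_failover", "failover", "ns_orchestrator"):
            print(entry["serverTime"], entry["module"], entry["text"])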
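
Likewise, the teardown warning "172.23.97.199 - Node not part of cluster inactiveFailed" reflects the clusterMembership field of GET /pools/default: after the automatic failover, the failed node is expected to report inactiveFailed while the surviving nodes remain active. A sketch of that check, with placeholder credentials:

    # Sketch: list each node's membership and health as the cluster reports them.
    import requests

    pool = requests.get("http://172.23.97.200:8091/pools/default",
                        auth=("Administrator", "password")).json()
    for node in pool["nodes"]:
        print(node["hostname"], node["clusterMembership"], node["status"])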