Started by user Balakumaran Gopal
Rebuilds build #271
Running as SYSTEM
[EnvInject] - Loading node environment variables.
Building remotely on magma-sd2501-30g-20c (magma_slave magma_sdk2) in workspace /data/workspace/temp_rebalance_magma
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Deferred wipeout is used...
[WS-CLEANUP] Done
No credentials specified
Cloning the remote Git repository
Cloning repository https://github.com/bkumaran/TAF/
 > /usr/bin/git init /data/workspace/temp_rebalance_magma # timeout=10
Fetching upstream changes from https://github.com/bkumaran/TAF/
 > /usr/bin/git --version # timeout=10
 > /usr/bin/git fetch --tags --progress https://github.com/bkumaran/TAF/ +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /usr/bin/git config remote.origin.url https://github.com/bkumaran/TAF/ # timeout=10
 > /usr/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /usr/bin/git config remote.origin.url https://github.com/bkumaran/TAF/ # timeout=10
Fetching upstream changes from https://github.com/bkumaran/TAF/
 > /usr/bin/git fetch --tags --progress https://github.com/bkumaran/TAF/ +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /usr/bin/git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > /usr/bin/git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 7333ce2b90e00eed23dd444ebea08bf4fd9e48d2 (refs/remotes/origin/master)
 > /usr/bin/git config core.sparsecheckout # timeout=10
 > /usr/bin/git checkout -f 7333ce2b90e00eed23dd444ebea08bf4fd9e48d2 # timeout=10
Commit message: "Update collections_rebalance.py"
 > /usr/bin/git rev-list --no-walk 7333ce2b90e00eed23dd444ebea08bf4fd9e48d2 # timeout=10
[temp_rebalance_magma] $ /bin/sh -xe /tmp/jenkins3429006937320723914.sh
+ echo '[global]
username:root
password:couchbase
port:8091
n1ql_port:8093
index_port:9102
data_path:/data
[membase]
rest_username:Administrator
rest_password:password
[servers]
1:_1
2:_2
3:_3
4:_4
5:_5
6:_6
7:_7
[_1]
ip:172.23.105.164
[_2]
ip:172.23.105.206
[_3]
ip:172.23.106.177
[_4]
ip:172.23.100.34
[_5]
ip:172.23.100.35
[_6]
ip:172.23.100.36
[_7]
ip:172.23.100.37'
+ git checkout master
Switched to a new branch 'master'
Branch master set up to track remote branch master from origin.
+ git pull origin master
From https://github.com/bkumaran/TAF
 * branch master -> FETCH_HEAD
Already up-to-date.
+ git clone https://github.com/sumedhpb/guides.git
Cloning into 'guides'...
+ jython_path=/opt/jython/bin/jython
+ /opt/jython/bin/pip install -r requirements.txt
Requirement already satisfied (use --upgrade to upgrade): futures==3.3.0 in /opt/jython/Lib/site-packages (from -r requirements.txt (line 1))
Requirement already satisfied (use --upgrade to upgrade): requests==2.24.0 in /opt/jython/Lib/site-packages (from -r requirements.txt (line 2))
Requirement already satisfied (use --upgrade to upgrade): urllib3==1.25.10 in /opt/jython/Lib/site-packages (from -r requirements.txt (line 3))
Requirement already satisfied (use --upgrade to upgrade): ruamel.yaml==0.16.12 in /opt/jython/Lib/site-packages (from -r requirements.txt (line 4))
Requirement already satisfied (use --upgrade to upgrade): six==1.15.0 in /opt/jython/Lib/site-packages (from -r requirements.txt (line 5))
Cleaning up...
+ sdk_client_params='-P transaction_version=1.1.8 -P client_version=3.1.6'
+ mkdir tr_for_install
+ cd tr_for_install
+ git clone https://github.com/couchbase/testrunner.git
Cloning into 'testrunner'...
+ cd testrunner
+ git checkout master
Already on 'master'
+ git pull origin master
From https://github.com/couchbase/testrunner
 * branch master -> FETCH_HEAD
Already up-to-date.
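The blob echoed above is a plain testrunner-style .ini config (`key:value` pairs, with `[servers]` aliases resolved by later `[_N]` sections). As a reading aid only, a minimal sketch of parsing an excerpt of it with Python's stdlib `ConfigParser`, which accepts `:` as a delimiter by default; `INI_TEXT` below is a trimmed-down excerpt, not the full file:

```python
from configparser import ConfigParser

# Excerpt of the config echoed in the log (first two servers only).
INI_TEXT = """\
[global]
username:root
password:couchbase
port:8091

[servers]
1:_1
2:_2

[_1]
ip:172.23.105.164

[_2]
ip:172.23.105.206
"""

cfg = ConfigParser()          # default delimiters are '=' and ':'
cfg.read_string(INI_TEXT)

# Resolve each [servers] alias (e.g. "_1") to its ip entry.
servers = [cfg[cfg["servers"][idx]]["ip"] for idx in cfg["servers"]]
print(servers)
```

This is just how such a file can be read; the actual parsing lives inside testrunner itself.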
+ py_executable=python
+ 7.1.0-2385
+ grep 7
/tmp/jenkins3429006937320723914.sh: line 65: 7.1.0-2385: command not found
+ python3 scripts/new_install.py -i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p timeout=1200,get-cbcollect-info=True,version=7.1.0-2385,product=cb,debug_logs=False,url=
2022-02-25 06:11:55,301 - root - WARNING - URL: is not valid, will use version to locate build
2022-02-25 06:11:55,302 - root - INFO - SSH Connecting to 172.23.105.164 with username:root, attempt#1 of 5
2022-02-25 06:11:55,410 - root - INFO - SSH Connected to 172.23.105.164 as root
2022-02-25 06:11:55,715 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:55,989 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:55,990 - root - INFO - SSH Connecting to 172.23.105.206 with username:root, attempt#1 of 5
2022-02-25 06:11:56,097 - root - INFO - SSH Connected to 172.23.105.206 as root
2022-02-25 06:11:56,385 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:56,638 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:56,639 - root - INFO - SSH Connecting to 172.23.106.177 with username:root, attempt#1 of 5
2022-02-25 06:11:56,745 - root - INFO - SSH Connected to 172.23.106.177 as root
2022-02-25 06:11:57,014 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:57,295 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:57,296 - root - INFO - SSH Connecting to 172.23.100.34 with username:root, attempt#1 of 5
2022-02-25 06:11:57,404 - root - INFO - SSH Connected to 172.23.100.34 as root
2022-02-25 06:11:57,684 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:57,943 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:57,945 - root - INFO - SSH Connecting to 172.23.100.35 with username:root, attempt#1 of 5
2022-02-25 06:11:58,069 - root - INFO - SSH Connected to 172.23.100.35 as root
2022-02-25 06:11:58,377 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:58,659 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:58,660 - root - INFO - SSH Connecting to 172.23.100.36 with username:root, attempt#1 of 5
2022-02-25 06:11:58,763 - root - INFO - SSH Connected to 172.23.100.36 as root
2022-02-25 06:11:59,033 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:59,281 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:59,282 - root - INFO - SSH Connecting to 172.23.100.37 with username:root, attempt#1 of 5
2022-02-25 06:11:59,388 - root - INFO - SSH Connected to 172.23.100.37 as root
2022-02-25 06:11:59,663 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:11:59,910 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:11:59,912 - root - INFO - SSH Connecting to 172.23.105.164 with username:root, attempt#1 of 5
2022-02-25 06:12:00,020 - root - INFO - SSH Connected to 172.23.105.164 as root
2022-02-25 06:12:00,313 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:00,589 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:00,590 - root - INFO - SSH Connecting to 172.23.105.206 with username:root, attempt#1 of 5
2022-02-25 06:12:00,701 - root - INFO - SSH Connected to 172.23.105.206 as root
2022-02-25 06:12:00,979 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:01,258 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:01,259 - root - INFO - SSH Connecting to 172.23.106.177 with username:root, attempt#1 of 5
2022-02-25 06:12:01,382 - root - INFO - SSH Connected to 172.23.106.177 as root
2022-02-25 06:12:01,682 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:01,972 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:01,974 - root - INFO - SSH Connecting to 172.23.100.34 with username:root, attempt#1 of 5
2022-02-25 06:12:02,086 - root - INFO - SSH Connected to 172.23.100.34 as root
2022-02-25 06:12:02,351 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:02,638 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:02,639 - root - INFO - SSH Connecting to 172.23.100.35 with username:root, attempt#1 of 5
2022-02-25 06:12:02,750 - root - INFO - SSH Connected to 172.23.100.35 as root
2022-02-25 06:12:03,037 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:03,303 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:03,304 - root - INFO - SSH Connecting to 172.23.100.36 with username:root, attempt#1 of 5
2022-02-25 06:12:03,404 - root - INFO - SSH Connected to 172.23.100.36 as root
2022-02-25 06:12:03,664 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:03,908 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:03,909 - root - INFO - SSH Connecting to 172.23.100.37 with username:root, attempt#1 of 5
2022-02-25 06:12:04,010 - root - INFO - SSH Connected to 172.23.100.37 as root
2022-02-25 06:12:04,274 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
2022-02-25 06:12:04,523 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
2022-02-25 06:12:04,523 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,526 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:04,527 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,528 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:04,528 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,529 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:04,529 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,530 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:04,530 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,531 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:04,531 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,532 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:04,532 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:04,533 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/neo/2385/couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm is live
2022-02-25 06:12:12,506 - root - INFO - Done with uninstall on 172.23.100.36.
2022-02-25 06:12:16,367 - root - INFO - Done with uninstall on 172.23.100.37.
2022-02-25 06:12:23,855 - root - INFO - Done with uninstall on 172.23.105.164.
2022-02-25 06:12:24,510 - root - INFO - Done with uninstall on 172.23.106.177.
2022-02-25 06:12:26,330 - root - INFO - Done with uninstall on 172.23.105.206.
2022-02-25 06:12:29,155 - root - INFO - Done with uninstall on 172.23.100.35.
2022-02-25 06:12:29,203 - root - INFO - Done with uninstall on 172.23.100.34.
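Since no `url=` was passed, the installer resolved version `7.1.0-2385` to a latestbuilds URL under the "neo" codename, then probed it once per target node. A sketch of that URL construction; the host, path layout, and the 7.1 → neo mapping are copied from the log itself, but the `CODENAMES` dict and `build_url` helper are illustrative, not `new_install.py`'s actual code:

```python
# Hypothetical partial mapping of release branches to codenames;
# only the 7.1 -> "neo" entry is confirmed by this log.
CODENAMES = {"7.1": "neo"}

def build_url(version, distro="centos7", arch="x86_64", edition="enterprise"):
    """Rebuild the package URL the installer probed, e.g. for '7.1.0-2385'."""
    release, build_no = version.split("-")           # "7.1.0", "2385"
    codename = CODENAMES[release.rsplit(".", 1)[0]]  # "7.1" -> "neo"
    return ("http://172.23.126.166/builds/latestbuilds/couchbase-server/"
            f"{codename}/{build_no}/couchbase-server-{edition}-"
            f"{version}-{distro}.{arch}.rpm")

print(build_url("7.1.0-2385"))
```

The real installer then issues an HTTP check against this URL ("Trying to check is this url alive") before downloading it on every node.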
2022-02-25 06:12:30,686 - root - INFO - running command.raw on 172.23.105.164: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:30,693 - root - INFO - command executed successfully with root
2022-02-25 06:12:31,658 - root - INFO - running command.raw on 172.23.105.206: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:31,664 - root - INFO - command executed successfully with root
2022-02-25 06:12:32,716 - root - INFO - running command.raw on 172.23.106.177: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:32,722 - root - INFO - command executed successfully with root
2022-02-25 06:12:33,730 - root - INFO - running command.raw on 172.23.100.34: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:33,736 - root - INFO - command executed successfully with root
2022-02-25 06:12:34,728 - root - INFO - running command.raw on 172.23.100.35: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:34,734 - root - INFO - command executed successfully with root
2022-02-25 06:12:35,745 - root - INFO - running command.raw on 172.23.100.36: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:35,751 - root - INFO - command executed successfully with root
2022-02-25 06:12:36,811 - root - INFO - running command.raw on 172.23.100.37: cd /tmp/ && wc -c couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm
2022-02-25 06:12:36,817 - root - INFO - command executed successfully with root
2022-02-25 06:13:32,760 - root - INFO - Done with install on 172.23.105.206.
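Each node runs `wc -c <rpm>` over SSH to verify the downloaded package size before installing. A small sketch of parsing that command's output; the helper and the sample byte count are made up for illustration, not taken from this run:

```python
def parse_wc_c(output):
    """Parse `wc -c FILE` output, e.g. '12345 some-package.rpm',
    into (size_in_bytes, filename)."""
    size, filename = output.split(None, 1)  # split on first whitespace run
    return int(size), filename.strip()

# Hypothetical sample output; the real rpm is much larger.
size, name = parse_wc_c("12345 couchbase-server-enterprise-7.1.0-2385-centos7.x86_64.rpm\n")
print(size, name)
```

The caller would compare `size` against the Content-Length of the build URL to detect a truncated download.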
2022-02-25 06:13:32,760 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:13:32,761 - root - ERROR - socket error while connecting to http://172.23.105.206:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:13:33,032 - root - INFO - Done with install on 172.23.105.164.
2022-02-25 06:13:33,032 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:13:33,033 - root - ERROR - socket error while connecting to http://172.23.105.164:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:13:33,902 - root - INFO - Done with install on 172.23.106.177.
2022-02-25 06:13:33,902 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:13:33,903 - root - ERROR - socket error while connecting to http://172.23.106.177:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:13:35,765 - root - ERROR - socket error while connecting to http://172.23.105.206:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:13:36,037 - root - ERROR - socket error while connecting to http://172.23.105.164:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:13:36,907 - root - ERROR - socket error while connecting to http://172.23.106.177:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:13:41,777 - root - INFO - running command.raw on 172.23.105.206: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.105.206 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:13:42,049 - root - INFO - running command.raw on 172.23.105.164: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.105.164 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:13:42,173 - root - INFO - command executed successfully with root
2022-02-25 06:13:42,175 - root - INFO - running command.raw on 172.23.105.206: rm -rf /data/*
2022-02-25 06:13:42,455 - root - INFO - command executed successfully with root
2022-02-25 06:13:42,458 - root - INFO - running command.raw on 172.23.105.164: rm -rf /data/*
2022-02-25 06:13:42,919 - root - INFO - running command.raw on 172.23.106.177: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.106.177 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:13:43,317 - root - INFO - command executed successfully with root
2022-02-25 06:13:43,321 - root - INFO - running command.raw on 172.23.106.177: rm -rf /data/*
2022-02-25 06:13:46,040 - root - INFO - command executed successfully with root
2022-02-25 06:13:46,040 - root - INFO - running command.raw on 172.23.105.206: chown -R couchbase:couchbase /data
2022-02-25 06:13:46,087 - root - INFO - command executed successfully with root
2022-02-25 06:13:46,087 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:13:46,598 - root - INFO - command executed successfully with root
2022-02-25 06:13:46,598 - root - INFO - running command.raw on 172.23.105.164: chown -R couchbase:couchbase /data
2022-02-25 06:13:46,645 - root - INFO - command executed successfully with root
2022-02-25 06:13:46,645 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:13:46,935 - root - INFO - command executed successfully with root
2022-02-25 06:13:46,936 - root - INFO - running command.raw on 172.23.106.177: chown -R couchbase:couchbase /data
2022-02-25 06:13:46,983 - root - INFO - command executed successfully with root
2022-02-25 06:13:46,983 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:13:48,371 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:13:49,044 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:13:49,377 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.105.206
2022-02-25 06:13:49,377 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:13:49,380 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:13:49,380 - root - INFO - /node/controller/setupServices params on 172.23.105.206: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:13:49,409 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:13:49,409 - root - INFO - settings/web params on 172.23.105.206:8091:port=8091&username=Administrator&password=password
2022-02-25 06:13:49,467 - root - INFO - --> status:True
2022-02-25 06:13:49,468 - root - INFO - Done with init on 172.23.105.206.
2022-02-25 06:13:49,566 - root - INFO - Done with cleanup on 172.23.105.206.
2022-02-25 06:13:49,722 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:13:50,048 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.105.164
2022-02-25 06:13:50,048 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:13:50,052 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:13:50,052 - root - INFO - /node/controller/setupServices params on 172.23.105.164: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:13:50,080 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:13:50,080 - root - INFO - settings/web params on 172.23.105.164:8091:port=8091&username=Administrator&password=password
2022-02-25 06:13:50,138 - root - INFO - --> status:True
2022-02-25 06:13:50,138 - root - INFO - Done with init on 172.23.105.164.
2022-02-25 06:13:50,231 - root - INFO - Done with cleanup on 172.23.105.164.
2022-02-25 06:13:50,726 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.106.177
2022-02-25 06:13:50,726 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:13:50,729 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:13:50,730 - root - INFO - /node/controller/setupServices params on 172.23.106.177: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:13:50,758 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:13:50,758 - root - INFO - settings/web params on 172.23.106.177:8091:port=8091&username=Administrator&password=password
2022-02-25 06:13:50,815 - root - INFO - --> status:True
2022-02-25 06:13:50,815 - root - INFO - Done with init on 172.23.106.177.
2022-02-25 06:13:50,911 - root - INFO - Done with cleanup on 172.23.106.177.
2022-02-25 06:16:00,868 - root - INFO - Done with install on 172.23.100.36.
2022-02-25 06:16:00,868 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:16:00,869 - root - ERROR - socket error while connecting to http://172.23.100.36:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:03,347 - root - INFO - Done with install on 172.23.100.35.
2022-02-25 06:16:03,347 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:16:03,348 - root - ERROR - socket error while connecting to http://172.23.100.35:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:03,873 - root - ERROR - socket error while connecting to http://172.23.100.36:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:06,352 - root - ERROR - socket error while connecting to http://172.23.100.35:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:09,886 - root - INFO - running command.raw on 172.23.100.36: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.36 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:16:10,459 - root - INFO - command executed successfully with root
2022-02-25 06:16:10,462 - root - INFO - running command.raw on 172.23.100.36: rm -rf /data/*
2022-02-25 06:16:10,469 - root - INFO - command executed successfully with root
2022-02-25 06:16:10,469 - root - INFO - running command.raw on 172.23.100.36: chown -R couchbase:couchbase /data
2022-02-25 06:16:10,514 - root - INFO - command executed successfully with root
2022-02-25 06:16:10,514 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:16:10,796 - root - INFO - Done with install on 172.23.100.34.
2022-02-25 06:16:10,796 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:16:10,797 - root - ERROR - socket error while connecting to http://172.23.100.34:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:10,918 - root - INFO - Done with install on 172.23.100.37.
2022-02-25 06:16:10,918 - root - INFO - Waiting for couchbase to be reachable
2022-02-25 06:16:10,919 - root - ERROR - socket error while connecting to http://172.23.100.37:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:12,359 - root - ERROR - socket error while connecting to http://172.23.100.35:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:13,800 - root - ERROR - socket error while connecting to http://172.23.100.34:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:13,852 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:16:13,923 - root - ERROR - socket error while connecting to http://172.23.100.37:8091/nodes/self error [Errno 111] Connection refused
2022-02-25 06:16:14,857 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.100.36
2022-02-25 06:16:14,857 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:16:14,861 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:16:14,861 - root - INFO - /node/controller/setupServices params on 172.23.100.36: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:16:14,907 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:16:14,907 - root - INFO - settings/web params on 172.23.100.36:8091:port=8091&username=Administrator&password=password
2022-02-25 06:16:14,965 - root - INFO - --> status:True
2022-02-25 06:16:14,965 - root - INFO - Done with init on 172.23.100.36.
2022-02-25 06:16:15,070 - root - INFO - Done with cleanup on 172.23.100.36.
2022-02-25 06:16:19,812 - root - INFO - running command.raw on 172.23.100.34: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.34 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:16:19,934 - root - INFO - running command.raw on 172.23.100.37: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.37 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:16:20,343 - root - INFO - command executed successfully with root
2022-02-25 06:16:20,346 - root - INFO - running command.raw on 172.23.100.34: rm -rf /data/*
2022-02-25 06:16:20,441 - root - INFO - command executed successfully with root
2022-02-25 06:16:20,444 - root - INFO - running command.raw on 172.23.100.37: rm -rf /data/*
2022-02-25 06:16:20,451 - root - INFO - command executed successfully with root
2022-02-25 06:16:20,451 - root - INFO - running command.raw on 172.23.100.37: chown -R couchbase:couchbase /data
2022-02-25 06:16:20,497 - root - INFO - command executed successfully with root
2022-02-25 06:16:20,497 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:16:23,456 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:16:24,462 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.100.37
2022-02-25 06:16:24,462 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:16:24,466 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:16:24,466 - root - INFO - /node/controller/setupServices params on 172.23.100.37: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:16:24,841 - root - INFO - command executed successfully with root
2022-02-25 06:16:24,841 - root - INFO - running command.raw on 172.23.100.34: chown -R couchbase:couchbase /data
2022-02-25 06:16:24,888 - root - INFO - command executed successfully with root
2022-02-25 06:16:24,889 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:16:25,400 - root - INFO - running command.raw on 172.23.100.35: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.35 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2022-02-25 06:16:26,297 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:16:26,297 - root - INFO - settings/web params on 172.23.100.37:8091:port=8091&username=Administrator&password=password
2022-02-25 06:16:26,353 - root - INFO - command executed successfully with root
2022-02-25 06:16:26,356 - root - INFO - running command.raw on 172.23.100.35: rm -rf /data/*
2022-02-25 06:16:26,357 - root - INFO - --> status:True
2022-02-25 06:16:26,357 - root - INFO - Done with init on 172.23.100.37.
2022-02-25 06:16:26,456 - root - INFO - Done with cleanup on 172.23.100.37.
2022-02-25 06:16:28,565 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:16:29,572 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.100.34
2022-02-25 06:16:29,572 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:16:29,575 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:16:29,575 - root - INFO - /node/controller/setupServices params on 172.23.100.34: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:16:29,623 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:16:29,623 - root - INFO - settings/web params on 172.23.100.34:8091:port=8091&username=Administrator&password=password
2022-02-25 06:16:29,680 - root - INFO - --> status:True
2022-02-25 06:16:29,680 - root - INFO - Done with init on 172.23.100.34.
2022-02-25 06:16:29,771 - root - INFO - Done with cleanup on 172.23.100.34.
2022-02-25 06:16:30,561 - root - INFO - command executed successfully with root
2022-02-25 06:16:30,561 - root - INFO - running command.raw on 172.23.100.35: chown -R couchbase:couchbase /data
2022-02-25 06:16:30,609 - root - INFO - command executed successfully with root
2022-02-25 06:16:30,609 - root - INFO - /nodes/self/controller/settings : path=%2Fdata
2022-02-25 06:16:33,374 - root - INFO - Setting data_path: /data: status True
2022-02-25 06:16:34,379 - root - INFO - Setting KV memory quota as 7855 MB on 172.23.100.35
2022-02-25 06:16:34,379 - root - INFO - pools/default params : memoryQuota=7855
2022-02-25 06:16:34,382 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv'])
2022-02-25 06:16:34,383 - root - INFO - /node/controller/setupServices params on 172.23.100.35: 8091:hostname=None&user=Administrator&password=password&services=kv
2022-02-25 06:16:34,448 - root - INFO - --> in init_cluster...Administrator,password,8091
2022-02-25 06:16:34,448 - root - INFO - settings/web params on 172.23.100.35:8091:port=8091&username=Administrator&password=password
2022-02-25 06:16:34,506 - root - INFO - --> status:True
2022-02-25 06:16:34,506 - root - INFO - Done with init on 172.23.100.35.
2022-02-25 06:16:34,615 - root - INFO - Done with cleanup on 172.23.100.35.
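Every node above goes through the same per-node init sequence: `couchbase-cli node-init`, wiping and chown-ing `/data`, then REST posts to `/nodes/self/controller/settings` (data path), `/pools/default` (memory quota), `/node/controller/setupServices`, and `/settings/web`. As a sketch only, a helper that assembles that call sequence; the endpoints and parameter names are taken from the log, but the helper is hypothetical and performs no HTTP:

```python
def init_node_calls(ip, quota_mb=7855, services="kv",
                    user="Administrator", password="password"):
    """Return the ordered (method, url, params) REST sequence the log shows
    being applied to each node after install."""
    base = f"http://{ip}:8091"
    return [
        # Set the data path before any cluster state exists on the node.
        ("POST", f"{base}/nodes/self/controller/settings", {"path": "/data"}),
        # KV memory quota, as logged: pools/default params : memoryQuota=7855
        ("POST", f"{base}/pools/default", {"memoryQuota": quota_mb}),
        # Declare node services (kv only in this run).
        ("POST", f"{base}/node/controller/setupServices",
         {"hostname": None, "user": user, "password": password,
          "services": services}),
        # Finally lock in admin credentials on the REST port.
        ("POST", f"{base}/settings/web",
         {"port": 8091, "username": user, "password": password}),
    ]

calls = init_node_calls("172.23.105.206")
for method, url, params in calls:
    print(method, url, params)
```

A real client would feed each tuple to an HTTP library and check the status, which is what the `--> status:True` lines reflect.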
2022-02-25 06:16:37,100 - root - INFO - ----------------------------------------------------------------------------------------------------
2022-02-25 06:16:37,130 - root - INFO - cluster:C1 node:172.23.105.164:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - cluster:C2 node:172.23.105.206:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - cluster:C3 node:172.23.106.177:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - cluster:C4 node:172.23.100.34:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - cluster:C5 node:172.23.100.35:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - cluster:C6 node:172.23.100.36:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - cluster:C7 node:172.23.100.37:8091 version:7.1.0-2385-enterprise aFamily:inet services:['kv']
2022-02-25 06:16:37,130 - root - INFO - ----------------------------------------------------------------------------------------------------
2022-02-25 06:16:37,130 - root - INFO - ----------------------------------------------------------------------------------------------------
2022-02-25 06:16:37,130 - root - INFO - ----------------------------------------------------------------------------------------------------
2022-02-25 06:16:37,130 - root - INFO - INSTALL COMPLETED ON: 172.23.105.164
2022-02-25 06:16:37,131 - root - INFO - INSTALL COMPLETED ON: 172.23.105.206
2022-02-25 06:16:37,131 - root - INFO - INSTALL COMPLETED ON: 172.23.106.177
2022-02-25 06:16:37,131 - root - INFO - INSTALL COMPLETED ON: 172.23.100.34
2022-02-25 06:16:37,131 - root - INFO - INSTALL COMPLETED ON: 172.23.100.35
2022-02-25 06:16:37,131 - root - INFO - INSTALL COMPLETED ON: 172.23.100.36
2022-02-25 06:16:37,131 - root - INFO - INSTALL COMPLETED ON: 172.23.100.37
2022-02-25 06:16:37,131 - root - INFO - ----------------------------------------------------------------------------------------------------
2022-02-25 06:16:37,131 - root - INFO - TOTAL INSTALL TIME = 282 seconds
+ status=0
+ cd ../..
+ rm -rf tr_for_install
+ guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P transaction_version=1.1.8 -P client_version=3.1.6 -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p rerun=False,get-cbcollect-info=True -c conf/magma/dgm_collections_1_percent_dgm.conf -m rest'
Starting a Gradle Daemon, 82 busy Daemons could not be reused, use --status for details

> Configure project :
Executing 'gradle clean'
Using Transaction_client :: 1.1.8
Using Java_client :: 3.1.6
Running: /opt/jython/bin/jython -J-cp /data/workspace/temp_rebalance_magma/build/classes/java/main:/data/workspace/temp_rebalance_magma/build/resources/main:/root/.gradle/caches/modules-2/files-2.1/com.jcraft/jsch/0.1.54/da3584329a263616e277e15462b387addd1b208d/jsch-0.1.54.jar:/root/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-slf4j-impl/2.11.1/4b41b53a3a2d299ce381a69d165381ca19f62912/log4j-slf4j-impl-2.11.1.jar:/root/.gradle/caches/modules-2/files-2.1/commons-cli/commons-cli/1.4/c51c00206bb913cd8612b24abd9fa98ae89719b1/commons-cli-1.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.azure/azure-storage-blob/12.14.0/75d45d21dc208fa369abc832354470464c859d67/azure-storage-blob-12.14.0.jar:/root/.gradle/caches/modules-2/files-2.1/com.couchbase.client/couchbase-transactions/1.1.8/e69a6013e59f498f76671c1acc43df14f1163180/couchbase-transactions-1.1.8.jar:/root/.gradle/caches/modules-2/files-2.1/com.couchbase.client/java-client/3.1.6/f065d71963e08bd5577838592e33f2ca35f5d64a/java-client-3.1.6.jar:/root/.gradle/caches/modules-2/files-2.1/com.azure/azure-storage-internal-avro/12.1.0/c5882c519f8ee15c528e9d0396ec7535da7d4ecd/azure-storage-internal-avro-12.1.0.jar:/root/.gradle/caches/modules-2/files-2.1/com.azure/azure-storage-common/12.13.0/ef8910849375def2b715ea64429136e694fb7ad6/azure-storage-common-12.13.0.jar:/root/.gradle/caches/modules-2/files-2.1/com.azure/azure-core-http-netty/1.11.0/2177ae2dc9cff849e49f9180bf6b95b8e2c78e1b/azure-core-http-netty-1.11.0.jar:/root/.gradle/caches/modules-2/files-2.1/com.azure/azure-core/1.20.0/a98c6bd18aa2066ecd8b39bf7ac51bd8e7307851/azure-core-1.20.0.jar:/root/.gradle/caches/modules-2/files-2.1/org.slf4j/slf4j-api/1.7.32/cdcff33940d9f2de763bc41ea05a0be5941176c3/slf4j-api-1.7.32.jar:/root/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-core/2.11.1/592a48674c926b01a9a747c7831bcd82a9e6d6e4/log4j-core-2.11.1.jar:/root/.gradle/caches/modules-2/files-2.1/org.apache.logging.log4j/log4j-api/2.11.1/268f0fe4df3eefe052b57c87ec48517d64fb2a10/log4j-api-2.11.1.jar:/root/.gradle/caches/modules-2/files-2.1/com.couchbase.client/core-io/2.1.6/b3ece73ab7069b1e97669500edc448163c2f4304/core-io-2.1.6.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.12.4/15c743856696c0239f2c51d8d19d9f97f034713/jackson-dataformat-xml-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.module/jackson-module-jaxb-annotations/2.12.4/5e43703aae1a9843dfd7df0a0ad6cbfedcaff67f/jackson-module-jaxb-annotations-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.12.4/6a1bd259b6c4e3f9219ec8ec0be55ed11eed0c/jackson-core-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jsr310/2.12.4/b1174c05d4ded121a7eaeed3f148709f9585b981/jackson-datatype-jsr310-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.12.4/69206e02e6a696034f06a59d3ddbfbba5a4cd81/jackson-databind-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.12.4/752cf9a2562ac2c012e48057e3a4c17dad66c66e/jackson-annotations-2.1
2.4.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty/1.0.10/c54f6c92628f396076f74945424319746bbb37f8/reactor-netty-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty-http-brave/1.0.10/30aebc029ee25f5e0c2d1c0a798b17e926f61b8/reactor-netty-http-brave-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty-http/1.0.10/f34196d8778243d2ac4a4ae7cc74f110d1073803/reactor-netty-http-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty-core/1.0.10/dcb5d53dc07a6f660060f0b55c40a1e9944a6fe1/reactor-netty-core-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor/reactor-core/3.4.9/820332aa7b0fe3a8dfe14f58fc16e49ad178291/reactor-core-3.4.9.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-tcnative-boringssl-static/2.0.40.Final/6b73a163c13ed76921892d28eb81235f4b41e40a/netty-tcnative-boringssl-static-2.0.40.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler-proxy/4.1.67.Final/2528292c49bd15f1b328dda25a7a75744d6f0991/netty-handler-proxy-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http2/4.1.67.Final/e2a0c27f396035fab7d932031d6f337244369495/netty-codec-http2-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.67.Final/e282137917c67332fa9a414df89f89a93487aede/netty-codec-http-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver-dns-native-macos/4.1.66.Final/8ca314d87b202a4dd94d810d2385b22555dd801c/netty-resolver-dns-native-macos-4.1.66.Final-osx-x86_64.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver-dns/4.1.66.Final/7b74815fb1403e5747c872c6eee2a07e7a700d30/netty-resolver-dns-4.1.66.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler/4.1.67.Final/62640a6524e1c08d6e8ac06556892b5e1362392f/netty-handler-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1
/io.netty/netty-transport-native-epoll/4.1.67.Final/ff955604e1edb8b13dc855ccbc84acb5fc1d989/netty-transport-native-epoll-4.1.67.Final-linux-x86_64.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-kqueue/4.1.67.Final/7a88ccb4237d1ff091be544860ae729f5b568ac/netty-transport-native-kqueue-4.1.67.Final-osx-x86_64.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-unix-common/4.1.67.Final/c6cca7ba897e1ac65e0742654ed10064401dbd2f/netty-transport-native-unix-common-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-socks/4.1.67.Final/a4fe0487451dc262daaa024224fa207d088b8687/netty-codec-socks-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-dns/4.1.66.Final/390ff96a0e1f7c626cb52c119a1a1dfd0784d193/netty-codec-dns-4.1.66.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.67.Final/292e818822a4fca4e9f3ad711cdd46362f69590a/netty-codec-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport/4.1.67.Final/e8b502618f960cb1f8cdb7dca92d0878576ad7c6/netty-transport-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-buffer/4.1.67.Final/4b6217b05792fe6e2bd080c054138b2c20ae1b37/netty-buffer-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.4/16b9f8ab972e67eb21872ea2c40046249d543989/woodstox-core-6.2.4.jar:/root/.gradle/caches/modules-2/files-2.1/org.codehaus.woodstox/stax2-api/4.2.1/a3f7325c52240418c2ba257b103c3c550e140c83/stax2-api-4.2.1.jar:/root/.gradle/caches/modules-2/files-2.1/org.reactivestreams/reactive-streams/1.0.3/d9fb7a7926ffa635b3dcaa5049fb2bfa25b3e7d0/reactive-streams-1.0.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver/4.1.67.Final/bb9041d5f85f9a6270f5378d0becd243de8cfeac/netty-resolver-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.67.Final/a86a19033588a07e159ff95818845b1ec86b2281/net
ty-common-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/jakarta.xml.bind/jakarta.xml.bind-api/2.3.2/8d49996a4338670764d7ca4b85a1c4ccf7fe665d/jakarta.xml.bind-api-2.3.2.jar:/root/.gradle/caches/modules-2/files-2.1/jakarta.activation/jakarta.activation-api/1.2.1/562a587face36ec7eff2db7f2fc95425c6602bc1/jakarta.activation-api-1.2.1.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.brave/brave-instrumentation-http/5.13.3/fe70809f06c786b171e4597747c6f5a8c911fe5f/brave-instrumentation-http-5.13.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.brave/brave/5.13.3/2d8ecb2352108f95b3e66ccc8027e9343ad9852c/brave-5.13.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.reporter2/zipkin-reporter-brave/2.16.3/4d5017d71e4de139b6a31612cfd837b7c71d288c/zipkin-reporter-brave-2.16.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.reporter2/zipkin-reporter/2.16.3/7e43d8be3376d305c355d969e8b9f3a62221380/zipkin-reporter-2.16.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.zipkin2/zipkin/2.23.2/1c2c7f2e91a3749311f7f75d0535d14ba2e2f6/zipkin-2.23.2.jar scripts/ssh.py -i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p rerun=False,get-cbcollect-info=True -c conf/magma/dgm_collections_1_percent_dgm.conf -m rest Running: /opt/jython/bin/jython -J-cp 
[classpath identical to the previous invocation] scripts/eagles_all_around.py -i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p rerun=False,get-cbcollect-info=True -c conf/magma/dgm_collections_1_percent_dgm.conf -m rest
Running: /opt/jython/bin/jython -J-cp [classpath as above, with build/classes/java/main appended] scripts/install.py -i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p rerun=False,get-cbcollect-info=True -c conf/magma/dgm_collections_1_percent_dgm.conf -m rest
Running: /opt/jython/bin/jython -J-cp [classpath as above, with build/classes/java/main:src/main/resources appended] testrunner.py -i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p rerun=False,get-cbcollect-info=True -c conf/magma/dgm_collections_1_percent_dgm.conf -m rest
Running: /opt/jython/bin/jython -J-cp [classpath repeats as above; truncated here in the log]
api-2.11.1.jar:/root/.gradle/caches/modules-2/files-2.1/com.couchbase.client/core-io/2.1.6/b3ece73ab7069b1e97669500edc448163c2f4304/core-io-2.1.6.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.dataformat/jackson-dataformat-xml/2.12.4/15c743856696c0239f2c51d8d19d9f97f034713/jackson-dataformat-xml-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.module/jackson-module-jaxb-annotations/2.12.4/5e43703aae1a9843dfd7df0a0ad6cbfedcaff67f/jackson-module-jaxb-annotations-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-core/2.12.4/6a1bd259b6c4e3f9219ec8ec0be55ed11eed0c/jackson-core-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.datatype/jackson-datatype-jsr310/2.12.4/b1174c05d4ded121a7eaeed3f148709f9585b981/jackson-datatype-jsr310-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-databind/2.12.4/69206e02e6a696034f06a59d3ddbfbba5a4cd81/jackson-databind-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.jackson.core/jackson-annotations/2.12.4/752cf9a2562ac2c012e48057e3a4c17dad66c66e/jackson-annotations-2.12.4.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty/1.0.10/c54f6c92628f396076f74945424319746bbb37f8/reactor-netty-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty-http-brave/1.0.10/30aebc029ee25f5e0c2d1c0a798b17e926f61b8/reactor-netty-http-brave-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty-http/1.0.10/f34196d8778243d2ac4a4ae7cc74f110d1073803/reactor-netty-http-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor.netty/reactor-netty-core/1.0.10/dcb5d53dc07a6f660060f0b55c40a1e9944a6fe1/reactor-netty-core-1.0.10.jar:/root/.gradle/caches/modules-2/files-2.1/io.projectreactor/reactor-core/3.4.9/820332aa7b0fe3a8dfe14f58fc16e49ad178291/reactor-core-3.4.9.jar:/root/.gradle/caches/modules-2/f
iles-2.1/io.netty/netty-tcnative-boringssl-static/2.0.40.Final/6b73a163c13ed76921892d28eb81235f4b41e40a/netty-tcnative-boringssl-static-2.0.40.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler-proxy/4.1.67.Final/2528292c49bd15f1b328dda25a7a75744d6f0991/netty-handler-proxy-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http2/4.1.67.Final/e2a0c27f396035fab7d932031d6f337244369495/netty-codec-http2-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-http/4.1.67.Final/e282137917c67332fa9a414df89f89a93487aede/netty-codec-http-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver-dns-native-macos/4.1.66.Final/8ca314d87b202a4dd94d810d2385b22555dd801c/netty-resolver-dns-native-macos-4.1.66.Final-osx-x86_64.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver-dns/4.1.66.Final/7b74815fb1403e5747c872c6eee2a07e7a700d30/netty-resolver-dns-4.1.66.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-handler/4.1.67.Final/62640a6524e1c08d6e8ac06556892b5e1362392f/netty-handler-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-epoll/4.1.67.Final/ff955604e1edb8b13dc855ccbc84acb5fc1d989/netty-transport-native-epoll-4.1.67.Final-linux-x86_64.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-kqueue/4.1.67.Final/7a88ccb4237d1ff091be544860ae729f5b568ac/netty-transport-native-kqueue-4.1.67.Final-osx-x86_64.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport-native-unix-common/4.1.67.Final/c6cca7ba897e1ac65e0742654ed10064401dbd2f/netty-transport-native-unix-common-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-socks/4.1.67.Final/a4fe0487451dc262daaa024224fa207d088b8687/netty-codec-socks-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec-dns/4.1.66.Final/390ff96a0e1f7c626cb52c119a1a1dfd0784d193/netty-codec-dns-4.1.
66.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-codec/4.1.67.Final/292e818822a4fca4e9f3ad711cdd46362f69590a/netty-codec-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-transport/4.1.67.Final/e8b502618f960cb1f8cdb7dca92d0878576ad7c6/netty-transport-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-buffer/4.1.67.Final/4b6217b05792fe6e2bd080c054138b2c20ae1b37/netty-buffer-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/com.fasterxml.woodstox/woodstox-core/6.2.4/16b9f8ab972e67eb21872ea2c40046249d543989/woodstox-core-6.2.4.jar:/root/.gradle/caches/modules-2/files-2.1/org.codehaus.woodstox/stax2-api/4.2.1/a3f7325c52240418c2ba257b103c3c550e140c83/stax2-api-4.2.1.jar:/root/.gradle/caches/modules-2/files-2.1/org.reactivestreams/reactive-streams/1.0.3/d9fb7a7926ffa635b3dcaa5049fb2bfa25b3e7d0/reactive-streams-1.0.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-resolver/4.1.67.Final/bb9041d5f85f9a6270f5378d0becd243de8cfeac/netty-resolver-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/io.netty/netty-common/4.1.67.Final/a86a19033588a07e159ff95818845b1ec86b2281/netty-common-4.1.67.Final.jar:/root/.gradle/caches/modules-2/files-2.1/jakarta.xml.bind/jakarta.xml.bind-api/2.3.2/8d49996a4338670764d7ca4b85a1c4ccf7fe665d/jakarta.xml.bind-api-2.3.2.jar:/root/.gradle/caches/modules-2/files-2.1/jakarta.activation/jakarta.activation-api/1.2.1/562a587face36ec7eff2db7f2fc95425c6602bc1/jakarta.activation-api-1.2.1.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.brave/brave-instrumentation-http/5.13.3/fe70809f06c786b171e4597747c6f5a8c911fe5f/brave-instrumentation-http-5.13.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.brave/brave/5.13.3/2d8ecb2352108f95b3e66ccc8027e9343ad9852c/brave-5.13.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.reporter2/zipkin-reporter-brave/2.16.3/4d5017d71e4de139b6a31612cfd837b7c71d288c/zipkin-reporter-brave-2.16.3.jar:/root/.gradle/caches/
modules-2/files-2.1/io.zipkin.reporter2/zipkin-reporter/2.16.3/7e43d8be3376d305c355d969e8b9f3a62221380/zipkin-reporter-2.16.3.jar:/root/.gradle/caches/modules-2/files-2.1/io.zipkin.zipkin2/zipkin/2.23.2/1c2c7f2e91a3749311f7f75d0535d14ba2e2f6/zipkin-2.23.2.jar:build/classes/java/main:src/main/resources scripts/rerun_jobs.py -i /tmp/win10-bucket-ops-temp_rebalance_magma.ini -p rerun=False,get-cbcollect-info=True -c conf/magma/dgm_collections_1_percent_dgm.conf -m rest
> Task :compileJava
warning: unknown enum constant When.MAYBE
  reason: class file for javax.annotation.meta.When not found
Note: Some input files use or override a deprecated API.
Note: Recompile with -Xlint:deprecation for details.
Note: Some input files use unchecked or unsafe operations.
Note: Recompile with -Xlint:unchecked for details.
1 warning
> Task :testrunner
Filename: conf/magma/dgm_collections_1_percent_dgm.conf
Prefix: bucket_collections.collections_rebalance.CollectionsRebalance
Global Test input params: {'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'get-cbcollect-info': 'True', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'num_nodes': 7, 'rerun': 'False', 'spec': 'dgm_collections_1_percent_dgm'}
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_1
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery,nodes_init=5,nodes_failover=2,step_count=1,recovery_type=delta,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=failover_set0'
Test Input params: {'doc_size':
'768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'nodes_failover': '2', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'step_count': '1', 'recovery_type': 'delta', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_1', 'nodes_init': '5', 'GROUP': 'failover_set0', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 1, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_graceful_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 06:16:59,411 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 06:16:59,938 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 06:17:00,390 | test | INFO | MainThread | [basetestcase:initialize_cluster:413] Initializing cluster : C1
Got this failure java.net.ConnectException: Connection refused: /172.23.105.164:8091 during connect (<_realsocket at 0xa type=client open_count=1 channel=[id: 0x3e30f022, 0.0.0.0/0.0.0.0:45196] timeout=300.0>)
2022-02-25 06:17:03,648 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.105.164:8091/nodes/self error [Errno 111] Connection refused
Got this failure java.net.ConnectException: Connection refused: /172.23.105.206:8091 during connect (<_realsocket at 0xe type=client open_count=1 channel=[id: 0x036b06b2, 0.0.0.0/0.0.0.0:57808] timeout=300.0>)
2022-02-25 06:17:15,311 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.105.206:8091/nodes/self error [Errno 111] Connection refused
Got this failure java.net.ConnectException: Connection refused: /172.23.106.177:8091 during connect (<_realsocket at 0x12 type=client open_count=1 channel=[id: 0xa6dfea59, 0.0.0.0/0.0.0.0:57996] timeout=300.0>)
2022-02-25 06:17:26,710 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.106.177:8091/nodes/self error [Errno 111] Connection refused
Got this failure java.net.ConnectException: Connection refused: /172.23.100.34:8091 during connect (<_realsocket at 0x16 type=client open_count=1 channel=[id: 0x47ac5ff5, 0.0.0.0/0.0.0.0:35505] timeout=300.0>)
2022-02-25 06:17:37,805 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.100.34:8091/nodes/self error [Errno 111] Connection refused
Got this failure java.net.ConnectException: Connection refused: /172.23.100.35:8091 during connect (<_realsocket at 0x1b type=client open_count=1 channel=[id: 0xb5269a51, 0.0.0.0/0.0.0.0:55527] timeout=300.0>)
2022-02-25 06:17:50,796 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.100.35:8091/nodes/self error [Errno 111] Connection refused
Got this failure java.net.ConnectException: Connection refused: /172.23.100.36:8091 during connect (<_realsocket at 0x20 type=client open_count=1 channel=[id: 0x9bf8e9f7, 0.0.0.0/0.0.0.0:56444] timeout=300.0>)
2022-02-25 06:18:04,036 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.100.36:8091/nodes/self error [Errno 111] Connection refused
Got this failure java.net.ConnectException: Connection refused: /172.23.100.37:8091 during connect (<_realsocket at 0x24 type=client open_count=1 channel=[id: 0x8c4ef0a0, 0.0.0.0/0.0.0.0:50923] timeout=300.0>)
2022-02-25 06:18:16,170 | test | ERROR | MainThread | [rest_client:_http_request:828] Socket error while connecting to http://172.23.100.37:8091/nodes/self error [Errno 111] Connection refused
[each node's connection-refused error repeated at roughly 2 s intervals until the node came up; duplicate retry records trimmed]
2022-02-25 06:18:59,581 | test | WARNING | MainThread | [basetestcase:_initialize_nodes:709] RAM quota was defined less than 100 MB:
2022-02-25 06:19:12,091 | test | INFO | MainThread | [basetestcase:initialize_cluster:438] Cluster C1 initialized
2022-02-25 06:19:12,092 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup finished for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 06:19:12,111 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= ClusterSetup setup started for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 06:19:33,938 | test | INFO | pool-3-thread-8 | [table_view:display:72] Rebalance Overview
+----------------+----------+-----------------------+----------------+--------------+
| Nodes          | Services | Version               | CPU            | Status       |
+----------------+----------+-----------------------+----------------+--------------+
| 172.23.105.164 | kv       | 7.1.0-2385-enterprise | 0.626095667418 | Cluster node |
| 172.23.105.206 | None     |                       |                | <--- IN ---  |
| 172.23.106.177 | None     |                       |                | <--- IN ---  |
| 172.23.100.34  | None     |                       |                | <--- IN ---  |
| 172.23.100.35  | None     |                       |                | <--- IN ---  |
+----------------+----------+-----------------------+----------------+--------------+
2022-02-25 06:19:33,950 | test | INFO | pool-3-thread-8 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 06:19:48,986 | test | INFO | pool-3-thread-8 | [task:check:384] Rebalance - status: none, progress: 100
2022-02-25 06:19:49,000 | test | INFO | pool-3-thread-8 | [task:check:443] Rebalance completed with progress: 100% in 15.0629999638 sec
2022-02-25 06:19:49,121 | test | INFO | MainThread | [table_view:display:72] Cluster statistics
+----------------+----------+-----------------+-----------+-----------+----------------------+------------------+-----------------------+
| Node           | Services | CPU_utilization | Mem_total | Mem_free  | Swap_mem_used        | Active / Replica | Version               |
+----------------+----------+-----------------+-----------+-----------+----------------------+------------------+-----------------------+
| 172.23.100.34  | kv       | 0.187711175072  | 11.45 GiB | 10.67 GiB | 0.0 Byte / 3.50 GiB  | 0 / 0            | 7.1.0-2385-enterprise |
| 172.23.105.206 | kv       | 0.713570355533  | 11.45 GiB | 10.69 GiB | 0.0 Byte / 3.50 GiB  | 0 / 0            | 7.1.0-2385-enterprise |
| 172.23.106.177 | kv       | 0.639017666959  | 11.45 GiB | 10.70 GiB | 23.90 MiB / 3.50 GiB | 0 / 0            | 7.1.0-2385-enterprise |
| 172.23.100.35  | kv       | 0.13770655984   | 11.45 GiB | 10.64 GiB | 0.0 Byte / 3.50 GiB  | 0 / 0            | 7.1.0-2385-enterprise |
| 172.23.105.164 | kv       | 0.425958406414  | 11.45 GiB | 10.70 GiB | 0.0 Byte / 3.50 GiB  | 0 / 0            | 7.1.0-2385-enterprise |
+----------------+----------+-----------------+-----------+-----------+----------------------+------------------+-----------------------+
2022-02-25 06:19:49,121 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= ClusterSetup setup complete for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 06:19:49,122 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionBase setup started for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 06:19:49,289 | test | INFO | MainThread | [bucket_ready_functions:create_buckets_using_json_data:1894] Creating required buckets from template
2022-02-25 06:19:49,474 | test | INFO | MainThread | [bucket_ready_functions:wait_for_collection_creation_to_complete:2484] Waiting for all collections to be created
2022-02-25 06:19:49,657 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+-----------------+----------+------------+-----+-------+-----------+----------+-----------+-----+
| Bucket  | Type      | Storage Backend | Replicas | Durability | TTL | Items | RAM Quota | RAM Used | Disk Used | ARR |
+---------+-----------+-----------------+----------+------------+-----+-------+-----------+----------+-----------+-----+
| default | couchbase | magma           | 3        | none       | 0   | 0     | 3.75 GiB  | 0.0 Byte | 0.0 Byte  | 100 |
+---------+-----------+-----------------+----------+------------+-----+-------+-----------+----------+-----------+-----+
2022-02-25 06:19:49,657 | test | INFO | MainThread | [collections_base:collection_setup:166] Creating required SDK clients for client_pool
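The connection-refused records during cluster init above come from the framework repeatedly probing each node's `:8091/nodes/self` REST endpoint until ns_server starts accepting connections. A minimal sketch of that poll-until-ready pattern (the function name and arguments here are illustrative, not TAF's actual `rest_client` API):

```python
import time


def wait_until_ready(check, retries=5, delay=2.0):
    """Call `check` until it succeeds or `retries` attempts are used up.

    `check` is expected to raise an OSError (e.g. [Errno 111]
    Connection refused) while the service is still starting, and to
    return normally once the endpoint is reachable.
    """
    last_err = None
    for _ in range(retries):
        try:
            return check()
        except OSError as err:
            last_err = err
            time.sleep(delay)  # the log shows roughly 2 s between retries
    raise RuntimeError("endpoint never became reachable: %s" % last_err)
```

In the log each node needed several attempts before `Cluster C1 initialized` was finally reported.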
2022-02-25 06:19:53,513 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4861] Performing scope/collection specific operations
2022-02-25 06:19:53,526 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4951] Done Performing scope/collection specific operations
2022-02-25 07:09:00,717 | test | INFO | pool-3-thread-11 | [table_view:display:72] Ops trend for bucket 'default'
[Ops trend sparkline table: one row per sample with Min / Trend / Max columns; throughput against 'default' ranged from roughly 28,686 to 76,583 ops and drifted downward over the run; raw sparkline rows trimmed]
.............................................X | 35442.300 | | 29907.500 | ......................................*******....*******X | 33204.300 | | 32923.800 | .........................................********...............*******X | 44388.000 | | 41245.600 | ....................................................********....*******X | 43762.300 | | 31586.000 | ........................................*******X | 31586.000 | | 26873.100 | ..................................********.....********.......*******X | 36463.500 | | 33795.500 | ...........................................*******X | 33795.500 | | 32502.900 | .........................................*******X | 32502.900 | | 31654.600 | ........................................********.....***************X | 35724.300 | | 33170.800 | ..........................................*******X | 33170.800 | | 29584.500 | .....................................*******X | 29584.500 | | 28358.200 | ....................................********......********....*******..*........*******X | 43679.300 | | 39173.400 | ..................................................********.******X | 40371.100 | | 35031.900 | ............................................*******X | 35031.900 | | 32306.600 | .........................................********..********...********..*******X | 38049.000 | | 31689.600 | ........................................********..........*******X | 39065.200 | | 34708.000 | ............................................*******X | 34708.000 | | 33764.600 | ...........................................***************X | 33868.500 | | 31104.400 | .......................................*******X | 31104.400 | | 27919.600 | ...................................********........********.......*******X | 39298.700 | | 34607.500 | ............................................********.....*******X | 38711.500 | | 35243.800 | .............................................***************X | 35879.200 | | 32396.900 | 
.........................................********.......*******X | 37592.400 | | 35896.800 | .............................................********..........*******X | 43048.700 | | 32185.100 | .........................................********......****************.*******X | 37713.800 | | 33456.500 | ..........................................*******X | 33456.500 | | 29776.700 | .....................................********.....*******.........*******X | 40447.900 | | 38462.000 | .................................................********.*******X | 39401.200 | | 34850.500 | ............................................********........*******X | 41237.100 | | 31475.900 | ........................................********.....********......*******X | 40375.700 | | 39447.000 | ..................................................*******X | 39447.000 | | 38737.200 | .................................................*******X | 38737.200 | | 34879.100 | ............................................********...********.*******X | 37718.100 | | 29980.700 | ......................................********........*******X | 36301.000 | | 35487.200 | .............................................****************.............*******X | 45725.800 | | 31977.900 | ........................................*******X | 31977.900 | | 29270.000 | .....................................********......********...******X | 36198.800 | | 35164.300 | ............................................X | 35164.300 | | 33628.300 | ..........................................********.*******.......********....*******X | 42323.400 | | 37059.000 | ...............................................*******X | 37059.000 | | 35479.100 | .............................................********....*******X | 38915.900 | | 37840.000 | ................................................*******X | 37840.000 | | 34275.100 | ...........................................********.********.****************..*******X | 37040.100 | | 31030.100 | 
.......................................********...*******X | 33279.200 | | 31984.600 | ........................................********..********.....*******X | 36962.900 | | 34504.500 | ............................................*******X | 34504.500 | | 30332.900 | ......................................********.********.....*******X | 35084.500 | | 33104.800 | ..........................................********...********...........*******X | 43748.600 | | 32904.500 | .........................................********.......******X | 37807.300 | | 36914.200 | ...............................................X | 36914.200 | | 33981.800 | ...........................................*******.......********...*******X | 41709.300 | | 35813.900 | .............................................********.*******X | 36060.200 | | 34108.900 | ...........................................********..***************X | 35848.800 | | 34208.500 | ...........................................*******X | 34208.500 | | 33149.400 | ..........................................*******X | 33149.400 | | 32443.300 | .........................................*******X | 32443.300 | | 31715.700 | ........................................********......*******X | 36279.800 | | 31092.400 | .......................................********..*******...*...******X | 36908.000 | | 35703.400 | .............................................X | 35703.400 | | 31033.200 | .......................................*******X | 31033.200 | | 27863.300 | ...................................********......*******X | 32166.300 | | 29145.300 | .....................................********......********......******X | 38552.500 | | 38540.800 | .................................................***************X | 39012.000 | | 34915.600 | ............................................*******X | 34915.600 | | 33288.700 | ..........................................********..********...*******X | 37291.500 | | 30536.800 | 
......................................********....********.********.*******X | 34541.800 | | 33960.100 | ...........................................*******X | 33960.100 | | 30539.300 | ......................................********.********...***************X | 33610.000 | | 24585.600 | ...............................*******X | 24585.600 | | 18840.700 | .......................*******X | 18840.700 | | 10201.700 | ............*****X | 10201.700 | +-----------+-----------------------------------------------------------------------------------------------------------------------------+-----------+ 2022-02-25 07:09:00,769 | test | INFO | MainThread | [table_view:display:72] Cluster statistics +----------------+----------+-----------------+-----------+----------+----------------------+---------------------+-----------------------+ | Node | Services | CPU_utilization | Mem_total | Mem_free | Swap_mem_used | Active / Replica | Version | +----------------+----------+-----------------+-----------+----------+----------------------+---------------------+-----------------------+ | 172.23.100.34 | kv | 59.4822089019 | 11.45 GiB | 9.24 GiB | 0.0 Byte / 3.50 GiB | 24425951 / 73278558 | 7.1.0-2385-enterprise | | 172.23.105.206 | kv | 37.1611253197 | 11.45 GiB | 9.46 GiB | 0.0 Byte / 3.50 GiB | 24419383 / 73255750 | 7.1.0-2385-enterprise | | 172.23.106.177 | kv | 56.2849162011 | 11.45 GiB | 9.46 GiB | 23.90 MiB / 3.50 GiB | 24420383 / 73141468 | 7.1.0-2385-enterprise | | 172.23.100.35 | kv | 63.3307868602 | 11.45 GiB | 9.19 GiB | 0.0 Byte / 3.50 GiB | 24307061 / 73159216 | 7.1.0-2385-enterprise | | 172.23.105.164 | kv | 40.9050350542 | 11.45 GiB | 9.36 GiB | 0.0 Byte / 3.50 GiB | 24425561 / 73161091 | 7.1.0-2385-enterprise | +----------------+----------+-----------------+-----------+----------+----------------------+---------------------+-----------------------+ 2022-02-25 07:09:09,918 | test | INFO | MainThread | 
[bucket_ready_functions:validate_docs_per_collections_all_buckets:4615] Validating collection stats and item counts
2022-02-25 07:09:17,802 | test | ERROR | MainThread | [bucket_ready_functions:validate_manifest_uid:4478] Bucket UID mismatch. Expected: 0, Actual: 1
2022-02-25 07:09:17,802 | test | WARNING | MainThread | [bucket_ready_functions:validate_docs_per_collections_all_buckets:4632] Bucket manifest UID mismatch!
2022-02-25 07:09:22,315 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+-----------------+----------+------------+-----+-----------+-----------+----------+------------+---------------+
| Bucket  | Type      | Storage Backend | Replicas | Durability | TTL | Items     | RAM Quota | RAM Used | Disk Used  | ARR           |
+---------+-----------+-----------------+----------+------------+-----+-----------+-----------+----------+------------+---------------+
| default | couchbase | magma           | 3        | none       | 0   | 122100000 | 3.75 GiB  | 2.72 GiB | 255.22 GiB | 1.44628828829 |
+---------+-----------+-----------------+----------+------------+-----+-----------+-----------+----------+------------+---------------+
2022-02-25 07:09:22,315 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionBase setup complete for test #1 test_data_load_collections_with_graceful_failover_recovery =========
2022-02-25 07:09:25,210 | test | INFO | MainThread | [collections_rebalance:load_collections_with_rebalance:897] Doing collection data load during graceful_failover_recovery
2022-02-25 07:09:25,211 | test | INFO | MainThread | [collections_rebalance:rebalance_operation:358] Starting rebalance operation of type : graceful_failover_recovery
2022-02-25 07:09:25,213 | test | INFO | MainThread | [collections_rebalance:rebalance_operation:699] failing over nodes [ip:172.23.100.34 port:8091 ssh_username:root]
2022-02-25 07:09:40,404 | test | INFO | pool-3-thread-30 | [rest_client:monitorRebalance:1575] Rebalance done.
Taken 15.0880000591 seconds to complete 2022-02-25 07:09:40,410 | test | INFO | pool-3-thread-30 | [common_lib:sleep:22] Sleep 5 seconds. Reason: Wait after rebalance complete 2022-02-25 07:14:45,411 | test | INFO | MainThread | [collections_rebalance:data_load_after_failover:327] Starting a sync data load after failover 2022-02-25 07:14:45,417 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4861] Performing scope/collection specific operations 2022-02-25 07:14:45,594 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4951] Done Performing scope/collection specific operations 2022-02-25 07:17:39,250 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11101 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_KMkIsgTwUxKmqSTCiP5U_j%J5KEK8Fyx6XMASh5qpP5p5muy4GJ1Io2HJE-gd1ausVIV0lb-r-49-318000_mxFhART7TJ6sH%HVzL0dmLfhEvuw2NBu1bejLzDZxMISnYw8V5Rurg3xSehZpxGXI0UD7zVPukKlucDy9sGPQGQ5qyx2yD_jHzzfmQC4pG58IaW-fAVLcCPFsZGe6_LcXWr1kUFy-49-318000_ttl=0_read__0_610500_1645802085.67 2022-02-25 07:17:40,996 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11577 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default__default__default_ttl=0_read__0_610500_1645802085.67 2022-02-25 07:17:41,002 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11973 docs from 0 to 610500 of thread_name 
LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_OW7buhQPReEpZaBFN0FKyCnq9Ath-%oCY-eSQj0yly_zJrTAe_BWpYPdKVDNY_VZU7gLebFqK5EK7kKMrZs1OmFCnQGU_Kd0m8JUfkaU%-wQv9rJjXdhFG9oQWlOKcR0U7N3sphp_uRc8tXc6DiB%VnClJ_sidImeJichmU5zrUq0D%YOhAZvrBl2pVrn9Qpff4F3A_t2W52QFyyeGOA9l0N2EPRgkWmHta-49-315000_4sFTemUJM07L3_e710gloMAHX3pNBU53p6ay39k0ofdO0Zy1oWiiY%Rpb5C3Hd-oyebseNos_mg%G0VsI21vpWUHbafb-8psiJ667-V5NM2wolkbyOvnSLAq_p_8E2EQcIjAG-y4p0xXaInQf8ybn9z9aN-EqyjD4C8dP%ftmtkgar-49-315000_ttl=0_read__0_610500_1645802085.67 2022-02-25 07:17:41,005 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11475 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_Wz0wjvwVTPi5v-dDb-49-318000_YGdwMOQlhlHZoS6H-ODQaT-_Ykh%A0_SVUCUaMDoydwMYWlbXKcZo_vX8tHwEPqqa_f-oE%5Bjm170I_K489P5lBKAy9U66Wp0ka7J3huTk9MYEdvjptGRRgb0tMZMlxcLogSt74rHu7NGghdMxh6btppSJlBO1YUdTnMlbZ_UQzSQB4J3-49-319000_ttl=0_read__0_610500_1645802085.67 2022-02-25 07:17:41,009 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11231 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_mqYpZfiMgxLhCyDl64ncBnEtU2SF1sVJBFetmCeMsPzMmfWy%fWerIqY-SnLHJgLDxrK_--Bnyirq-49-320000_64p7CaHQPyPslLkFqV7fuI-ssc1Um%6MBBzMlFczwI-JPbooJHKxtLg6VqdLkFYSiDe_oeUal9FoPhhRjiC82q6EEL_T-yoLDerfXJ7b%3tRndLqWJmGuT3LkjKQIry0StbNc1CTGlt3iElXPuJ-49-320000_ttl=0_read__0_610500_1645802085.67 2022-02-25 07:17:41,013 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11267 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_EDFqdxaZS-NNULc5Q-eyXitvgSWn59pRTXJE8tSXy_Be85tKrOmODdUQ496N_Rlp2o13tk7A8BAZ0dZTzMMox%GrUntpK1SoFowd2X1iY3TF-fyzvY-DXi9hNTS5n5mbmyxTa30-my%XlA8G6JakBRJrFD-fLJ%MQROdFPAoSPTDCdj45lyMdv3LYmPNfahSA-49-317000_0fSNocKXUl__KLVDQFb-49-318000_ttl=0_read__0_610500_1645802085.68 2022-02-25 07:17:41,016 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to 
load 11221 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_arUbz4EFiIcxlYQE-t33Y8qJSEJGYjlCbUoEc2LCpJK-czdFJcXbf-1a5ro9P9oAOMys22A1PzTvNsmTFbgvocFAv15roNTK_XsW-Z9m6Xoqr0gZN_E6IUcMxJUImCHj6ErNPZBPu0aw25TL0igagd1__oNA%BY6Xo24GnavWPqryWIv2sk-N7HrNWB0YgSFs8hdblP-49-316000_Og6Ns-TpDflMGxiNG1MRWOKc3tGHGon2qKvXLVfiZC1GxMs38g7jCnni-DElUm%GwrqkGj0K7C8fJ90g_HM4GQG1c4TzKEnmZX9Xwj9%WoPEdkxKw4KJNsmeoGEikve7WGQ-49-316000_ttl=0_read__0_610500_1645802085.68 2022-02-25 07:24:53,875 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 10 docs from 0 to 1221000 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default__default__default_ttl=0_update__0_1221000_1645802085.68 2022-02-25 07:24:53,878 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 11371 docs from 0 to 610500 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_F2IXD9d94QIljOt_eVPAOO3T3-y84AvgBdU6ne2hL8-sWR6yhpqNqG1aA0TKgFDAEkgTted7swz6_ybSlW-DIwWgBlE0PnyMCxvyTM8QfvE-j%MLqLCUqAg_EivKcH4isFoRkAdiRuuGBeTYZDu-_cQ7XAOIpTi9oHyDKGXB9vIkNUP41gpDq59F3sGnRW2r_A7TCvQX52ICt6-qAlN7Kz7BRBL1CVv-49-311000_teKP2CHXO0pVsdeBpB1igx8hRkJ09Ps55Q%lBlz75c84m7KGTbTsb-acC9sxJvzMsevIX36lGZw5AYB_4vn4_atPucbtQYChsZfl7%3bvNg6lSePgRuQQeWCDH7i9KIoA3d-u0gLwyuB%lXbLMr%7Jit-CVF5OVGXgXTWoOedQC1pgqUul-u6un--49-313000_ttl=0_read__0_610500_1645802085.68 2022-02-25 07:25:17,088 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 4 docs from 0 to 1221000 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_arUbz4EFiIcxlYQE-t33Y8qJSEJGYjlCbUoEc2LCpJK-czdFJcXbf-1a5ro9P9oAOMys22A1PzTvNsmTFbgvocFAv15roNTK_XsW-Z9m6Xoqr0gZN_E6IUcMxJUImCHj6ErNPZBPu0aw25TL0igagd1__oNA%BY6Xo24GnavWPqryWIv2sk-N7HrNWB0YgSFs8hdblP-49-316000_Og6Ns-TpDflMGxiNG1MRWOKc3tGHGon2qKvXLVfiZC1GxMs38g7jCnni-DElUm%GwrqkGj0K7C8fJ90g_HM4GQG1c4TzKEnmZX9Xwj9%WoPEdkxKw4KJNsmeoGEikve7WGQ-49-316000_ttl=0_update__0_1221000_1645802085.68 2022-02-25 
07:27:24,678 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 15 docs from 12210000 to 13431000 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default__default__default_ttl=0_create__12210000_13431000_1645802085.68 2022-02-25 07:27:24,680 | test | ERROR | pool-3-thread-1 | [task:execute_tasks:3922] Failed to load 9 docs from 0 to 1221000 of thread_name LoadDocs_MutateDocsFromSpecTask_1645802085.65_default_F2IXD9d94QIljOt_eVPAOO3T3-y84AvgBdU6ne2hL8-sWR6yhpqNqG1aA0TKgFDAEkgTted7swz6_ybSlW-DIwWgBlE0PnyMCxvyTM8QfvE-j%MLqLCUqAg_EivKcH4isFoRkAdiRuuGBeTYZDu-_cQ7XAOIpTi9oHyDKGXB9vIkNUP41gpDq59F3sGnRW2r_A7TCvQX52ICt6-qAlN7Kz7BRBL1CVv-49-311000_teKP2CHXO0pVsdeBpB1igx8hRkJ09Ps55Q%lBlz75c84m7KGTbTsb-acC9sxJvzMsevIX36lGZw5AYB_4vn4_atPucbtQYChsZfl7%3bvNg6lSePgRuQQeWCDH7i9KIoA3d-u0gLwyuB%lXbLMr%7Jit-CVF5OVGXgXTWoOedQC1pgqUul-u6un--49-313000_ttl=0_update__0_1221000_1645802085.69 2022-02-25 07:27:39,471 | test | INFO | pool-3-thread-2 | [table_view:display:72] Ops trend for bucket 'default' +-----------+-----------------------------------------------------------------------------------------------------------------------------+-----------+ | Min | Trend | Max | +-----------+-----------------------------------------------------------------------------------------------------------------------------+-----------+ | 0.000 | ******.............................................................................................*******X | 76273.100 | | 59647.100 | ........................................................................*******X | 59647.100 | | 51819.300 | ..............................................................******X | 51819.300 | | 35607.100 | ..........................................X | 35607.100 | | 30707.000 | ....................................******X | 30707.000 | | 29584.200 | ...................................********..........*******X | 37784.800 | | 32010.000 | 
......................................********..*******....*******X | 37475.800 |
| (remaining ops-trend rows elided; sampled throughput ranged from ~510 to ~81,707 ops/s through the failover window) |
+-----------+-----------------------------------------------------------------------------------------------------------------------------+-----------+
2022-02-25 07:27:53,562 | test | WARNING | MainThread | [rest_client:get_nodes:1847] 172.23.100.34 - Node not part of cluster inactiveFailed
2022-02-25 07:27:53,582 | test | WARNING | MainThread | [rest_client:get_nodes:1847] 172.23.100.34 - Node not
part of cluster inactiveFailed
2022-02-25 07:27:56,178 | test | INFO | pool-3-thread-3 | [table_view:display:72] Rebalance Overview
+----------------+----------+-----------------------+----------------+--------------+
| Nodes          | Services | Version               | CPU            | Status       |
+----------------+----------+-----------------------+----------------+--------------+
| 172.23.100.34  | kv       | 7.1.0-2385-enterprise | 0.212765957447 | Cluster node |
| 172.23.105.206 | kv       | 7.1.0-2385-enterprise | 24.8571791291  | Cluster node |
| 172.23.106.177 | kv       | 7.1.0-2385-enterprise | 31.6183943089  | Cluster node |
| 172.23.100.35  | kv       | 7.1.0-2385-enterprise | 54.952769977   | Cluster node |
| 172.23.105.164 | kv       | 7.1.0-2385-enterprise | 35.2156456418  | Cluster node |
+----------------+----------+-----------------------+----------------+--------------+
2022-02-25 07:27:56,187 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:28:06,210 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:28:16,234 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:28:26,269 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:28:36,298 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:28:46,325 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:28:56,355 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:29:06,384 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:29:16,413 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:29:26,441 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:29:36,470 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:29:46,497 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:29:56,526 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:30:06,555 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:30:16,582 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:30:26,611 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:30:36,640 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:30:46,667 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:30:56,697 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:31:06,724 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:31:16,753 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:31:26,782 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:31:36,812 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:31:46,839 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:31:56,868 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
2022-02-25 07:32:06,897 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.37
2022-02-25 07:32:16,924 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status:
running, progress: 2.38 2022-02-25 07:32:26,951 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 24.52 2022-02-25 07:32:41,989 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: none, progress: 100 2022-02-25 07:32:42,003 | test | INFO | pool-3-thread-3 | [task:check:443] Rebalance completed with progress: 100% in 285.825999975 sec 2022-02-25 07:32:42,003 | test | INFO | MainThread | [collections_rebalance:rebalance_operation:699] failing over nodes [ip:172.23.100.35 port:8091 ssh_username:root] 2022-02-25 07:32:57,187 | test | INFO | pool-3-thread-6 | [rest_client:monitorRebalance:1575] Rebalance done. Taken 15.0840001106 seconds to complete 2022-02-25 07:32:57,191 | test | INFO | pool-3-thread-6 | [common_lib:sleep:22] Sleep 5 seconds. Reason: Wait after rebalance complete 2022-02-25 07:38:02,194 | test | INFO | MainThread | [collections_rebalance:data_load_after_failover:327] Starting a sync data load after failover 2022-02-25 07:38:02,194 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4861] Performing scope/collection specific operations 2022-02-25 07:38:02,328 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4951] Done Performing scope/collection specific operations 2022-02-25 07:41:15,937 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 10712 docs from 0 to 671550 of thread_name LoadDocs_MutateDocsFromSpecTask_1645803482.38_default_EDFqdxaZS-NNULc5Q-eyXitvgSWn59pRTXJE8tSXy_Be85tKrOmODdUQ496N_Rlp2o13tk7A8BAZ0dZTzMMox%GrUntpK1SoFowd2X1iY3TF-fyzvY-DXi9hNTS5n5mbmyxTa30-my%XlA8G6JakBRJrFD-fLJ%MQROdFPAoSPTDCdj45lyMdv3LYmPNfahSA-49-317000_0fSNocKXUl__KLVDQFb-49-318000_ttl=0_read__0_671550_1645803482.39 2022-02-25 07:41:15,941 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 11543 docs from 0 to 671550 of thread_name 
LoadDocs_MutateDocsFromSpecTask_1645803482.38_default__default__default_ttl=0_read__0_671550_1645803482.39
2022-02-25 07:41:15,946 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 11647 docs from 0 to 671550 of thread_name LoadDocs_MutateDocsFromSpecTask_1645803482.38_default_mqYpZfiMgxLhCyDl64ncBnEtU2SF1sVJBFetmCeMsPzMmfWy%fWerIqY-SnLHJgLDxrK_--Bnyirq-49-320000_64p7CaHQPyPslLkFqV7fuI-ssc1Um%6MBBzMlFczwI-JPbooJHKxtLg6VqdLkFYSiDe_oeUal9FoPhhRjiC82q6EEL_T-yoLDerfXJ7b%3tRndLqWJmGuT3LkjKQIry0StbNc1CTGlt3iElXPuJ-49-320000_ttl=0_read__0_671550_1645803482.39
2022-02-25 07:41:19,625 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 11121 docs from 0 to 671550 of thread_name LoadDocs_MutateDocsFromSpecTask_1645803482.38_default_OW7buhQPReEpZaBFN0FKyCnq9Ath-%oCY-eSQj0yly_zJrTAe_BWpYPdKVDNY_VZU7gLebFqK5EK7kKMrZs1OmFCnQGU_Kd0m8JUfkaU%-wQv9rJjXdhFG9oQWlOKcR0U7N3sphp_uRc8tXc6DiB%VnClJ_sidImeJichmU5zrUq0D%YOhAZvrBl2pVrn9Qpff4F3A_t2W52QFyyeGOA9l0N2EPRgkWmHta-49-315000_4sFTemUJM07L3_e710gloMAHX3pNBU53p6ay39k0ofdO0Zy1oWiiY%Rpb5C3Hd-oyebseNos_mg%G0VsI21vpWUHbafb-8psiJ667-V5NM2wolkbyOvnSLAq_p_8E2EQcIjAG-y4p0xXaInQf8ybn9z9aN-EqyjD4C8dP%ftmtkgar-49-315000_ttl=0_read__0_671550_1645803482.39
2022-02-25 07:41:19,630 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 11232 docs from 0 to 671550 of thread_name LoadDocs_MutateDocsFromSpecTask_1645803482.38_default_F2IXD9d94QIljOt_eVPAOO3T3-y84AvgBdU6ne2hL8-sWR6yhpqNqG1aA0TKgFDAEkgTted7swz6_ybSlW-DIwWgBlE0PnyMCxvyTM8QfvE-j%MLqLCUqAg_EivKcH4isFoRkAdiRuuGBeTYZDu-_cQ7XAOIpTi9oHyDKGXB9vIkNUP41gpDq59F3sGnRW2r_A7TCvQX52ICt6-qAlN7Kz7BRBL1CVv-49-311000_teKP2CHXO0pVsdeBpB1igx8hRkJ09Ps55Q%lBlz75c84m7KGTbTsb-acC9sxJvzMsevIX36lGZw5AYB_4vn4_atPucbtQYChsZfl7%3bvNg6lSePgRuQQeWCDH7i9KIoA3d-u0gLwyuB%lXbLMr%7Jit-CVF5OVGXgXTWoOedQC1pgqUul-u6un--49-313000_ttl=0_read__0_671550_1645803482.4
2022-02-25 07:41:19,632 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 11145 docs from 0 to 671550 of thread_name LoadDocs_MutateDocsFromSpecTask_1645803482.38_default_Wz0wjvwVTPi5v-dDb-49-318000_YGdwMOQlhlHZoS6H-ODQaT-_Ykh%A0_SVUCUaMDoydwMYWlbXKcZo_vX8tHwEPqqa_f-oE%5Bjm170I_K489P5lBKAy9U66Wp0ka7J3huTk9MYEdvjptGRRgb0tMZMlxcLogSt74rHu7NGghdMxh6btppSJlBO1YUdTnMlbZ_UQzSQB4J3-49-319000_ttl=0_read__0_671550_1645803482.4
2022-02-25 07:41:19,637 | test | ERROR | pool-3-thread-9 | [task:execute_tasks:3922] Failed to load 11497 docs from 0 to 671550 of thread_name LoadDocs_MutateDocsFromSpecTask_1645803482.38_default_arUbz4EFiIcxlYQE-t33Y8qJSEJGYjlCbUoEc2LCpJK-czdFJcXbf-1a5ro9P9oAOMys22A1PzTvNsmTFbgvocFAv15roNTK_XsW-Z9m6Xoqr0gZN_E6IUcMxJUImCHj6ErNPZBPu0aw25TL0igagd1__oNA%BY6Xo24GnavWPqryWIv2sk-N7HrNWB0YgSFs8hdblP-49-316000_Og6Ns-TpDflMGxiNG1MRWOKc3tGHGon2qKvXLVfiZC1GxMs38g7jCnni-DElUm%GwrqkGj0K7C8fJ90g_HM4GQG1c4TzKEnmZX9Xwj9%WoPEdkxKw4KJNsmeoGEikve7WGQ-49-316000_ttl=0_read__0_671550_1645803482.4
2022-02-25 07:54:36,285 | test | INFO | pool-3-thread-8 | [table_view:display:72] Ops trend for bucket 'default'
[Ops trend ASCII sparkline table elided: Min/Trend/Max rows of dot/asterisk trend plots, sampled values ranging roughly 0.000 to 62892.400 ops]
2022-02-25 07:54:37,487 | test | INFO | pool-3-thread-1 | [table_view:display:72] Ops trend for bucket 'default'
+-----------+------------------------------------------------------------------------------------------------------+-----------+
| Min       | Trend                                                                                                | Max       |
+-----------+------------------------------------------------------------------------------------------------------+-----------+
| 11422.000 | ...................................................................................................X | 11422.000 |
+-----------+------------------------------------------------------------------------------------------------------+-----------+
2022-02-25 07:54:51,923 | test | WARNING | MainThread | [rest_client:get_nodes:1847] 172.23.100.35 - Node not part of cluster inactiveFailed
2022-02-25 07:54:51,941 | test | WARNING | MainThread | [rest_client:get_nodes:1847] 172.23.100.35 - Node not part of cluster inactiveFailed
2022-02-25 07:54:54,357 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4861] Performing scope/collection specific operations
2022-02-25 07:54:54,467 | test | INFO | MainThread | [bucket_ready_functions:perform_tasks_from_spec:4951] Done Performing scope/collection specific operations
2022-02-25 07:54:54,522 | test | INFO | pool-3-thread-3 | [table_view:display:72] Rebalance Overview
+----------------+----------+-----------------------+----------------+--------------+
| Nodes          | Services | Version               | CPU            | Status       |
+----------------+----------+-----------------------+----------------+--------------+
| 172.23.100.34  | kv       | 7.1.0-2385-enterprise | 22.1409658303  | Cluster node |
| 172.23.105.206 | kv       | 7.1.0-2385-enterprise | 16.1579212916  | Cluster node |
| 172.23.106.177 | kv       | 7.1.0-2385-enterprise | 3.84952978056  | Cluster node |
| 172.23.100.35  | kv       | 7.1.0-2385-enterprise | 0.237827012142 | Cluster node |
| 172.23.105.164 | kv       | 7.1.0-2385-enterprise | 27.3990223823  | Cluster node |
+----------------+----------+-----------------------+----------------+--------------+
2022-02-25 07:54:54,539 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.0
[32 further identical checks every ~10 s: progress stayed 0.0 through 08:00:15,464]
2022-02-25 08:00:25,493 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 0.31
[checks every ~10 s, status running; progress samples: 0.47, 1.26, 1.67, 1.99, 2.3, 2.3, 3.03, 3.63, 3.76, 3.92, 4.33, 4.49, 4.64, 5.09, 5.37, 5.37, 5.5, 5.79, 5.94, 6.1, 6.23, 6.67, 6.93, 7.65, 7.91, 8.32, 8.32, 8.92, 9.08, 9.99, 10.46, 11.06, 11.5, 11.79, 12.54, 13.02, 13.33, 14.19, 15.57, 16.17, 16.84, 17.12, 17.12, 17.69, 18.36, 19.77, 20.87, 23.05, 23.2, 23.52, 23.83, 32.78, 36.84, 41.67 through 08:09:27,039]
2022-02-25 08:09:37,069 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 42.3
2022-02-25 08:09:44,756 | test | ERROR | pool-3-thread-6 | [task:execute_tasks:3922] Failed to load 20695 docs from 0 to 738705 of thread_name LoadDocs_MutateDocsFromSpecTask_1645804494.5_default_arUbz4EFiIcxlYQE-t33Y8qJSEJGYjlCbUoEc2LCpJK-czdFJcXbf-1a5ro9P9oAOMys22A1PzTvNsmTFbgvocFAv15roNTK_XsW-Z9m6Xoqr0gZN_E6IUcMxJUImCHj6ErNPZBPu0aw25TL0igagd1__oNA%BY6Xo24GnavWPqryWIv2sk-N7HrNWB0YgSFs8hdblP-49-316000_Og6Ns-TpDflMGxiNG1MRWOKc3tGHGon2qKvXLVfiZC1GxMs38g7jCnni-DElUm%GwrqkGj0K7C8fJ90g_HM4GQG1c4TzKEnmZX9Xwj9%WoPEdkxKw4KJNsmeoGEikve7WGQ-49-316000_ttl=0_read__0_738705_1645804494.51
2022-02-25 08:09:46,509 | test | ERROR | pool-3-thread-6 | [task:execute_tasks:3922] Failed to load 20229 docs from 0 to 738705 of thread_name LoadDocs_MutateDocsFromSpecTask_1645804494.5_default__default__default_ttl=0_read__0_738705_1645804494.51
2022-02-25 08:09:46,516 | test | ERROR | pool-3-thread-6 | [task:execute_tasks:3922] Failed to load 19475 docs from 0 to 738705 of thread_name
LoadDocs_MutateDocsFromSpecTask_1645804494.5_default_mqYpZfiMgxLhCyDl64ncBnEtU2SF1sVJBFetmCeMsPzMmfWy%fWerIqY-SnLHJgLDxrK_--Bnyirq-49-320000_64p7CaHQPyPslLkFqV7fuI-ssc1Um%6MBBzMlFczwI-JPbooJHKxtLg6VqdLkFYSiDe_oeUal9FoPhhRjiC82q6EEL_T-yoLDerfXJ7b%3tRndLqWJmGuT3LkjKQIry0StbNc1CTGlt3iElXPuJ-49-320000_ttl=0_read__0_738705_1645804494.51
2022-02-25 08:09:46,522 | test | ERROR | pool-3-thread-6 | [task:execute_tasks:3922] Failed to load 20519 docs from 0 to 738705 of thread_name LoadDocs_MutateDocsFromSpecTask_1645804494.5_default_OW7buhQPReEpZaBFN0FKyCnq9Ath-%oCY-eSQj0yly_zJrTAe_BWpYPdKVDNY_VZU7gLebFqK5EK7kKMrZs1OmFCnQGU_Kd0m8JUfkaU%-wQv9rJjXdhFG9oQWlOKcR0U7N3sphp_uRc8tXc6DiB%VnClJ_sidImeJichmU5zrUq0D%YOhAZvrBl2pVrn9Qpff4F3A_t2W52QFyyeGOA9l0N2EPRgkWmHta-49-315000_4sFTemUJM07L3_e710gloMAHX3pNBU53p6ay39k0ofdO0Zy1oWiiY%Rpb5C3Hd-oyebseNos_mg%G0VsI21vpWUHbafb-8psiJ667-V5NM2wolkbyOvnSLAq_p_8E2EQcIjAG-y4p0xXaInQf8ybn9z9aN-EqyjD4C8dP%ftmtkgar-49-315000_ttl=0_read__0_738705_1645804494.52
2022-02-25 08:09:47,102 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 44.97
2022-02-25 08:09:53,747 | test | ERROR | pool-3-thread-6 | [task:execute_tasks:3922] Failed to load 19626 docs from 0 to 738705 of thread_name LoadDocs_MutateDocsFromSpecTask_1645804494.5_default_Wz0wjvwVTPi5v-dDb-49-318000_YGdwMOQlhlHZoS6H-ODQaT-_Ykh%A0_SVUCUaMDoydwMYWlbXKcZo_vX8tHwEPqqa_f-oE%5Bjm170I_K489P5lBKAy9U66Wp0ka7J3huTk9MYEdvjptGRRgb0tMZMlxcLogSt74rHu7NGghdMxh6btppSJlBO1YUdTnMlbZ_UQzSQB4J3-49-319000_ttl=0_read__0_738705_1645804494.52
2022-02-25 08:09:57,132 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 46.54
2022-02-25 08:10:07,164 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 47.32
2022-02-25 08:10:17,194 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 48.1
2022-02-25 08:10:27,229 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 48.42
2022-02-25 08:10:37,263 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 48.89
2022-02-25 08:10:47,290 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 49.05
2022-02-25 08:10:57,319 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 50.3
2022-02-25 08:11:07,345 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 52.65
2022-02-25 08:11:17,391 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 57.45
2022-02-25 08:11:27,430 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 66.38
2022-02-25 08:11:37,461 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 69.11
2022-02-25 08:11:47,500 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 75.47
2022-02-25 08:11:57,530 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: running, progress: 82.67
2022-02-25 08:12:12,571 | test | INFO | pool-3-thread-3 | [task:check:384] Rebalance - status: none, progress: 100
2022-02-25 08:12:12,585 | test | INFO | pool-3-thread-3 | [task:check:443] Rebalance completed with progress: 100% in 1038.06500006 sec
2022-02-25 08:21:35,868 | test | INFO | pool-3-thread-14 | [table_view:display:72] Ops trend for bucket 'default'
[Ops trend ASCII sparkline table elided: Min/Trend/Max rows of dot/asterisk trend plots, sampled values ranging roughly 73.600 to 61234.800 ops]
2022-02-25 08:21:37,115 | test | INFO | pool-3-thread-16 | [table_view:display:72] Ops trend for bucket 'default'
+-----------+------------------------------------------------------------------------------------------------------+-----------+
| Min       | Trend                                                                                                | Max       |
+-----------+------------------------------------------------------------------------------------------------------+-----------+
| 11566.200 | ...................................................................................................X | 11566.200 |
+-----------+------------------------------------------------------------------------------------------------------+-----------+
2022-02-25 08:21:58,851 | test | INFO | MainThread | [bucket_ready_functions:validate_docs_per_collections_all_buckets:4615] Validating collection stats and item counts
2022-02-25 08:22:06,723 | test | ERROR | MainThread | [bucket_ready_functions:validate_manifest_uid:4478] Bucket UID mismatch. Expected: 6, Actual: 7
2022-02-25 08:22:06,723 | test | WARNING | MainThread | [bucket_ready_functions:validate_docs_per_collections_all_buckets:4632] Bucket manifest UID mismatch!
2022-02-25 08:22:11,122 | test | INFO | MainThread | [table_view:display:72] Bucket statistics
+---------+-----------+-----------------+----------+------------+-----+----------+-----------+----------+------------+---------------+
| Bucket  | Type      | Storage Backend | Replicas | Durability | TTL | Items    | RAM Quota | RAM Used | Disk Used  | ARR           |
+---------+-----------+-----------------+----------+------------+-----+----------+-----------+----------+------------+---------------+
| default | couchbase | magma           | 3        | none       | 0   | 81257550 | 3.75 GiB  | 2.91 GiB | 231.52 GiB | 2.55540562077 |
+---------+-----------+-----------------+----------+------------+-----+----------+-----------+----------+------------+---------------+
2022-02-25 08:22:11,601 | test | INFO | MainThread | [collections_base:tearDown:82] Bucket: default, Active Resident ratio(DGM): 3%
2022-02-25 08:22:11,602 | test | INFO | MainThread | [collections_base:tearDown:85] Bucket: default, Replica Resident ratio(DGM): 0%
ok

----------------------------------------------------------------------
Ran 1 test in 7583.067s

OK

During the test, Remote Connections: 125, Disconnections: 125
SDK Connections: 10, Disconnections: 10
summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 0
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_1
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_2
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery,nodes_init=5,nodes_failover=2,step_count=1,recovery_type=delta,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=failover_set1'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'nodes_failover': '2', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'step_count': '1', 'recovery_type': 'delta', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_2', 'nodes_init': '5', 'GROUP': 'failover_set1', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 2, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_hard_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:23:22,434 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #2 test_data_load_collections_with_hard_failover_recovery ========= 2022-02-25 08:23:22,815 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #2 test_data_load_collections_with_hard_failover_recovery ========= 2022-02-25 08:23:52,959 | infra | ERROR | MainThread | [Rest_Connection:_http_request:283] DELETE http://172.23.105.164:8091/pools/default/buckets/default body: headers: {'Accept': '*/*', 'Connection': 'close', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Content-Type': 'application/x-www-form-urlencoded'} error: 500 reason: unknown {"_":"Bucket deletion not yet complete, but will continue.\r\n"} auth: Administrator:password 2022-02-25 08:23:53,345 | infra | ERROR | MainThread | [remote_util:log_command_output:3101] wc: /data: Is a directory 2022-02-25 08:23:53,345 | test | ERROR | MainThread | [bucket_ready_functions:delete_bucket:1612] Unable to get timings for bucket: global name 'StatsCommon' is not defined 2022-02-25 08:24:53,900 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'ae3690fd6e1a0b9ebb246fb2da942176', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=2eaa88d67df6159dfed47862cca1d327', u'status': u'notRunning'} - rebalance failed 2022-02-25 08:24:53,923 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164: 2022-02-25 08:24:53,923 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806293530L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:53.530Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = b2df597dd70b9b6896b7c99f4a0957b6'} 2022-02-25 08:24:53,924 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806293529L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:53.529Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:24:53,924 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806233526L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:53.526Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = b2df597dd70b9b6896b7c99f4a0957b6"} 2022-02-25 08:24:53,924 | test 
| ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806233519L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:23:53.519Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:24:53,924 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.100.34', u'tstamp': 1645806203134L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.134Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.100.34\' for deletion'} 2022-02-25 08:24:53,924 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.100.35', u'tstamp': 1645806203131L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.131Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.100.35\' for deletion'} 2022-02-25 08:24:53,926 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806203072L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.072Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.105.164\' for deletion'} 2022-02-25 08:24:53,926 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.105.206', u'tstamp': 1645806203059L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.059Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.105.206\' for deletion'} 2022-02-25 
08:24:53,926 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.106.177', u'tstamp': 1645806203057L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.057Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.106.177\' for deletion'}
2022-02-25 08:24:53,927 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645805525820L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:12:05.820Z', u'text': u'Rebalance completed successfully.\nRebalance Operation Id = eb815e4525d298a675390ab99eee5c85'}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason.
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'ae3690fd6e1a0b9ebb246fb2da942176', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=2eaa88d67df6159dfed47862cca1d327', u'status': u'notRunning'} - rebalance failed
ERROR
======================================================================
ERROR: test_data_load_collections_with_hard_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
During the test, Remote Connections: 9, Disconnections: 9
SDK Connections: 0, Disconnections: 0
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'ae3690fd6e1a0b9ebb246fb2da942176', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=2eaa88d67df6159dfed47862cca1d327', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 94.474s

FAILED (errors=1)
summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 1
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_2
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_3
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery,nodes_init=5,nodes_failover=2,step_count=1,recovery_type=full,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=failover_set0'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'nodes_failover': '2', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'step_count': '1', 'recovery_type': 'full', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_3', 'nodes_init': '5', 'GROUP': 'failover_set0', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 3, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_graceful_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:24:56,660 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #3 test_data_load_collections_with_graceful_failover_recovery ========= 2022-02-25 08:24:57,033 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #3 test_data_load_collections_with_graceful_failover_recovery ========= 2022-02-25 08:25:57,743 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'e0bea793cab7f63af8dddbdbad5e36b7', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=9d783bedd5905373adf9e529b8f78fa0', u'status': u'notRunning'} - rebalance failed 2022-02-25 08:25:57,756 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164: 2022-02-25 08:25:57,756 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806357383L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:25:57.383Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 3efebf2fff2257be2fb5c79375e55b3a'} 2022-02-25 08:25:57,757 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806357382L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:25:57.382Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 
08:25:57,757 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806297379L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:57.379Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 3efebf2fff2257be2fb5c79375e55b3a"} 2022-02-25 08:25:57,757 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806297372L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:24:57.372Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:25:57,759 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806293530L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:53.530Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = b2df597dd70b9b6896b7c99f4a0957b6'} 2022-02-25 08:25:57,759 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806293529L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:53.529Z', u'text': u'Failed to wait deletion of some buckets on some 
nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:25:57,759 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806233526L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:53.526Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = b2df597dd70b9b6896b7c99f4a0957b6"} 2022-02-25 08:25:57,759 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806233519L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:23:53.519Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:25:57,759 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.100.34', u'tstamp': 1645806203134L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.134Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.100.34\' for deletion'} 2022-02-25 08:25:57,759 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_memcached', u'type': u'info', u'node': u'ns_1@172.23.100.35', u'tstamp': 1645806203131L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:23:23.131Z', u'text': u'Shutting down bucket "default" on \'ns_1@172.23.100.35\' for deletion'} Traceback (most 
recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
During the test, Remote Connections: 8, Disconnections: 8
SDK Connections: 0, Disconnections: 0
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason.
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'e0bea793cab7f63af8dddbdbad5e36b7', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=9d783bedd5905373adf9e529b8f78fa0', u'status': u'notRunning'} - rebalance failed
ERROR
======================================================================
ERROR: test_data_load_collections_with_graceful_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'e0bea793cab7f63af8dddbdbad5e36b7', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=9d783bedd5905373adf9e529b8f78fa0', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.818s

FAILED (errors=1)
summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 2
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_3 Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_4 guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery,nodes_init=5,nodes_failover=2,step_count=1,recovery_type=full,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=failover_set1' Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'nodes_failover': '2', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'step_count': '1', 'recovery_type': 'full', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_4', 'nodes_init': '5', 'GROUP': 'failover_set1', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 4, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'} test_data_load_collections_with_hard_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance) ... 
2022-02-25 08:26:00,513 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #4 test_data_load_collections_with_hard_failover_recovery ========= 2022-02-25 08:26:00,884 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #4 test_data_load_collections_with_hard_failover_recovery ========= 2022-02-25 08:27:01,437 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'b11f878a8a8010d638ce5a203f026ed4', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=4ca01bb0777cef766dd5d3bcf1f903f8', u'status': u'notRunning'} - rebalance failed 2022-02-25 08:27:01,450 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164: 2022-02-25 08:27:01,451 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806421079L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:01.079Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 912df8ef00ea0a3a5c7ff4ac7353be87'} 2022-02-25 08:27:01,451 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806421078L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:01.078Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 
08:27:01,451 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806361075L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:26:01.075Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 912df8ef00ea0a3a5c7ff4ac7353be87"} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806361068L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:26:01.068Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806357383L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:25:57.383Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 3efebf2fff2257be2fb5c79375e55b3a'} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806357382L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:25:57.382Z', u'text': u'Failed to wait deletion of some buckets on some 
nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806297379L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:57.379Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 3efebf2fff2257be2fb5c79375e55b3a"} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806297372L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:24:57.372Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806293530L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:53.530Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = b2df597dd70b9b6896b7c99f4a0957b6'} 2022-02-25 08:27:01,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806293529L, 
u'shortText': u'message', u'serverTime': u'2022-02-25T08:24:53.529Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason.
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'b11f878a8a8010d638ce5a203f026ed4', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=4ca01bb0777cef766dd5d3bcf1f903f8', u'status': u'notRunning'} - rebalance failed
ERROR
During the test, Remote Connections: 8, Disconnections: 8
SDK Connections: 0, Disconnections: 0
======================================================================
ERROR: test_data_load_collections_with_hard_failover_recovery (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'b11f878a8a8010d638ce5a203f026ed4', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=4ca01bb0777cef766dd5d3bcf1f903f8', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.680s

FAILED (errors=1)
summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 3
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_4
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_5
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out,nodes_init=6,nodes_failover=1,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=failover_set2'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'nodes_failover': '1', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_5', 'nodes_init': '6', 'GROUP': 'failover_set2', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 5, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_graceful_failover_rebalance_out
(bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:27:04,164 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #5 test_data_load_collections_with_graceful_failover_rebalance_out =========
2022-02-25 08:27:04,543 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #5 test_data_load_collections_with_graceful_failover_rebalance_out =========
2022-02-25 08:28:05,095 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'0e441bbaec1b191e2961c3cf6e14d901', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=f74c05156c243dbef91e584b715f054c', u'status': u'notRunning'} - rebalance failed
2022-02-25 08:28:05,109 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164:
2022-02-25 08:28:05,109 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806484735L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:04.735Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = bd2448a7d8aad9f4dc6a2f9dffe9a81f'}
2022-02-25 08:28:05,109 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806484734L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:04.734Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806424732L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:04.732Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = bd2448a7d8aad9f4dc6a2f9dffe9a81f"}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806424724L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:27:04.724Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806421079L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:01.079Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 912df8ef00ea0a3a5c7ff4ac7353be87'}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806421078L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:01.078Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806361075L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:26:01.075Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 912df8ef00ea0a3a5c7ff4ac7353be87"}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806361068L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:26:01.068Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:28:05,111 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806357383L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:25:57.383Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 3efebf2fff2257be2fb5c79375e55b3a'}
2022-02-25 08:28:05,112 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', 
u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806357382L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:25:57.382Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
During the test, Remote Connections: 8, Disconnections: 8
SDK Connections: 0, Disconnections: 0
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'0e441bbaec1b191e2961c3cf6e14d901', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=f74c05156c243dbef91e584b715f054c', u'status': u'notRunning'} - rebalance failed
ERROR
======================================================================
ERROR: test_data_load_collections_with_graceful_failover_rebalance_out (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'0e441bbaec1b191e2961c3cf6e14d901', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=f74c05156c243dbef91e584b715f054c', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.645s

FAILED (errors=1)
summary so far
suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 4
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_5
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_6
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out,nodes_init=6,nodes_failover=1,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=failover_set2'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'nodes_failover': '1', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_6', 'nodes_init': '6', 'GROUP': 'failover_set2', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 6, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_hard_failover_rebalance_out (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:28:07,848 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #6 test_data_load_collections_with_hard_failover_rebalance_out =========
2022-02-25 08:28:08,220 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #6 test_data_load_collections_with_hard_failover_rebalance_out =========
2022-02-25 08:29:08,779 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'eca122912433cb4f0a46c8826b40c478', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=bafe7c500c1a834b7be4c8fa35e55403', u'status': u'notRunning'} - rebalance failed
2022-02-25 08:29:08,793 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164:
2022-02-25 08:29:08,795 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806548420L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:08.420Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 178904e1624b9e21d440ae88e8fae824'}
2022-02-25 08:29:08,795 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806548419L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:08.419Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:29:08,795 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806488415L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:08.415Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 178904e1624b9e21d440ae88e8fae824"}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806488408L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:28:08.408Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806484735L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:04.735Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = bd2448a7d8aad9f4dc6a2f9dffe9a81f'}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806484734L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:04.734Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806424732L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:04.732Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = bd2448a7d8aad9f4dc6a2f9dffe9a81f"}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806424724L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:27:04.724Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806421079L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:01.079Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 912df8ef00ea0a3a5c7ff4ac7353be87'}
2022-02-25 08:29:08,796 | test | ERROR | MainThread | 
[rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806421078L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:27:01.078Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'eca122912433cb4f0a46c8826b40c478', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=bafe7c500c1a834b7be4c8fa35e55403', u'status': u'notRunning'} - rebalance failed
ERROR
During the test, Remote Connections: 8, Disconnections: 8
SDK Connections: 0, Disconnections: 0
======================================================================
ERROR: test_data_load_collections_with_hard_failover_rebalance_out (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'eca122912433cb4f0a46c8826b40c478', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=bafe7c500c1a834b7be4c8fa35e55403', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.669s

FAILED (errors=1)
summary so far
suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 5
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_6
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_7
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in,nodes_init=5,nodes_in=2,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=rebalance_set0'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'nodes_in': '2', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_7', 'nodes_init': '5', 'GROUP': 'rebalance_set0', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 
'case_number': 7, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_rebalance_in (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:29:11,515 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #7 test_data_load_collections_with_rebalance_in =========
2022-02-25 08:29:11,884 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #7 test_data_load_collections_with_rebalance_in =========
2022-02-25 08:30:12,437 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'82ecc955bd2005c79d84f92cd4ce77d9', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=4f531f5bae66a761a0064c6033aedc8f', u'status': u'notRunning'} - rebalance failed
2022-02-25 08:30:12,451 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164:
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806612080L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:12.080Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 3be37420d889b3469ef46c21021f9f75'}
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806612079L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:12.079Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806552075L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:12.075Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 3be37420d889b3469ef46c21021f9f75"}
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806552068L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:29:12.068Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806548420L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:08.420Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 178904e1624b9e21d440ae88e8fae824'}
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806548419L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:08.419Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:30:12,453 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806488415L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:08.415Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 178904e1624b9e21d440ae88e8fae824"}
2022-02-25 08:30:12,454 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806488408L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:28:08.408Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:30:12,454 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806484735L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:04.735Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = bd2448a7d8aad9f4dc6a2f9dffe9a81f'}
2022-02-25 
08:30:12,454 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806484734L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:28:04.734Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'82ecc955bd2005c79d84f92cd4ce77d9', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=4f531f5bae66a761a0064c6033aedc8f', u'status': u'notRunning'} - rebalance failed
ERROR
======================================================================
ERROR: test_data_load_collections_with_rebalance_in (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'82ecc955bd2005c79d84f92cd4ce77d9', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=4f531f5bae66a761a0064c6033aedc8f', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.645s

FAILED (errors=1)
During the test, Remote Connections: 8, Disconnections: 8
SDK Connections: 0, Disconnections: 0
summary so far
suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 6
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_7
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_8
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_out,nodes_init=6,nodes_out=1,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=rebalance_set0'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_8', 'nodes_init': '6', 'GROUP': 
'rebalance_set0', 'nodes_out': '1', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 8, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'} test_data_load_collections_with_rebalance_out (bucket_collections.collections_rebalance.CollectionsRebalance) ... 2022-02-25 08:30:15,190 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #8 test_data_load_collections_with_rebalance_out ========= 2022-02-25 08:30:15,555 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #8 test_data_load_collections_with_rebalance_out ========= 2022-02-25 08:31:16,101 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'4291555ec9358d8fcde2f4935ed9facb', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=fb016ea7138900dfea27f76971851896', u'status': u'notRunning'} - rebalance failed 2022-02-25 08:31:16,117 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164: 2022-02-25 08:31:16,118 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806675745L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:15.745Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 6dc38198f1f4e3dcad9cd8a4e8a7471d'} 2022-02-25 08:31:16,118 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', 
u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806675744L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:15.744Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:31:16,118 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806615742L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:15.742Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 6dc38198f1f4e3dcad9cd8a4e8a7471d"} 2022-02-25 08:31:16,118 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806615735L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:30:15.735Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:31:16,118 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806612080L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:12.080Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 3be37420d889b3469ef46c21021f9f75'} 2022-02-25 08:31:16,118 | test | 
ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806612079L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:12.079Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:31:16,119 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806552075L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:12.075Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 3be37420d889b3469ef46c21021f9f75"} 2022-02-25 08:31:16,119 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806552068L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:29:12.068Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:31:16,119 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806548420L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:08.420Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n 
{old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 178904e1624b9e21d440ae88e8fae824'} 2022-02-25 08:31:16,119 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806548419L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:29:08.419Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} Traceback (most recent call last): File "pytests/basetestcase.py", line 327, in setUp self.cluster_util.cluster_cleanup(cluster, File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup self.cleanup_cluster(cluster, master=cluster.master) File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster ejectedNodes=[node.id for node in nodes File "lib/membase/api/rest_client.py", line 141, in remove_nodes return self.rest.monitorRebalance() File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance progress = self._rebalance_progress() File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress return self._rebalance_status_and_progress()[1] File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress raise RebalanceFailedException(msg) RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'4291555ec9358d8fcde2f4935ed9facb', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=fb016ea7138900dfea27f76971851896', u'status': u'notRunning'} - rebalance failed ERROR During the test, Remote Connections: 8, Disconnections: 8 SDK Connections: 0, Disconnections: 0 ====================================================================== ERROR: test_data_load_collections_with_rebalance_out (bucket_collections.collections_rebalance.CollectionsRebalance) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp super(CollectionsRebalance, self).setUp() File "pytests/bucket_collections/collections_base.py", line 19, in setUp super(CollectionBase, self).setUp() File "pytests/basetestcase.py", line 1049, in setUp super(ClusterSetup, self).setUp() File "pytests/basetestcase.py", line 410, in setUp self.fail(e) AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'4291555ec9358d8fcde2f4935ed9facb', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=fb016ea7138900dfea27f76971851896', u'status': u'notRunning'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 63.650s FAILED (errors=1) summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 7 failures so far... 
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_out testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_8 Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_9 guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_swap_rebalance,nodes_init=5,nodes_swap=1,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=rebalance_set1' Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': 
'/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_9', 'nodes_init': '5', 'GROUP': 'rebalance_set1', 'nodes_swap': '1', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 9, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'} test_data_load_collections_with_swap_rebalance (bucket_collections.collections_rebalance.CollectionsRebalance) ... 2022-02-25 08:31:18,881 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #9 test_data_load_collections_with_swap_rebalance ========= 2022-02-25 08:31:19,266 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #9 test_data_load_collections_with_swap_rebalance ========= 2022-02-25 08:32:19,836 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'91193764da2e9029c141b3e0a962c7f0', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=5957938b7e1466cc6923bba8cd80ce0b', u'status': u'notRunning'} - rebalance failed 2022-02-25 08:32:19,852 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164: 2022-02-25 08:32:19,852 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806739467L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:19.467Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 0fb1e4a00696b3a10c7368ac66f9a2f1'} 2022-02-25 08:32:19,852 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806739466L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:19.466Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:32:19,854 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806679464L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:19.464Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 0fb1e4a00696b3a10c7368ac66f9a2f1"} 2022-02-25 08:32:19,854 | test 
| ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806679456L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:31:19.456Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:32:19,854 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806675745L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:15.745Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 6dc38198f1f4e3dcad9cd8a4e8a7471d'} 2022-02-25 08:32:19,854 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806675744L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:15.744Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:32:19,855 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806615742L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:15.742Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected 
nodes = []; no delta recovery nodes; Operation Id = 6dc38198f1f4e3dcad9cd8a4e8a7471d"} 2022-02-25 08:32:19,855 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806615735L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:30:15.735Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:32:19,855 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806612080L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:12.080Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 3be37420d889b3469ef46c21021f9f75'} 2022-02-25 08:32:19,855 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806612079L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:30:12.079Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} Traceback (most recent call last): File "pytests/basetestcase.py", line 327, in setUp self.cluster_util.cluster_cleanup(cluster, File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup self.cleanup_cluster(cluster, master=cluster.master) File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster 
ejectedNodes=[node.id for node in nodes File "lib/membase/api/rest_client.py", line 141, in remove_nodes return self.rest.monitorRebalance() File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance progress = self._rebalance_progress() File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress return self._rebalance_status_and_progress()[1] File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress raise RebalanceFailedException(msg) RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'91193764da2e9029c141b3e0a962c7f0', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=5957938b7e1466cc6923bba8cd80ce0b', u'status': u'notRunning'} - rebalance failed During the test, Remote Connections: 8, Disconnections: 8 SDK Connections: 0, Disconnections: 0 ERROR ====================================================================== ERROR: test_data_load_collections_with_swap_rebalance (bucket_collections.collections_rebalance.CollectionsRebalance) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp super(CollectionsRebalance, self).setUp() File "pytests/bucket_collections/collections_base.py", line 19, in setUp super(CollectionBase, self).setUp() File "pytests/basetestcase.py", line 1049, in setUp super(ClusterSetup, self).setUp() File "pytests/basetestcase.py", line 410, in setUp self.fail(e) AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'91193764da2e9029c141b3e0a962c7f0', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=5957938b7e1466cc6923bba8cd80ce0b', u'status': u'notRunning'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 63.719s FAILED (errors=1) summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 8 failures so far... bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_swap_rebalance testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_9 Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_10 guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t 
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out,nodes_init=6,nodes_in=1,nodes_out=2,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=rebalance_set1' Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'nodes_in': '1', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_10', 'nodes_init': '6', 'GROUP': 'rebalance_set1', 'nodes_out': '2', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 10, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'} test_data_load_collections_with_rebalance_in_out (bucket_collections.collections_rebalance.CollectionsRebalance) ... 2022-02-25 08:32:22,555 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #10 test_data_load_collections_with_rebalance_in_out ========= 2022-02-25 08:32:22,953 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #10 test_data_load_collections_with_rebalance_in_out ========= 2022-02-25 08:33:23,506 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'f933cee23b4b4bdc6dee2267c1cbfdac', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=c4289a9286d1d6e5f68d8cd63f5e0aa6', u'status': u'notRunning'} - rebalance failed 2022-02-25 08:33:23,523 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164: 2022-02-25 08:33:23,523 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806803148L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:23.148Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 1191e8f1be3e9ce509e504084ac860aa'} 2022-02-25 08:33:23,523 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806803147L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:23.147Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:33:23,523 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806743144L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:23.144Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 1191e8f1be3e9ce509e504084ac860aa"} 2022-02-25 08:33:23,523 | test 
| ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806743137L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:32:23.137Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:33:23,523 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806739467L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:19.467Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 0fb1e4a00696b3a10c7368ac66f9a2f1'} 2022-02-25 08:33:23,523 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806739466L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:19.466Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} 2022-02-25 08:33:23,525 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806679464L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:19.464Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected 
nodes = []; no delta recovery nodes; Operation Id = 0fb1e4a00696b3a10c7368ac66f9a2f1"} 2022-02-25 08:33:23,525 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806679456L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:31:19.456Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'} 2022-02-25 08:33:23,525 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806675745L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:15.745Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 6dc38198f1f4e3dcad9cd8a4e8a7471d'} 2022-02-25 08:33:23,525 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806675744L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:31:15.744Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'} Traceback (most recent call last): File "pytests/basetestcase.py", line 327, in setUp self.cluster_util.cluster_cleanup(cluster, File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup self.cleanup_cluster(cluster, master=cluster.master) File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster 
ejectedNodes=[node.id for node in nodes File "lib/membase/api/rest_client.py", line 141, in remove_nodes return self.rest.monitorRebalance() File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance progress = self._rebalance_progress() File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress return self._rebalance_status_and_progress()[1] File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress raise RebalanceFailedException(msg) RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'f933cee23b4b4bdc6dee2267c1cbfdac', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=c4289a9286d1d6e5f68d8cd63f5e0aa6', u'status': u'notRunning'} - rebalance failed ERROR During the test, Remote Connections: 8, Disconnections: 8 SDK Connections: 0, Disconnections: 0 ====================================================================== ERROR: test_data_load_collections_with_rebalance_in_out (bucket_collections.collections_rebalance.CollectionsRebalance) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp super(CollectionsRebalance, self).setUp() File "pytests/bucket_collections/collections_base.py", line 19, in setUp super(CollectionBase, self).setUp() File "pytests/basetestcase.py", line 1049, in setUp super(ClusterSetup, self).setUp() File "pytests/basetestcase.py", line 410, in setUp self.fail(e) AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. 
You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'f933cee23b4b4bdc6dee2267c1cbfdac', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=c4289a9286d1d6e5f68d8cd63f5e0aa6', u'status': u'notRunning'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 63.656s FAILED (errors=1) summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 9 failures so far... bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_out bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_swap_rebalance bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_10 Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_11 guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t 
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out,nodes_init=5,nodes_in=2,nodes_out=1,bucket_spec=magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=rebalance_set1'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'nodes_in': '2', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_11', 'nodes_init': '5', 'GROUP': 'rebalance_set1', 'nodes_out': '1', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_3_replica_magma_768_single_bucket', 'case_number': 11, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_rebalance_in_out (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:33:26,244 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #11 test_data_load_collections_with_rebalance_in_out =========
2022-02-25 08:33:26,615 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #11 test_data_load_collections_with_rebalance_in_out =========
2022-02-25 08:34:27,306 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'21d038c95f64a6e70f94114d1113a262', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=2fb15e805d1ffa2865afef89d29df07e', u'status': u'notRunning'} - rebalance failed
2022-02-25 08:34:27,322 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164:
2022-02-25 08:34:27,322 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806866953L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:34:26.953Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = b6d87c2e8126dedbdfc66af135d5edca'}
2022-02-25 08:34:27,322 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806866952L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:34:26.952Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:34:27,325 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806806950L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:26.950Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = b6d87c2e8126dedbdfc66af135d5edca"}
2022-02-25 08:34:27,325 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806806943L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:33:26.943Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:34:27,325 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806803148L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:23.148Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 1191e8f1be3e9ce509e504084ac860aa'}
2022-02-25 08:34:27,325 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806803147L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:23.147Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:34:27,326 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806743144L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:23.144Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 1191e8f1be3e9ce509e504084ac860aa"}
2022-02-25 08:34:27,326 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806743137L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:32:23.137Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:34:27,326 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806739467L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:19.467Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 0fb1e4a00696b3a10c7368ac66f9a2f1'}
2022-02-25 08:34:27,328 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806739466L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:19.466Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'21d038c95f64a6e70f94114d1113a262', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=2fb15e805d1ffa2865afef89d29df07e', u'status': u'notRunning'} - rebalance failed
During the test, Remote Connections: 8, Disconnections: 8 SDK Connections: 0, Disconnections: 0
ERROR
======================================================================
ERROR: test_data_load_collections_with_rebalance_in_out (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'21d038c95f64a6e70f94114d1113a262', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=2fb15e805d1ffa2865afef89d29df07e', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.786s

FAILED (errors=1)
summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 10
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_swap_rebalance
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_11
Logs will be stored at /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_12
guides/gradlew --refresh-dependencies testrunner -P
jython=/opt/jython/bin/jython -P 'args=-i /tmp/win10-bucket-ops-temp_rebalance_magma.ini rerun=False,get-cbcollect-info=True -t bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in,nodes_init=5,nodes_in=2,update_replica=True,updated_num_replicas=3,bucket_spec=magma_dgm.1_percent_dgm.5_node_2_replica_magma_768_single_bucket,doc_size=768,randomize_value=True,data_load_stage=during,skip_validations=False,data_load_spec=volume_test_load_1_percent_dgm,GROUP=rebalance_set2'
Test Input params: {'doc_size': '768', 'data_load_stage': 'during', 'cluster_name': 'win10-bucket-ops-temp_rebalance_magma', 'ini': '/tmp/win10-bucket-ops-temp_rebalance_magma.ini', 'get-cbcollect-info': 'True', 'conf_file': 'conf/magma/dgm_collections_1_percent_dgm.conf', 'nodes_in': '2', 'update_replica': 'True', 'skip_validations': 'False', 'spec': 'dgm_collections_1_percent_dgm', 'rerun': 'False', 'num_nodes': 7, 'logs_folder': '/data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_12', 'nodes_init': '5', 'updated_num_replicas': '3', 'GROUP': 'rebalance_set2', 'bucket_spec': 'magma_dgm.1_percent_dgm.5_node_2_replica_magma_768_single_bucket', 'case_number': 12, 'randomize_value': 'True', 'data_load_spec': 'volume_test_load_1_percent_dgm'}
test_data_load_collections_with_rebalance_in (bucket_collections.collections_rebalance.CollectionsRebalance) ...
2022-02-25 08:34:30,069 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= CollectionsRebalance setup started for test #12 test_data_load_collections_with_rebalance_in =========
2022-02-25 08:34:30,450 | test | INFO | MainThread | [basetestcase:log_setup_status:657] ========= BaseTestCase setup started for test #12 test_data_load_collections_with_rebalance_in =========
2022-02-25 08:35:30,996 | test | ERROR | MainThread | [rest_client:_rebalance_status_and_progress:1639] {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'95d373d408e846bca04756cdb464248d', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=5fa7f7caab9a845f10ffc6f132d0e3b4', u'status': u'notRunning'} - rebalance failed
2022-02-25 08:35:31,012 | test | INFO | MainThread | [rest_client:print_UI_logs:2788] Latest logs from UI on 172.23.105.164:
2022-02-25 08:35:31,012 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806930642L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:35:30.642Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 2c4db8b198421377d915888fe5855d96'}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806879195L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:34:39.195Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\'] (repeated 1 times, last seen 8.563196 secs ago)'}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806870638L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:34:30.638Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 2c4db8b198421377d915888fe5855d96"}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806866953L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:34:26.953Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = b6d87c2e8126dedbdfc66af135d5edca'}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806866952L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:34:26.952Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806806950L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:26.950Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = b6d87c2e8126dedbdfc66af135d5edca"}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 102, u'module': u'menelaus_web', u'type': u'warning', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806806943L, u'shortText': u'client-side error report', u'serverTime': u'2022-02-25T08:33:26.943Z', u'text': u'Client-side error-report for user "Administrator" on node \'ns_1@172.23.105.164\':\nUser-Agent:Python-httplib2/$Rev: 259 $\nStarting rebalance from test, ejected nodes [u\'ns_1@172.23.100.34\', u\'ns_1@172.23.105.206\', u\'ns_1@172.23.106.177\', u\'ns_1@172.23.100.35\']'}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806803148L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:23.148Z', u'text': u'Rebalance exited with reason {buckets_shutdown_wait_failed,\n [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]}.\nRebalance Operation Id = 1191e8f1be3e9ce509e504084ac860aa'}
2022-02-25 08:35:31,013 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_rebalancer', u'type': u'critical', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806803147L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:33:23.147Z', u'text': u'Failed to wait deletion of some buckets on some nodes: [{\'ns_1@172.23.105.164\',\n {\'EXIT\',\n {old_buckets_shutdown_wait_failed,\n ["default"]}}}]\n'}
2022-02-25 08:35:31,015 | test | ERROR | MainThread | [rest_client:print_UI_logs:2790] {u'code': 0, u'module': u'ns_orchestrator', u'type': u'info', u'node': u'ns_1@172.23.105.164', u'tstamp': 1645806743144L, u'shortText': u'message', u'serverTime': u'2022-02-25T08:32:23.144Z', u'text': u"Starting rebalance, KeepNodes = ['ns_1@172.23.105.164'], EjectNodes = ['ns_1@172.23.100.34',\n 'ns_1@172.23.105.206',\n 'ns_1@172.23.106.177',\n 'ns_1@172.23.100.35'], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 1191e8f1be3e9ce509e504084ac860aa"}
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 327, in setUp
    self.cluster_util.cluster_cleanup(cluster,
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 169, in cluster_cleanup
    self.cleanup_cluster(cluster, master=cluster.master)
  File "couchbase_utils/cluster_utils/cluster_ready_functions.py", line 209, in cleanup_cluster
    ejectedNodes=[node.id for node in nodes
  File "lib/membase/api/rest_client.py", line 141, in remove_nodes
    return self.rest.monitorRebalance()
  File "lib/membase/api/rest_client.py", line 1545, in monitorRebalance
    progress = self._rebalance_progress()
  File "lib/membase/api/rest_client.py", line 1668, in _rebalance_progress
    return self._rebalance_status_and_progress()[1]
  File "lib/membase/api/rest_client.py", line 1641, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
RebalanceFailedException: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'95d373d408e846bca04756cdb464248d', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=5fa7f7caab9a845f10ffc6f132d0e3b4', u'status': u'notRunning'} - rebalance failed
During the test, Remote Connections: 8, Disconnections: 8 SDK Connections: 0, Disconnections: 0
ERROR
======================================================================
ERROR: test_data_load_collections_with_rebalance_in (bucket_collections.collections_rebalance.CollectionsRebalance)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/bucket_collections/collections_rebalance.py", line 22, in setUp
    super(CollectionsRebalance, self).setUp()
  File "pytests/bucket_collections/collections_base.py", line 19, in setUp
    super(CollectionBase, self).setUp()
  File "pytests/basetestcase.py", line 1049, in setUp
    super(ClusterSetup, self).setUp()
  File "pytests/basetestcase.py", line 410, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {u'errorMessage': u'Rebalance failed. See logs for detailed reason. You can try again.', u'type': u'rebalance', u'masterRequestTimedOut': False, u'statusId': u'95d373d408e846bca04756cdb464248d', u'subtype': u'rebalance', u'statusIsStale': False, u'lastReportURI': u'/logs/rebalanceReport?reportID=5fa7f7caab9a845f10ffc6f132d0e3b4', u'status': u'notRunning'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 63.671s

FAILED (errors=1)
summary so far suite bucket_collections.collections_rebalance.CollectionsRebalance , pass 1 , fail 11
failures so far...
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_recovery
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_graceful_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_hard_failover_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_swap_rebalance
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in_out
bucket_collections.collections_rebalance.CollectionsRebalance.test_data_load_collections_with_rebalance_in
testrunner logs, diags and results are available under /data/workspace/temp_rebalance_magma/logs/testrunner-22-Feb-25_06-16-53/test_12
Deprecated Gradle features were used in this build, making it incompatible with Gradle 7.0.
Use '--warning-mode all' to show the individual deprecation warnings.
See https://docs.gradle.org/6.2.2/userguide/command_line_interface.html#sec:command_line_warnings

BUILD SUCCESSFUL in 2h 18m 54s
2 actionable tasks: 2 executed
[temp_rebalance_magma] $ /bin/sh -xe /tmp/jenkins3713355641547475369.sh
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
[description-setter] Description set: 1.1.8 -P client_version=3.1.6'
Finished: UNSTABLE
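
Triage note: every failed case in this run has the same signature. The setUp-phase cluster_cleanup() triggers a rebalance-out while the previous test's "default" bucket is still shutting down, and ns_server aborts the rebalance with {buckets_shutdown_wait_failed, ... {old_buckets_shutdown_wait_failed, ["default"]}}; the REST status payload even says "You can try again." A minimal, hypothetical triage sketch in Python is below; the helper names and the blind retry policy are assumptions for illustration, not part of TAF or the Couchbase REST client:

```python
import re


def rebalance_failure_reasons(console_log):
    # Pull the ns_server failure atoms out of every
    # "Rebalance exited with reason {...}" record in a console blob like this one.
    return re.findall(r"Rebalance exited with reason \{(\w+)", console_log)


def report_url(master_ip, status, port=8091):
    # Build the full rebalance-report URL from the 'lastReportURI' field of a
    # REST status payload (8091 is the admin port from the job's .ini above).
    return "http://%s:%d%s" % (master_ip, port, status["lastReportURI"])


def retry_rebalance(start_rebalance, get_status, max_attempts=3):
    # Naive "you can try again" loop: re-trigger the rebalance while the status
    # payload still carries an errorMessage. start_rebalance/get_status are
    # hypothetical stand-ins for the real RestConnection calls.
    for attempt in range(1, max_attempts + 1):
        start_rebalance()
        status = get_status()
        if "errorMessage" not in status:
            return attempt  # rebalance no longer reports a failure
    raise RuntimeError("rebalance still failing after %d attempts" % max_attempts)
```

On this log, rebalance_failure_reasons() would yield 'buckets_shutdown_wait_failed' once per aborted attempt. A cleaner fix than blind retries would be waiting for the old bucket's deletion to complete before issuing the cleanup rebalance, which is what the shutdown-wait error points at.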