Started by remote host 172.23.105.174
[EnvInject] - Loading node environment variables.
Building remotely on slv-s62101 (P0 upgrade) in workspace /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test
[WS-CLEANUP] Deleting project workspace...
Running Prebuild steps
[debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test] $ /bin/sh -xe /tmp/jenkins7779008770441259000.sh
++ echo plasma-plasma-collections-sharding-simple-test-Feb-01-19:45:33-7.6.0-2090
++ awk '{split($0,r,"-");print r[1],r[2]}'
+ desc='plasma plasma'
+ echo Desc: 7.6.0-2090 - plasma plasma - debian
Desc: 7.6.0-2090 - plasma plasma - debian
+ echo newState=available
+ newState=available
Success build for hudson.tasks.Shell@32c2140e
[description-setter] Description set: 7.6.0-2090 - plasma plasma - debian
Success build for hudson.plugins.descriptionsetter.DescriptionSetterBuilder@336be394
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'propfile'
[EnvInject] - Variables injected successfully.
Success build for org.jenkinsci.plugins.envinject.EnvInjectBuilder@1c7b250d
Cloning the remote Git repository
Cloning repository https://github.com/couchbase/testrunner
 > /usr/bin/git init /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test # timeout=10
Fetching upstream changes from https://github.com/couchbase/testrunner
 > /usr/bin/git --version # timeout=10
 > /usr/bin/git fetch --tags --progress https://github.com/couchbase/testrunner +refs/heads/*:refs/remotes/origin/* # timeout=30
 > /usr/bin/git config remote.origin.url https://github.com/couchbase/testrunner # timeout=10
 > /usr/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /usr/bin/git config remote.origin.url https://github.com/couchbase/testrunner # timeout=10
Fetching upstream changes from https://github.com/couchbase/testrunner
 > /usr/bin/git fetch --tags --progress https://github.com/couchbase/testrunner +refs/heads/*:refs/remotes/origin/* # timeout=30
 > /usr/bin/git rev-parse origin/trinity^{commit} # timeout=10
Checking out Revision 88345daad7b8860acdbe00215765c3cd6457a1a1 (origin/trinity)
 > /usr/bin/git config core.sparsecheckout # timeout=10
 > /usr/bin/git checkout -f 88345daad7b8860acdbe00215765c3cd6457a1a1
 > /usr/bin/git rev-list 88345daad7b8860acdbe00215765c3cd6457a1a1 # timeout=10
 > /usr/bin/git tag -a -f -m Jenkins Build #672798 jenkins-test_suite_executor-672798 # timeout=10
No emails were triggered.
[debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test] $ /bin/sh -xe /tmp/jenkins8566620057171219321.sh
+ echo Desc: plasma-plasma-collections-sharding-simple-test-Feb-01-19:45:33-7.6.0-2090
Desc: plasma-plasma-collections-sharding-simple-test-Feb-01-19:45:33-7.6.0-2090
[description-setter] Could not determine description.
[debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test] $ /bin/bash /tmp/jenkins1030731859838898846.sh
python3 scripts/rerun_jobs.py 7.6.0-2090 --executor_jenkins_job --manual_run
This is the first run for this build.
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'rerun_props_file'
[EnvInject] - Variables injected successfully.
[debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test] $ /bin/bash /tmp/jenkins6618222305121573404.sh
Set ALLOW_HTP to False so test could run.
Submodule 'java_sdk_client' (https://github.com/couchbaselabs/java_sdk_client) registered for path 'java_sdk_client'
Submodule 'lib/capellaAPI' (https://github.com/couchbaselabs/CapellaRESTAPIs) registered for path 'lib/capellaAPI'
Submodule 'magma_loader/DocLoader' (https://github.com/couchbaselabs/DocLoader.git) registered for path 'magma_loader/DocLoader'
Cloning into 'java_sdk_client'...
Submodule path 'java_sdk_client': checked out 'de89b059ce28567dbac18afb032271a4eaa674ff'
Cloning into 'lib/capellaAPI'...
Submodule path 'lib/capellaAPI': checked out '9daae78719a7e4e5889ea9553e5014e666870f84'
Cloning into 'magma_loader/DocLoader'...
Submodule path 'magma_loader/DocLoader': checked out '0f5f758a9a89ecb5bc4ac20e5d4a15c704ec89f7'
en_US.UTF-8
the major release is 7
"172.23.123.160","172.23.123.207","172.23.123.206","172.23.123.157"
Searching for httplib2
Best match: httplib2 0.17.0
Adding httplib2 0.17.0 to easy-install.pth file
Using /usr/local/lib/python3.7/site-packages
Processing dependencies for httplib2
Finished processing dependencies for httplib2
Searching for ground
Best match: ground 8.2.0
Processing ground-8.2.0-py3.7.egg
ground 8.2.0 is already the active version in easy-install.pth
Using /usr/local/lib/python3.7/site-packages/ground-8.2.0-py3.7.egg
Processing dependencies for ground
Finished processing dependencies for ground
Searching for hypothesis_geometry
Best match: hypothesis-geometry 7.3.0
Processing hypothesis_geometry-7.3.0-py3.7.egg
hypothesis-geometry 7.3.0 is already the active version in easy-install.pth
Using /usr/local/lib/python3.7/site-packages/hypothesis_geometry-7.3.0-py3.7.egg
Processing dependencies for hypothesis_geometry
Finished processing dependencies for hypothesis_geometry
Searching for argparse
Best match: argparse 1.4.0
Processing argparse-1.4.0-py3.7.egg
argparse 1.4.0 is already the active version in easy-install.pth
Using /usr/local/lib/python3.7/site-packages/argparse-1.4.0-py3.7.egg
Processing dependencies for argparse
Finished processing dependencies for argparse
Searching for psycopg2
Reading https://pypi.org/simple/psycopg2/
Downloading https://files.pythonhosted.org/packages/c9/5e/dc6acaf46d78979d6b03458b7a1618a68e152a6776fce95daac5e0f0301b/psycopg2-2.9.9.tar.gz#sha256=d1454bde93fb1e224166811694d600e746430c006fbb031ea06ecc2ea41bf156
Best match: psycopg2 2.9.9
Processing psycopg2-2.9.9.tar.gz
Writing /tmp/easy_install-dlbqk4ri/psycopg2-2.9.9/setup.cfg
Running psycopg2-2.9.9/setup.py -q bdist_egg --dist-dir /tmp/easy_install-dlbqk4ri/psycopg2-2.9.9/egg-dist-tmp-t8b13an4
Error: pg_config executable not found.
pg_config is required to build psycopg2 from source. Please add the directory
containing pg_config to the $PATH or specify the full executable path with the
option:

    python setup.py build_ext --pg-config /path/to/pg_config build ...

or with the pg_config option in 'setup.cfg'.

If you prefer to avoid building psycopg2 from source, please install the PyPI
'psycopg2-binary' package instead.

For further information please check the 'doc/src/install.rst' file (also at ).

error: Setup script exited with 1
centos
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos,
              : subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
Loading mirror speeds from cached hostfile
 * base: ix-denver.mm.fcix.net
 * extras: mirrors.raystedman.org
 * updates: ix-denver.mm.fcix.net
Package 2:docker-1.13.1-209.git7d71120.el7.centos.x86_64 already installed and latest version
Nothing to do
Using default tag: latest
Trying to pull repository docker.io/jamesdbloom/mockserver ...
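The pg_config failure above can be caught before setuptools attempts a source build. A minimal pre-flight sketch, assuming a Debian/Ubuntu-style host (the remediation commands in the comments are the ones the psycopg2 error text itself suggests, shown here but not executed):

```shell
# Check for pg_config before letting a psycopg2 source build start.
if command -v pg_config >/dev/null 2>&1; then
    echo "pg_config found in: $(command -v pg_config)"
else
    echo "pg_config missing"
    # Either install the PostgreSQL client headers that provide it:
    #   apt-get install -y libpq-dev        # Debian/Ubuntu
    # or skip the source build entirely, as the error message recommends:
    #   pip install psycopg2-binary
fi
```

Using `psycopg2-binary` is usually the simpler fix on test hosts, since it ships a prebuilt libpq and needs no compiler toolchain.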
latest: Pulling from docker.io/jamesdbloom/mockserver
Digest: sha256:bc41010b9e920f5e1b75a226e8eda05eb4eb9843dad042e9e3950d1b7de823f4
Status: Image is up to date for docker.io/jamesdbloom/mockserver:latest

[global]
port:8091
username:root
password:couchbase
index_port:9102
n1ql_port:8903
index_path:/data

[servers]
1:vm1
2:vm2
3:vm3
4:vm4

[vm1]
ip:dynamic
services:n1ql,kv,index

[vm2]
ip:dynamic
services:kv,index,n1ql

[vm3]
ip:dynamic
services:kv,index,n1ql

[vm4]
ip:dynamic
services:kv,index,n1ql

[membase]
rest_username:Administrator
rest_password:password

python3 scripts/populateIni.py -s "172.23.123.160","172.23.123.207","172.23.123.206","172.23.123.157" -d None -a None -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec_reformat.25952.ini -p debian -o /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -k {}
INFO:root:SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
INFO:root:SSH Connected to 172.23.123.157 as root
INFO:root:SSH Connected to 172.23.123.207 as root
INFO:root:SSH Connected to 172.23.123.160 as root
INFO:root:SSH Connected to 172.23.123.206 as root
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.157: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi'
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.207: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi'
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.206: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi'
INFO:root:command executed with root but got an error ['sh: 1: [[: not found'] ...
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.160: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi'
INFO:root:command executed with root but got an error ['sh: 1: [[: not found'] ...
INFO:root:command executed with root but got an error ['sh: 1: [[: not found'] ...
INFO:root:command executed with root but got an error ['sh: 1: [[: not found'] ...
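The repeated `sh: 1: [[: not found` errors occur because the harness passes a bash-only `[[ ... ]]` test to `sh -c`, and on these Debian hosts `/bin/sh` is dash; `$OSTYPE` is likewise a bash variable that plain sh never sets. A POSIX-portable sketch of the same memory probe, branching on `uname` instead:

```shell
# Portable version of the memory check: dash has no [[ ]] and no $OSTYPE,
# so use a plain test on uname. Output is the memory figure in bytes (macOS)
# or the digits of the MemTotal line in kB (Linux).
if [ "$(uname -s)" = "Darwin" ]; then
    sysctl -n hw.memsize
else
    grep MemTotal /proc/meminfo | grep -Eo '[0-9]+'
fi
```

Note the harness still got numbers back (see "the servers memory info is [...]" below), because the `else` branch ran anyway once `[[` failed; the fix just silences the spurious errors.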
in main the ini file is /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec_reformat.25952.ini
the given server info is "172.23.123.160","172.23.123.207","172.23.123.206","172.23.123.157"
Collecting memory info from 172.23.123.157
Collecting memory info from 172.23.123.207
Collecting memory info from 172.23.123.206
sh: 1: [[: not found
Collecting memory info from 172.23.123.160
sh: 1: [[: not found
sh: 1: [[: not found
sh: 1: [[: not found
the servers memory info is [('172.23.123.207', 16355384), ('172.23.123.206', 16355384), ('172.23.123.157', 16355388), ('172.23.123.160', 16355388)]

[global]
port:8091
username:root
password:couchbase
index_port:9102
n1ql_port:8903
index_path:/data

[servers]
1:vm1
2:vm2
3:vm3
4:vm4

[vm1]
ip:172.23.123.207
services:n1ql,kv,index

[vm2]
ip:172.23.123.206
services:kv,index,n1ql

[vm3]
ip:172.23.123.157
services:kv,index,n1ql

[vm4]
ip:172.23.123.160
services:kv,index,n1ql

[membase]
rest_username:Administrator
rest_password:password

extra install is
      Local time: Thu 2024-02-01 19:47:04 PST
  Universal time: Fri 2024-02-02 03:47:04 UTC
        RTC time: Fri 2024-02-02 03:47:03
       Time zone: US/Pacific (PST, -0800)
     NTP enabled: no
NTP synchronized: yes
 RTC in local TZ: no
      DST active: no
 Last DST change: DST ended at
                  Sun 2023-11-05 01:59:59 PDT
                  Sun 2023-11-05 01:00:00 PST
 Next DST change: DST begins (the clock jumps one hour forward) at
                  Sun 2024-03-10 01:59:59 PST
                  Sun 2024-03-10 03:00:00 PDT
python3 scripts/ssh.py -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec_root.25952.ini iptables -F
INFO:root:SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
INFO:root:SSH Connected to 172.23.123.207 as root
INFO:root:SSH Connected to 172.23.123.157 as root
INFO:root:SSH Connected to 172.23.123.206 as root
INFO:root:SSH Connected to 172.23.123.160 as root
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.160: iptables -F
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.207: iptables -F
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.157: iptables -F
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
INFO:root:running command.raw on 172.23.123.206: iptables -F
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
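The populateIni.py step above rewrites the template ini, replacing each `ip:dynamic` entry with a concrete server address (ordered using the collected "servers memory info"). A hypothetical sketch of that substitution using GNU sed's `0,/re/` first-match addressing (file name, section layout, and server list are illustrative, not the script's actual implementation):

```shell
# Build a toy template with placeholder addresses.
cat > /tmp/testexec_demo.ini <<'EOF'
[vm1]
ip:dynamic
[vm2]
ip:dynamic
EOF

# Replace the first remaining "ip:dynamic" with each server, in order.
# (GNU sed: the 0,/re/ address applies the edit only to the first match.)
for s in 172.23.123.207 172.23.123.206; do
    sed -i "0,/ip:dynamic/s//ip:$s/" /tmp/testexec_demo.ini
done
cat /tmp/testexec_demo.ini
```

After the loop, `[vm1]` carries the first address and `[vm2]` the second, mirroring how the resolved testexec.25952.ini above maps vm1..vm4 to the four nodes.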
172.23.123.160 bash: line 1: iptables: command not found
172.23.123.207 bash: line 1: iptables: command not found
172.23.123.157 bash: line 1: iptables: command not found
172.23.123.206 bash: line 1: iptables: command not found
Initial version:
python3 scripts/new_install.py -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p timeout=1800,skip_local_download=False,get-cbcollect-info=True,version=7.6.0-2090,product=cb,debug_logs=True,ntp=True,url=,cb_non_package_installer_url=
2024-02-01 19:47:08,448 - root - WARNING - URL: is not valid, will use version to locate build
2024-02-01 19:47:08,448 - root - WARNING - URL: is not valid, will use default url to locate installer
2024-02-01 19:47:08,451 - root - INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:47:08,592 - root - INFO - SSH Connected to 172.23.123.207 as root
2024-02-01 19:47:08,731 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:08,999 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:09,002 - root - INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:47:09,141 - root - INFO - SSH Connected to 172.23.123.206 as root
2024-02-01 19:47:09,280 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:09,561 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:09,567 - root - INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:47:09,710 - root - INFO - SSH Connected to 172.23.123.157 as root
2024-02-01 19:47:09,847 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:10,159 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:10,169 - root - INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:47:10,343 - root - INFO - SSH Connected to 172.23.123.160 as root
2024-02-01 19:47:10,483 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:10,756 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:10,761 - root - INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:47:10,901 - root - INFO - SSH Connected to 172.23.123.207 as root
2024-02-01 19:47:11,041 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:11,353 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:11,359 - root - INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:47:11,501 - root - INFO - SSH Connected to 172.23.123.206 as root
2024-02-01 19:47:11,642 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:11,950 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:11,958 - root - INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:47:12,130 - root - INFO - SSH Connected to 172.23.123.157 as root
2024-02-01 19:47:12,390 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:12,706 - root - INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:47:12,714 - root - INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:47:12,851 - root - INFO - SSH Connected to 172.23.123.160 as root
2024-02-01 19:47:12,992 - root - INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:47:13,265 - root - INFO - extract_remote_info-->distribution_type:
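The `iptables: command not found` failures on all four nodes suggest either that iptables is not installed on these Debian 11 images (nftables is the default there) or that `/usr/sbin` is absent from the non-login shell's PATH. A defensive sketch of the flush step (echo-only, since actually flushing rules requires root on the target host):

```shell
# Look for iptables in the sbin directories before trying to flush rules.
# /usr/sbin and /sbin are often missing from a non-interactive SSH PATH.
PATH="$PATH:/usr/sbin:/sbin"
if command -v iptables >/dev/null 2>&1; then
    echo "would run: iptables -F"   # the real harness runs it directly
else
    echo "iptables not installed; skipping firewall flush"
fi
```

This keeps the run from logging four identical errors when the tool is simply absent, while still flushing rules on hosts that have it.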
Ubuntu, distribution_version: debian 11
2024-02-01 19:47:13,267 - root - INFO - Check if ntp is installed
2024-02-01 19:47:13,268 - root - INFO - This OS version debian 11
2024-02-01 19:47:13,268 - root - INFO - will add install in other os later, no set do install
2024-02-01 19:47:13,270 - root - INFO - Check if ntp is installed
2024-02-01 19:47:13,271 - root - INFO - This OS version debian 11
2024-02-01 19:47:13,271 - root - INFO - will add install in other os later, no set do install
2024-02-01 19:47:13,273 - root - INFO - Check if ntp is installed
2024-02-01 19:47:13,273 - root - INFO - This OS version debian 11
2024-02-01 19:47:13,273 - root - INFO - will add install in other os later, no set do install
2024-02-01 19:47:13,277 - root - INFO - Check if ntp is installed
2024-02-01 19:47:13,278 - root - INFO - This OS version debian 11
2024-02-01 19:47:13,278 - root - INFO - will add install in other os later, no set do install
2024-02-01 19:47:13,286 - root - INFO - ['Thu 01 Feb 2024 07:47:13 PM PST'] IP: 172.23.123.206
2024-02-01 19:47:13,294 - root - INFO - ['Thu 01 Feb 2024 07:47:13 PM PST'] IP: 172.23.123.157
2024-02-01 19:47:13,295 - root - INFO - ['Thu 01 Feb 2024 07:47:13 PM PST'] IP: 172.23.123.207
2024-02-01 19:47:13,315 - root - INFO - ['Thu 01 Feb 2024 07:47:13 PM PST'] IP: 172.23.123.160
2024-02-01 19:47:13,315 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:47:13,324 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb is live
2024-02-01 19:47:13,324 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:47:13,327 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb is live
2024-02-01 19:47:13,328 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:47:13,330 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb is live
2024-02-01 19:47:13,331 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:47:13,333 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb is live
2024-02-01 19:47:13,334 - root - INFO - running command.raw on 172.23.123.207: rm -rf /tmp/tmp* ; rm -rf /tmp/cbbackupmgr-staging;rm -rf /tmp/entbackup*;systemctl -q stop couchbase-server;umount -a -t nfs,nfs4 -f -l;service ntp restart ; apt-get purge -y 'couchbase*' > /dev/null; sleep 10;dpkg --purge $(dpkg -l | grep couchbase | awk '{print $2}' | xargs echo); sleep 10; rm /var/lib/dpkg/info/couchbase-server*; sleep 10;kill -9 `ps -ef |egrep couchbase|cut -f3 -d' '`;rm -rf /opt/couchbase > /dev/null && echo 1 || echo 0; dpkg -P couchbase-server; rm -rf /var/lib/dpkg/info/couchbase-server*;dpkg --configure -a; apt-get update; journalctl --vacuum-size=100M; journalctl --vacuum-time=10d; grep 'kernel.dmesg_restrict=0' /etc/sysctl.conf || (echo 'kernel.dmesg_restrict=0' >> /etc/sysctl.conf && service procps restart) ; rm -rf/opt/couchbase
2024-02-01 19:47:13,336 - root - INFO - running command.raw on 172.23.123.206: rm -rf /tmp/tmp* ; rm -rf /tmp/cbbackupmgr-staging;rm -rf /tmp/entbackup*;systemctl -q stop couchbase-server;umount -a -t nfs,nfs4 -f -l;service ntp restart ; apt-get purge -y 'couchbase*' > /dev/null; sleep 10;dpkg --purge $(dpkg -l | grep couchbase | awk '{print $2}' | xargs echo); sleep 10; rm /var/lib/dpkg/info/couchbase-server*; sleep 10;kill -9 `ps -ef |egrep couchbase|cut -f3 -d' '`;rm -rf /opt/couchbase > /dev/null && echo 1 || echo 0; dpkg -P couchbase-server; rm -rf /var/lib/dpkg/info/couchbase-server*;dpkg --configure -a; apt-get update; journalctl --vacuum-size=100M; journalctl --vacuum-time=10d; grep 'kernel.dmesg_restrict=0' /etc/sysctl.conf || (echo 'kernel.dmesg_restrict=0' >> /etc/sysctl.conf && service procps restart) ; rm -rf/opt/couchbase
2024-02-01 19:47:13,337 - root - INFO - running command.raw on 172.23.123.157: rm -rf /tmp/tmp* ; rm -rf /tmp/cbbackupmgr-staging;rm -rf /tmp/entbackup*;systemctl -q stop couchbase-server;umount -a -t nfs,nfs4 -f -l;service ntp restart ; apt-get purge -y 'couchbase*' > /dev/null; sleep 10;dpkg --purge $(dpkg -l | grep couchbase | awk '{print $2}' | xargs echo); sleep 10; rm /var/lib/dpkg/info/couchbase-server*; sleep 10;kill -9 `ps -ef |egrep couchbase|cut -f3 -d' '`;rm -rf /opt/couchbase > /dev/null && echo 1 || echo 0; dpkg -P couchbase-server; rm -rf /var/lib/dpkg/info/couchbase-server*;dpkg --configure -a; apt-get update; journalctl --vacuum-size=100M; journalctl --vacuum-time=10d; grep 'kernel.dmesg_restrict=0' /etc/sysctl.conf || (echo 'kernel.dmesg_restrict=0' >> /etc/sysctl.conf && service procps restart) ; rm -rf/opt/couchbase
2024-02-01 19:47:13,338 - root - INFO - running command.raw on 172.23.123.160: rm -rf /tmp/tmp* ; rm -rf /tmp/cbbackupmgr-staging;rm -rf /tmp/entbackup*;systemctl -q stop couchbase-server;umount -a -t nfs,nfs4 -f -l;service ntp restart ; apt-get purge -y 'couchbase*' > /dev/null; sleep 10;dpkg --purge $(dpkg -l | grep couchbase | awk '{print $2}' | xargs echo); sleep 10; rm /var/lib/dpkg/info/couchbase-server*; sleep 10;kill -9 `ps -ef |egrep couchbase|cut -f3 -d' '`;rm -rf /opt/couchbase > /dev/null && echo 1 || echo 0; dpkg -P couchbase-server; rm -rf /var/lib/dpkg/info/couchbase-server*;dpkg --configure -a; apt-get update; journalctl --vacuum-size=100M; journalctl --vacuum-time=10d; grep 'kernel.dmesg_restrict=0' /etc/sysctl.conf || (echo 'kernel.dmesg_restrict=0' >> /etc/sysctl.conf && service procps restart) ; rm -rf/opt/couchbase
2024-02-01 19:47:47,715 - root - INFO - command executed with root but got an error ['Failed to restart ntp.service: Unit ntp.service not found.', 'dpkg: error: --purge needs at least one package name argument', '', 'Type dpkg --help for help about installing and deinstalling packages [*];', "Use 'apt' or 'aptitude' for user-friendly package management;", 'Type dpkg -Dhelp for a list of dpkg debug flag values;', 'Type dpkg --force-help for a list of forcing options;', 'Type dpkg- ...
2024-02-01 19:47:47,716 - root - INFO - Waiting 10s for uninstall to complete on 172.23.123.206..
2024-02-01 19:47:47,734 - root - INFO - command executed with root but got an error ['Failed to restart ntp.service: Unit ntp.service not found.', 'dpkg: error: --purge needs at least one package name argument', '', 'Type dpkg --help for help about installing and deinstalling packages [*];', "Use 'apt' or 'aptitude' for user-friendly package management;", 'Type dpkg -Dhelp for a list of dpkg debug flag values;', 'Type dpkg --force-help for a list of forcing options;', 'Type dpkg- ...
2024-02-01 19:47:47,734 - root - INFO - Waiting 10s for uninstall to complete on 172.23.123.160..
2024-02-01 19:47:47,783 - root - INFO - command executed with root but got an error ['Failed to restart ntp.service: Unit ntp.service not found.', 'dpkg: error: --purge needs at least one package name argument', '', 'Type dpkg --help for help about installing and deinstalling packages [*];', "Use 'apt' or 'aptitude' for user-friendly package management;", 'Type dpkg -Dhelp for a list of dpkg debug flag values;', 'Type dpkg --force-help for a list of forcing options;', 'Type dpkg- ...
2024-02-01 19:47:47,784 - root - INFO - Waiting 10s for uninstall to complete on 172.23.123.157..
2024-02-01 19:47:47,789 - root - INFO - command executed with root but got an error ['Failed to restart ntp.service: Unit ntp.service not found.', 'dpkg: error: --purge needs at least one package name argument', '', 'Type dpkg --help for help about installing and deinstalling packages [*];', "Use 'apt' or 'aptitude' for user-friendly package management;", 'Type dpkg -Dhelp for a list of dpkg debug flag values;', 'Type dpkg --force-help for a list of forcing options;', 'Type dpkg- ...
2024-02-01 19:47:47,791 - root - INFO - Waiting 10s for uninstall to complete on 172.23.123.207..
2024-02-01 19:47:58,555 - root - INFO - Done with uninstall on 172.23.123.206.
2024-02-01 19:47:58,579 - root - INFO - Done with uninstall on 172.23.123.160.
2024-02-01 19:47:58,631 - root - INFO - Done with uninstall on 172.23.123.157.
2024-02-01 19:47:58,638 - root - INFO - Done with uninstall on 172.23.123.207.
2024-02-01 19:48:03,387 - root - INFO - Downloading build binary to /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb..
2024-02-01 19:48:03,387 - root - INFO - Executing cmd on local : cd /tmp/; wget -Nq -O couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb http://172.23.126.166/builds/latestbuilds/couchbase-server/trinity/2090/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:48:09,153 - root - INFO - running command.raw on 172.23.123.207: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:48:09,164 - root - INFO - command executed with root but got an error ['wc: couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb: No such file or directory'] ...
2024-02-01 19:48:09,167 - root - INFO - Copying /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb to 172.23.123.207
2024-02-01 19:48:09,172 - root - INFO - running command.raw on 172.23.123.206: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:48:09,180 - root - INFO - command executed with root but got an error ['wc: couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb: No such file or directory'] ...
2024-02-01 19:48:09,182 - root - INFO - Copying /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb to 172.23.123.206
2024-02-01 19:48:09,185 - root - INFO - running command.raw on 172.23.123.157: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:48:09,193 - root - INFO - command executed with root but got an error ['wc: couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb: No such file or directory'] ...
2024-02-01 19:48:09,195 - root - INFO - Copying /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb to 172.23.123.157
2024-02-01 19:48:09,199 - root - INFO - running command.raw on 172.23.123.160: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:48:09,207 - root - INFO - command executed with root but got an error ['wc: couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb: No such file or directory'] ...
2024-02-01 19:48:09,208 - root - INFO - Copying /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb to 172.23.123.160
2024-02-01 19:52:21,712 - root - INFO - Done copying build to 172.23.123.207.
2024-02-01 19:52:22,223 - root - INFO - Done copying build to 172.23.123.157.
2024-02-01 19:52:22,836 - root - INFO - Done copying build to 172.23.123.206.
2024-02-01 19:52:23,798 - root - INFO - Done copying build to 172.23.123.160.
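The installer probes each node with `wc -c` before and after the copy; the "No such file or directory" errors above are just the pre-copy probe finding no .deb yet. A minimal sketch of that byte-count check against an illustrative stand-in file (name and payload are invented for the demo):

```shell
# Stand-in for the .deb: create a file and read its size the same way the
# harness does. `wc -c < file` prints just the byte count.
f=/tmp/sample_build.deb
printf 'dummy payload' > "$f"     # 13 bytes
size=$(wc -c < "$f")
echo "size: $size"
```

Comparing the local and remote counts this way confirms the SCP completed without truncation before the install step runs.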
2024-02-01 19:52:23,799 - root - INFO - running command.raw on 172.23.123.207: ls -lh /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,809 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,814 - root - INFO - running command.raw on 172.23.123.207: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,863 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,864 - root - INFO - running command.raw on 172.23.123.206: ls -lh /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,873 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,876 - root - INFO - running command.raw on 172.23.123.206: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,923 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,924 - root - INFO - running command.raw on 172.23.123.157: ls -lh /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,934 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,937 - root - INFO - running command.raw on 172.23.123.157: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,984 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,985 - root - INFO - running command.raw on 172.23.123.160: ls -lh /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:23,995 - root - INFO - command executed successfully with root
2024-02-01 19:52:23,999 - root - INFO - running command.raw on 172.23.123.160: cd /tmp/ && wc -c couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb
2024-02-01 19:52:24,046 - root - INFO - command executed successfully with root
2024-02-01 19:52:24,048 - root - INFO - running command.raw on 172.23.123.207: rm -f /etc/couchbase.d/config_profile
2024-02-01 19:52:24,050 - root - INFO - running command.raw on 172.23.123.206: rm -f /etc/couchbase.d/config_profile
2024-02-01 19:52:24,053 - root - INFO - running command.raw on 172.23.123.157: rm -f /etc/couchbase.d/config_profile
2024-02-01 19:52:24,055 - root - INFO - running command.raw on 172.23.123.160: rm -f /etc/couchbase.d/config_profile
2024-02-01 19:52:24,064 - root - INFO - command executed successfully with root
2024-02-01 19:52:24,064 - root - INFO - running command.raw on 172.23.123.157: DEBIAN_FRONTEND='noninteractive' apt-get -y -f install /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb > /dev/null && echo 1 || echo 0
2024-02-01 19:52:24,069 - root - INFO - command executed successfully with root
2024-02-01 19:52:24,071 - root - INFO - running command.raw on 172.23.123.207: DEBIAN_FRONTEND='noninteractive' apt-get -y -f install /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb > /dev/null && echo 1 || echo 0
2024-02-01 19:52:24,073 - root - INFO - command executed successfully with root
2024-02-01 19:52:24,074 - root - INFO - running command.raw on 172.23.123.206: DEBIAN_FRONTEND='noninteractive' apt-get -y -f install /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb > /dev/null && echo 1 || echo 0
2024-02-01 19:52:24,095 - root - INFO - command executed successfully with root
2024-02-01 19:52:24,096 - root - INFO - running command.raw on 172.23.123.160: DEBIAN_FRONTEND='noninteractive' apt-get -y -f install /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb > /dev/null && echo 1 || echo 0
2024-02-01 19:53:11,727 - root - INFO - command executed successfully with root
2024-02-01 19:53:11,728 - root - INFO - running command.raw on 172.23.123.160: usermod -aG adm couchbase && systemctl -q is-active couchbase-server.service && echo 1 || echo 0
2024-02-01 19:53:11,745 - root - INFO - command executed successfully with root
2024-02-01 19:53:11,745 - root - INFO - Done with install on 172.23.123.160.
2024-02-01 19:53:11,746 - root - INFO - Waiting for couchbase to be reachable
2024-02-01 19:53:11,749 - root - ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:12,719 - root - INFO - command executed successfully with root
2024-02-01 19:53:12,720 - root - INFO - running command.raw on 172.23.123.207: usermod -aG adm couchbase && systemctl -q is-active couchbase-server.service && echo 1 || echo 0
2024-02-01 19:53:12,778 - root - INFO - command executed successfully with root
2024-02-01 19:53:12,778 - root - INFO - Done with install on 172.23.123.207.
2024-02-01 19:53:12,779 - root - INFO - Waiting for couchbase to be reachable
2024-02-01 19:53:12,783 - root - ERROR - socket error while connecting to http://172.23.123.207:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:12,997 - root - INFO - command executed with root but got an error ['E: Sub-process /usr/bin/dpkg returned an error code (1)'] ...
2024-02-01 19:53:12,997 - root - INFO - Waiting 20s for install to complete on 172.23.123.206..
2024-02-01 19:53:14,755 - root - ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:14,970 - root - INFO - command executed successfully with root
2024-02-01 19:53:14,971 - root - INFO - running command.raw on 172.23.123.157: usermod -aG adm couchbase && systemctl -q is-active couchbase-server.service && echo 1 || echo 0
2024-02-01 19:53:14,987 - root - INFO - command executed successfully with root
2024-02-01 19:53:14,988 - root - INFO - Done with install on 172.23.123.157.
2024-02-01 19:53:14,990 - root - INFO - Waiting for couchbase to be reachable
2024-02-01 19:53:14,992 - root - ERROR - socket error while connecting to http://172.23.123.157:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:15,788 - root - ERROR - socket error while connecting to http://172.23.123.207:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:17,998 - root - ERROR - socket error while connecting to http://172.23.123.157:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:20,763 - root - ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:21,797 - root - ERROR - socket error while connecting to http://172.23.123.207:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:24,005 - root - ERROR - socket error while connecting to http://172.23.123.157:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:53:32,780 - root - ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool
2024-02-01 19:53:32,788 - root - INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.123.160 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2024-02-01 19:53:33,018 - root - INFO - running command.raw on 172.23.123.206: DEBIAN_FRONTEND='noninteractive' apt-get -y -f install /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb > /dev/null && echo 1 || echo 0
2024-02-01 19:53:33,586 - root - INFO - command executed successfully with root
2024-02-01 19:53:33,591 - root - ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool
2024-02-01 19:53:33,591 - root - INFO - running command.raw on 172.23.123.160: rm -rf /data/*
2024-02-01 19:53:33,599 - root - INFO - command executed successfully with root
2024-02-01 19:53:33,600 - root - INFO - running command.raw on 172.23.123.160: chown -R couchbase:couchbase /data
2024-02-01 19:53:33,656 - root - INFO - command executed with root but got an error ["chown: cannot access '/data': No such file or directory"] ...
2024-02-01 19:53:33,657 - root - INFO - /nodes/self/controller/settings : index_path=%2Fdata
2024-02-01 19:53:33,665 - root - ERROR - POST http://172.23.123.160:8091//nodes/self/controller/settings body: index_path=%2Fdata headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: unknown b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]' auth: Administrator:password
2024-02-01 19:53:33,665 - root - ERROR - Unable to set data_path : b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]'
2024-02-01 19:53:33,720 - root - INFO - command executed with root but got an error ['E: Sub-process /usr/bin/dpkg returned an error code (1)'] ...
2024-02-01 19:53:33,721 - root - INFO - Waiting 20s for install to complete on 172.23.123.206..
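Note: the repeated `[Errno 111] Connection refused` errors above are the harness polling `/pools/default` until the REST port comes up after install. A minimal, hypothetical retry helper showing the pattern (names like `wait_until` are illustrative, not testrunner's actual code):

```python
import time

def wait_until(check, timeout=60.0, interval=2.0):
    """Call `check` until it returns truthy or the timeout expires,
    swallowing connection errors (e.g. [Errno 111]) between attempts."""
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if check():
                return True
        except OSError:  # connection refused while the service starts
            pass
        time.sleep(interval)
    return False

# Example with a probe that fails twice and then succeeds:
attempts = {"n": 0}
def probe():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise OSError(111, "Connection refused")
    return True

print(wait_until(probe, timeout=5.0, interval=0.01))  # True
```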
2024-02-01 19:53:33,814 - root - ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
2024-02-01 19:53:33,816 - root - INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.123.207 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2024-02-01 19:53:34,629 - root - INFO - command executed successfully with root
2024-02-01 19:53:34,634 - root - ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
2024-02-01 19:53:34,635 - root - INFO - running command.raw on 172.23.123.207: rm -rf /data/*
2024-02-01 19:53:34,643 - root - INFO - command executed successfully with root
2024-02-01 19:53:34,643 - root - INFO - running command.raw on 172.23.123.207: chown -R couchbase:couchbase /data
2024-02-01 19:53:34,672 - root - INFO - Setting INDEX memory quota as 256 MB on 172.23.123.160
2024-02-01 19:53:34,673 - root - INFO - pools/default params : indexMemoryQuota=256
2024-02-01 19:53:34,682 - root - INFO - Setting KV memory quota as 8304 MB on 172.23.123.160
2024-02-01 19:53:34,683 - root - INFO - pools/default params : memoryQuota=8304
2024-02-01 19:53:34,688 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv', 'index', 'n1ql'])
2024-02-01 19:53:34,688 - root - INFO - node/controller/setupServices params on 172.23.123.160: 8091:hostname=None&user=Administrator&password=password&services=kv%2Cindex%2Cn1ql
2024-02-01 19:53:34,695 - root - INFO - command executed with root but got an error ["chown: cannot access '/data': No such file or directory"] ...
2024-02-01 19:53:34,696 - root - INFO - /nodes/self/controller/settings : index_path=%2Fdata
2024-02-01 19:53:34,703 - root - ERROR - POST http://172.23.123.207:8091//nodes/self/controller/settings body: index_path=%2Fdata headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: unknown b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]' auth: Administrator:password
2024-02-01 19:53:34,704 - root - ERROR - Unable to set data_path : b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]'
2024-02-01 19:53:34,724 - root - INFO - settings/indexes params : storageMode=plasma
2024-02-01 19:53:34,732 - root - INFO - --> in init_cluster...Administrator,password,8091
2024-02-01 19:53:34,733 - root - INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password
2024-02-01 19:53:34,899 - root - INFO - --> status:True
2024-02-01 19:53:34,899 - root - INFO - Done with init on 172.23.123.160.
2024-02-01 19:53:34,900 - root - INFO - running command.raw on 172.23.123.160: ls -td /tmp/couchbase*.deb | awk 'NR>2' | xargs rm -f
2024-02-01 19:53:34,981 - root - INFO - command executed successfully with root
2024-02-01 19:53:34,982 - root - INFO - Done with cleanup on 172.23.123.160.
2024-02-01 19:53:35,709 - root - INFO - Setting INDEX memory quota as 256 MB on 172.23.123.207
2024-02-01 19:53:35,710 - root - INFO - pools/default params : indexMemoryQuota=256
2024-02-01 19:53:35,719 - root - INFO - Setting KV memory quota as 8304 MB on 172.23.123.207
2024-02-01 19:53:35,720 - root - INFO - pools/default params : memoryQuota=8304
2024-02-01 19:53:35,725 - root - INFO - --> init_node_services(Administrator,password,None,8091,['n1ql', 'kv', 'index'])
2024-02-01 19:53:35,726 - root - INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=None&user=Administrator&password=password&services=n1ql%2Ckv%2Cindex
2024-02-01 19:53:35,762 - root - INFO - settings/indexes params : storageMode=plasma
2024-02-01 19:53:35,774 - root - INFO - --> in init_cluster...Administrator,password,8091
2024-02-01 19:53:35,774 - root - INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
2024-02-01 19:53:35,919 - root - INFO - --> status:True
2024-02-01 19:53:35,920 - root - INFO - Done with init on 172.23.123.207.
2024-02-01 19:53:35,921 - root - INFO - running command.raw on 172.23.123.207: ls -td /tmp/couchbase*.deb | awk 'NR>2' | xargs rm -f
2024-02-01 19:53:36,011 - root - INFO - command executed successfully with root
2024-02-01 19:53:36,011 - root - INFO - Done with cleanup on 172.23.123.207.
2024-02-01 19:53:36,022 - root - ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
2024-02-01 19:53:36,024 - root - INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.123.157 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2024-02-01 19:53:36,844 - root - INFO - command executed successfully with root
2024-02-01 19:53:36,849 - root - ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
2024-02-01 19:53:36,850 - root - INFO - running command.raw on 172.23.123.157: rm -rf /data/*
2024-02-01 19:53:36,858 - root - INFO - command executed successfully with root
2024-02-01 19:53:36,858 - root - INFO - running command.raw on 172.23.123.157: chown -R couchbase:couchbase /data
2024-02-01 19:53:36,913 - root - INFO - command executed with root but got an error ["chown: cannot access '/data': No such file or directory"] ...
2024-02-01 19:53:36,915 - root - INFO - /nodes/self/controller/settings : index_path=%2Fdata
2024-02-01 19:53:36,922 - root - ERROR - POST http://172.23.123.157:8091//nodes/self/controller/settings body: index_path=%2Fdata headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: unknown b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]' auth: Administrator:password
2024-02-01 19:53:36,923 - root - ERROR - Unable to set data_path : b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]'
2024-02-01 19:53:37,930 - root - INFO - Setting INDEX memory quota as 256 MB on 172.23.123.157
2024-02-01 19:53:37,931 - root - INFO - pools/default params : indexMemoryQuota=256
2024-02-01 19:53:37,940 - root - INFO - Setting KV memory quota as 8304 MB on 172.23.123.157
2024-02-01 19:53:37,940 - root - INFO - pools/default params : memoryQuota=8304
2024-02-01 19:53:37,946 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv', 'index', 'n1ql'])
2024-02-01 19:53:37,946 - root - INFO - node/controller/setupServices params on 172.23.123.157: 8091:hostname=None&user=Administrator&password=password&services=kv%2Cindex%2Cn1ql
2024-02-01 19:53:37,982 - root - INFO - settings/indexes params : storageMode=plasma
2024-02-01 19:53:37,993 - root - INFO - --> in init_cluster...Administrator,password,8091
2024-02-01 19:53:37,993 - root - INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
2024-02-01 19:53:38,149 - root - INFO - --> status:True
2024-02-01 19:53:38,150 - root - INFO - Done with init on 172.23.123.157.
2024-02-01 19:53:38,150 - root - INFO - running command.raw on 172.23.123.157: ls -td /tmp/couchbase*.deb | awk 'NR>2' | xargs rm -f
2024-02-01 19:53:38,233 - root - INFO - command executed successfully with root
2024-02-01 19:53:38,233 - root - INFO - Done with cleanup on 172.23.123.157.
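Note: every node hits the same failure pattern above: `chown -R couchbase:couchbase /data` fails because `/data` does not exist, and the subsequent `index_path` POST is then rejected with "must be a directory writable by 'couchbase' user". A hedged sketch of a pre-install step that would avoid it (the function name is hypothetical; the path and ownership are taken from the log):

```shell
#!/bin/sh
# Hypothetical fix: create the data directory before chown-ing it to the
# service user -- the step the log shows being skipped on every node.
ensure_data_dir() {
    dir="$1"
    mkdir -p "$dir"
    # On a real node, once the couchbase user exists:
    # chown -R couchbase:couchbase "$dir"
}
```

Used as `ensure_data_dir /data` before the `chown`/`index_path` steps, the 400 responses above would not occur.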
2024-02-01 19:53:53,741 - root - INFO - running command.raw on 172.23.123.206: DEBIAN_FRONTEND='noninteractive' apt-get -y -f install /tmp/couchbase-server-enterprise_7.6.0-2090-linux_amd64.deb > /dev/null && echo 1 || echo 0
2024-02-01 19:53:54,456 - root - INFO - command executed with root but got an error ['E: Sub-process /usr/bin/dpkg returned an error code (1)'] ...
2024-02-01 19:53:54,457 - root - INFO - Waiting 20s for install to complete on 172.23.123.206..
2024-02-01 19:54:14,476 - root - INFO - running command.raw on 172.23.123.206: usermod -aG adm couchbase && systemctl -q is-active couchbase-server.service && echo 1 || echo 0
2024-02-01 19:54:14,495 - root - INFO - command executed successfully with root
2024-02-01 19:54:14,496 - root - INFO - running command.raw on 172.23.123.206: systemctl restart couchbase-server.service
2024-02-01 19:54:14,551 - root - INFO - command executed successfully with root
2024-02-01 19:54:14,552 - root - INFO - Waiting 10s for couchbase-service to become active on 172.23.123.206..
2024-02-01 19:54:24,562 - root - INFO - running command.raw on 172.23.123.206: usermod -aG adm couchbase && systemctl -q is-active couchbase-server.service && echo 1 || echo 0
2024-02-01 19:54:24,581 - root - INFO - command executed successfully with root
2024-02-01 19:54:24,582 - root - INFO - Done with install on 172.23.123.206.
2024-02-01 19:54:24,583 - root - INFO - Waiting for couchbase to be reachable
2024-02-01 19:54:24,585 - root - ERROR - socket error while connecting to http://172.23.123.206:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:54:27,593 - root - ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
2024-02-01 19:54:27,594 - root - INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.123.206 -u Administrator -p password > /dev/null && echo 1 || echo 0;
2024-02-01 19:54:28,378 - root - INFO - command executed successfully with root
2024-02-01 19:54:28,383 - root - ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
2024-02-01 19:54:28,384 - root - INFO - running command.raw on 172.23.123.206: rm -rf /data/*
2024-02-01 19:54:28,391 - root - INFO - command executed successfully with root
2024-02-01 19:54:28,393 - root - INFO - running command.raw on 172.23.123.206: chown -R couchbase:couchbase /data
2024-02-01 19:54:28,445 - root - INFO - command executed with root but got an error ["chown: cannot access '/data': No such file or directory"] ...
2024-02-01 19:54:28,446 - root - INFO - /nodes/self/controller/settings : index_path=%2Fdata
2024-02-01 19:54:28,461 - root - ERROR - POST http://172.23.123.206:8091//nodes/self/controller/settings body: index_path=%2Fdata headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: unknown b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]' auth: Administrator:password
2024-02-01 19:54:28,462 - root - ERROR - Unable to set data_path : b'["Could not set the storage path. It must be a directory writable by \'couchbase\' user."]'
2024-02-01 19:54:29,468 - root - INFO - Setting INDEX memory quota as 256 MB on 172.23.123.206
2024-02-01 19:54:29,469 - root - INFO - pools/default params : indexMemoryQuota=256
2024-02-01 19:54:29,479 - root - INFO - Setting KV memory quota as 8304 MB on 172.23.123.206
2024-02-01 19:54:29,479 - root - INFO - pools/default params : memoryQuota=8304
2024-02-01 19:54:29,485 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv', 'index', 'n1ql'])
2024-02-01 19:54:29,486 - root - INFO - node/controller/setupServices params on 172.23.123.206: 8091:hostname=None&user=Administrator&password=password&services=kv%2Cindex%2Cn1ql
2024-02-01 19:54:29,523 - root - INFO - settings/indexes params : storageMode=plasma
2024-02-01 19:54:29,532 - root - INFO - --> in init_cluster...Administrator,password,8091
2024-02-01 19:54:29,533 - root - INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
2024-02-01 19:54:29,690 - root - INFO - --> status:True
2024-02-01 19:54:29,690 - root - INFO - Done with init on 172.23.123.206.
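Note: the quota and service-setup entries above are plain form-encoded POSTs to the 8091 REST port with Basic auth. A minimal sketch of building (not sending) the same kind of request with Python's stdlib (host, credentials, and quota values copied from the log; the helper name is illustrative, not testrunner's code):

```python
import base64
from urllib.parse import urlencode
from urllib.request import Request

def build_quota_request(host, user, password, kv_mb, index_mb):
    """Build a POST to /pools/default that sets the KV and index quotas."""
    body = urlencode({"memoryQuota": kv_mb, "indexMemoryQuota": index_mb})
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return Request(
        f"http://{host}:8091/pools/default",
        data=body.encode(),
        headers={
            "Content-Type": "application/x-www-form-urlencoded",
            "Authorization": f"Basic {token}",
        },
        method="POST",
    )

req = build_quota_request("172.23.123.206", "Administrator", "password", 8304, 256)
print(req.get_full_url())  # http://172.23.123.206:8091/pools/default
```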
2024-02-01 19:54:29,691 - root - INFO - running command.raw on 172.23.123.206: ls -td /tmp/couchbase*.deb | awk 'NR>2' | xargs rm -f
2024-02-01 19:54:29,785 - root - INFO - command executed successfully with root
2024-02-01 19:54:29,787 - root - INFO - Done with cleanup on 172.23.123.206.
2024-02-01 19:54:34,174 - root - INFO - ----------------------------------------------------------------------------------------------------
2024-02-01 19:54:34,251 - root - INFO - cluster:C1 node:172.23.123.207:8091 version:7.6.0-2090-enterprise aFamily:inet services:['index', 'kv', 'n1ql']
2024-02-01 19:54:34,252 - root - INFO - cluster:C2 node:172.23.123.206:8091 version:7.6.0-2090-enterprise aFamily:inet services:['index', 'kv', 'n1ql']
2024-02-01 19:54:34,252 - root - INFO - cluster:C3 node:172.23.123.157:8091 version:7.6.0-2090-enterprise aFamily:inet services:['index', 'kv', 'n1ql']
2024-02-01 19:54:34,252 - root - INFO - cluster:C4 node:172.23.123.160:8091 version:7.6.0-2090-enterprise aFamily:inet services:['index', 'kv', 'n1ql']
2024-02-01 19:54:34,253 - root - INFO - ----------------------------------------------------------------------------------------------------
2024-02-01 19:54:34,253 - root - INFO - ----------------------------------------------------------------------------------------------------
2024-02-01 19:54:34,253 - root - INFO - ----------------------------------------------------------------------------------------------------
2024-02-01 19:54:34,254 - root - INFO - INSTALL COMPLETED ON: 172.23.123.207
2024-02-01 19:54:34,254 - root - INFO - INSTALL COMPLETED ON: 172.23.123.206
2024-02-01 19:54:34,254 - root - INFO - INSTALL COMPLETED ON: 172.23.123.157
2024-02-01 19:54:34,254 - root - INFO - INSTALL COMPLETED ON: 172.23.123.160
2024-02-01 19:54:34,255 - root - INFO - ----------------------------------------------------------------------------------------------------
2024-02-01 19:54:34,255 - root - INFO - TOTAL INSTALL TIME = 446 seconds
success
INFO:root:SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
INFO:root:SSH Connected to 172.23.123.160 as root
INFO:root:SSH Connected to 172.23.123.206 as root
INFO:root:SSH Connected to 172.23.123.207 as root
INFO:root:SSH Connected to 172.23.123.157 as root
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.160: iptables -F
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.206: iptables -F
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.157: iptables -F
INFO:root:extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
INFO:root:running command.raw on 172.23.123.207: iptables -F
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
INFO:root:command executed with root but got an error ['bash: line 1: iptables: command not found'] ...
172.23.123.160 bash: line 1: iptables: command not found
172.23.123.206 bash: line 1: iptables: command not found
172.23.123.157 bash: line 1: iptables: command not found
172.23.123.207 bash: line 1: iptables: command not found
Need to set ALLOW_HTP back to True to do git pull branch
Submodule path 'java_sdk_client': checked out 'de89b059ce28567dbac18afb032271a4eaa674ff'
Submodule path 'lib/capellaAPI': checked out '9daae78719a7e4e5889ea9553e5014e666870f84'
Submodule path 'magma_loader/DocLoader': checked out '0f5f758a9a89ecb5bc4ac20e5d4a15c704ec89f7'
Requirement already satisfied: pip in /usr/local/lib/python3.7/site-packages (23.3.2)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Requirement already satisfied: swig in /usr/local/lib/python3.7/site-packages (4.2.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Requirement already satisfied: boto3 in /usr/local/lib/python3.7/site-packages (1.26.73)
Requirement already satisfied: faiss-cpu in /usr/local/lib/python3.7/site-packages (1.7.4)
Requirement already satisfied: wget in /usr/local/lib/python3.7/site-packages (3.2)
Requirement already satisfied: h5py in /usr/local/lib/python3.7/site-packages (3.8.0)
Requirement already satisfied: botocore<1.30.0,>=1.29.73 in /usr/local/lib/python3.7/site-packages (from boto3) (1.29.73)
Requirement already satisfied: jmespath<2.0.0,>=0.7.1 in /usr/local/lib/python3.7/site-packages (from boto3) (0.10.0)
Requirement already satisfied: s3transfer<0.7.0,>=0.6.0 in /usr/local/lib/python3.7/site-packages (from boto3) (0.6.0)
Requirement already satisfied: numpy>=1.14.5 in /usr/local/lib/python3.7/site-packages (from h5py) (1.21.6)
Requirement already satisfied: python-dateutil<3.0.0,>=2.1 in /usr/local/lib/python3.7/site-packages (from botocore<1.30.0,>=1.29.73->boto3) (2.8.1)
Requirement already satisfied: urllib3<1.27,>=1.25.4 in /usr/local/lib/python3.7/site-packages (from botocore<1.30.0,>=1.29.73->boto3) (1.25.8)
Requirement already satisfied: six>=1.5 in /usr/local/lib/python3.7/site-packages (from python-dateutil<3.0.0,>=2.1->botocore<1.30.0,>=1.29.73->boto3) (1.14.0)
WARNING: Running pip as the 'root' user can result in broken permissions and conflicting behaviour with the system package manager. It is recommended to use a virtual environment instead: https://pip.pypa.io/warnings/venv
Loaded plugins: fastestmirror, langpacks, product-id, search-disabled-repos, : subscription-manager
This system is not registered with an entitlement server. You can use subscription-manager to register.
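Note: the repeated pip warnings above recommend installing the test dependencies into a virtual environment rather than the root interpreter. A minimal sketch (the package list is copied from the log; the install line is left commented so it can be adapted to the node's network policy):

```shell
#!/bin/sh
# Create an isolated environment instead of installing as root into the
# system site-packages, per the pip warning above.
VENV="${1:-.venv}"
python3 -m venv "$VENV"
# "$VENV/bin/pip" install boto3 faiss-cpu wget h5py swig
echo "created $VENV"
```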
Loading mirror speeds from cached hostfile
 * base: ix-denver.mm.fcix.net
 * extras: mirrors.raystedman.org
 * updates: ix-denver.mm.fcix.net
Package maven-3.0.5-17.el7.noarch already installed and latest version
Nothing to do
find: ‘/root/jenkins/workspace/’: No such file or directory
find: ‘/root/workspace/*/logs/*’: No such file or directory
find: ‘/root/workspace/’: No such file or directory
bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000
python3: no process found
python3 testrunner.py -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -c conf/gsi/py-gsi-plasma.conf -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000
INFO:__main__:Checking arguments...
INFO:__main__:Conf filename: conf/gsi/py-gsi-plasma.conf
INFO:__main__:Test prefix: gsi.collections_plasma.PlasmaCollectionsTests
INFO:__main__:TestRunner: start...
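Note: the `-p` argument above packs `key=value` pairs separated by commas (note that `get-cbcollect-info=True` appears twice, so later values win). A hypothetical sketch of parsing such a string (not testrunner's actual parser):

```python
def parse_params(spec):
    """Parse a testrunner-style '-p k=v,k2=v2' string into a dict.
    Repeated keys keep the last value; a bare token becomes a True flag."""
    params = {}
    for part in spec.split(","):
        if not part:
            continue
        key, sep, value = part.partition("=")
        params[key] = value if sep else True
    return params

spec = ("bucket_size=5000,reset_services=True,nodes_init=3,"
        "get-cbcollect-info=True,get-cbcollect-info=True")
print(parse_params(spec)["bucket_size"])  # 5000
```

One caveat: this naive comma split only works because values in the log (regexes, service strings like `kv:n1ql-kv:n1ql-index`) happen to contain no commas.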
INFO:__main__:Global Test input params:
INFO:__main__: Number of tests initially selected before GROUP filters: 36
INFO:__main__:--> Running test: gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
INFO:__main__:Logs folder: /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_1
*** TestRunner ***
{'GROUP': 'SIMPLE', 'bucket_size': '5000', 'cluster_name': 'testexec.25952', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'get-cbcollect-info': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'nodes_init': '3', 'num_nodes': 4, 'reset_services': 'True', 'services_init': 'kv:n1ql-kv:n1ql-index', 'sirius_url': 'http://172.23.120.103:4000', 'spec': 'py-gsi-plasma', 'test_timeout': '240'}
Only cases in GROUPs 'SIMPLE' will be executed
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=30,percent_update=30,percent_delete=30,moi_snapshot_interval=150000,concur_system_failure=False,GROUP=G1' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=30,percent_update=30,percent_delete=30,system_failure=disk_failure,moi_snapshot_interval=150000,GROUP=G1' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=30,percent_update=30,percent_delete=30,system_failure=disk_full,moi_snapshot_interval=150000,GROUP=G1' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=restart_couchbase,moi_snapshot_interval=150000,GROUP=G1' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=net_packet_loss,moi_snapshot_interval=150000,GROUP=G1' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=network_delay,moi_snapshot_interval=150000,GROUP=G1' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=disk_readonly,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=limit_file_limits,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=limit_file_size_limit,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=extra_files_in_log_dir,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=dummy_file_in_log_dir,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=empty_files_in_log_dir,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=stress_cpu,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes,default_bucket=false,force_clean=true,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index-index,all_collections=True,test_timeout=450,bucket_size=5000,num_items_in_collection=1000000,drop_sleep=5,percent_update=30,percent_delete=30,system_failure=stress_ram,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
test 'gsi.collections_plasma.PlasmaCollectionsTests.test_autocompaction_forestdb,default_bucket=false,force_clean=true,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql:index-kv:n1ql:index-kv:n1ql:index-kv:n1ql:index,all_collections=True,bucket_size=5000,num_items_in_collection=5000,num_scopes=10,num_collections=10,num_of_indexes=1000,test_timeout=1200,drop_sleep=2,compact_sleep_duration=300,moi_snapshot_interval=150000,GROUP=G2' skipped, is not in the requested group
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_1
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test
Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_failure', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 1, 'total_testcases': 21, 'last_case_fail': 'False', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_1'}
Run before suite setup for gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
lib/couchbase_helper/tuq_generators.py:113: DeprecationWarning: invalid escape sequence \w
regex = re.compile("[\w']+\.[\w']+")
lib/couchbase_helper/tuq_generators.py:363: DeprecationWarning: invalid escape sequence \[
diff = set(order_clause.split(',')) - set(re.compile('doc\["[\w\']+"\]').findall(select_clause))
pytests/basetestcase.py:3276: DeprecationWarning: invalid escape sequence \
copy_servers, root_cn='Root\ Authority', type="openssl",
suite_setUp (gsi.collections_plasma.PlasmaCollectionsTests) ...
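The input-params dict above is built from the flattened `key=value,key=value` strings passed to `./testrunner` via `-p` and `-t`. A minimal sketch of that kind of parsing (hypothetical helper, not the actual testrunner code; note every value stays a string, exactly as in the dict above):

```python
def parse_params(arg: str) -> dict:
    """Split a testrunner-style 'k1=v1,k2=v2' string into a dict.

    Values in these configs never contain commas, so a plain split
    is enough; tokens without '=' are ignored.
    """
    params = {}
    for pair in arg.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)
            params[key] = value
    return params

params = parse_params("default_bucket=false,defer_build=False,nodes_init=3")
```

Because later `-p` pairs overwrite earlier ones in a dict, a duplicated key such as the repeated `get-cbcollect-info=True` in the invocation above is harmless.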
-->before_suite_name:gsi.collections_plasma.PlasmaCollectionsTests.suite_setUp,suite: ]>
2024-02-01 19:54:59 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:54:59 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:54:59 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.6.0-2090-enterprise
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [basetestcase.setUp] ============== basetestcase setup was started for test #1 suite_setUp==============
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [collections_plasma.tearDown] ============== PlasmaCollectionsTests tearDown has started ==============
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of index nodes in cluster: [ip:172.23.123.207 port:8091 ssh_username:root]
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:00 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] POST http://172.23.123.207:8091/diag/eval/ body: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))). headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: status: 400, content: b'API is accessible from localhost only' b'API is accessible from localhost only' auth: Administrator:password
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.207:8091: False content: API is accessible from localhost only command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: zgrep "panic" API is accessible from localhost only/indexer.log* | wc -l
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed with root but got an error ['gzip: API.gz: No such file or directory', 'gzip: is.gz: No such file or directory', 'gzip: accessible.gz: No such file or directory', 'gzip: from.gz: No such file or directory', 'gzip: localhost.gz: No such file or directory', 'gzip: only/indexer.log*.gz: No such file or directory'] ...
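The malformed `zgrep` command above is worth noting: the `diag/eval` call failed with HTTP 400 ("API is accessible from localhost only"), and the error text was then interpolated into the shell command as if it were the indexer log directory. A defensive sketch of the validation that would prevent this (function and parameter names are illustrative, not the testrunner API; it assumes `diag/eval` returns the directory as a quoted Erlang string on success):

```python
import os
from typing import Optional

def build_zgrep_command(diag_status: bool, diag_content: str,
                        pattern: str = "panic") -> Optional[str]:
    """Only treat diag/eval output as a log directory when the call
    succeeded and the content looks like an absolute path."""
    log_dir = diag_content.strip().strip('"')
    if not diag_status or not os.path.isabs(log_dir):
        return None  # caller should fall back to a default path or skip the check
    return f'zgrep "{pattern}" {log_dir}/indexer.log* | wc -l'

# The failing case from the log: status False, error text as content
build_zgrep_command(False, "API is accessible from localhost only")  # → None
```

With this guard, the `gzip: API.gz: No such file or directory` noise above would be replaced by a single skipped check.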
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [basetestcase.get_nodes_from_services_map] list of kv nodes in cluster: [ip:172.23.123.207 port:8091 ssh_username:root]
2024-02-01 19:55:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:01 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] POST http://172.23.123.207:8091/diag/eval/ body: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))). headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: status: 400, content: b'API is accessible from localhost only' b'API is accessible from localhost only' auth: Administrator:password
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.207:8091: False content: API is accessible from localhost only command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))).
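The `print_cluster_stats` block just below reports raw byte counts per node. A derived free-memory fraction is easier to eyeball than eleven-digit byte values; a quick sketch over the exact dict the log prints for 172.23.123.207:

```python
# Values copied verbatim from the basetestcase.print_cluster_stats entry below
stats = {'services': ['index', 'kv', 'n1ql'],
         'cpu_utilization': 0.5499999970197678,
         'mem_free': 15604961280, 'mem_total': 16747913216,
         'swap_mem_used': 0, 'swap_mem_total': 1027600384}

free_frac = stats['mem_free'] / stats['mem_total']  # fraction of RAM free
print(f"{free_frac:.1%} of RAM free")  # roughly 93% free, swap untouched
```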
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: zgrep "panic" API is accessible from localhost only/projector.log* | wc -l
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed with root but got an error ['gzip: API.gz: No such file or directory', 'gzip: is.gz: No such file or directory', 'gzip: accessible.gz: No such file or directory', 'gzip: from.gz: No such file or directory', 'gzip: localhost.gz: No such file or directory', 'gzip: only/projector.log*.gz: No such file or directory'] ...
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] 172.23.123.207:8091 => {'services': ['index', 'kv', 'n1ql'], 'cpu_utilization': 0.5499999970197678, 'mem_free': 15604961280, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:55:01 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:02 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:55:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:55:02 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:02 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:02 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:55:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:55:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:55:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:55:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:09 | WARNING | MainProcess | MainThread | [basetestcase.tearDown] CLEANUP WAS SKIPPED
2024-02-01 19:55:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all ssh connections
2024-02-01 19:55:09 | INFO | MainProcess | MainThread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2024-02-01 19:55:09 | INFO | MainProcess | MainThread | [collections_plasma.tearDown] 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
2024-02-01 19:55:09 | INFO | MainProcess | MainThread | [collections_plasma.tearDown] ============== PlasmaCollectionsTests tearDown has completed ==============
2024-02-01 19:55:09 | INFO | MainProcess | MainThread | [on_prem_rest_client.set_internalSetting] Update internal setting magmaMinMemoryQuota=256
2024-02-01 19:55:09 | INFO | MainProcess | MainThread | [basetestcase.setUp] Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [basetestcase.setUp] initializing cluster
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:55:17 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:55:18 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:18 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:18 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:18 | INFO | MainProcess | MainThread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:55:18 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
2024-02-01 19:55:19 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:19 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:19 | INFO | MainProcess | MainThread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:55:19 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: systemctl start couchbase-server.service
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:20 | INFO | MainProcess | MainThread | [remote_util.sleep] 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
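The tearDown entry earlier ('PlasmaCollectionsTests' object has no attribute 'index_nodes') is the usual symptom of tearDown running before setUp ever assigned the attribute; the harness logs and swallows it. A hedged sketch of the guard pattern that avoids relying on exception handling (illustrative class, not the real PlasmaCollectionsTests):

```python
class TearDownExample:
    """Illustrative only -- shows the getattr guard, not the real test class."""

    def tearDown(self):
        # setUp may fail before assigning self.index_nodes, so don't
        # assume the attribute exists; default to an empty list.
        for node in getattr(self, "index_nodes", []):
            self.cleanup_node(node)

    def cleanup_node(self, node):
        # Stand-in for the real per-node cleanup; records what was cleaned.
        self.cleaned = getattr(self, "cleaned", []) + [node]
```

Calling `TearDownExample().tearDown()` with no `index_nodes` set is then a no-op instead of an AttributeError.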
2024-02-01 19:55:25 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:55:25 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:25 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:55:25 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.207: with pid 2778858
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [basetestcase.start_server] Couchbase started
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:55:26 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:55:27 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:27 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:27 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:27 | INFO | MainProcess | MainThread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:55:27 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
2024-02-01 19:55:28 | ERROR | MainProcess | MainThread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
2024-02-01 19:55:28 | ERROR | MainProcess | MainThread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
2024-02-01 19:55:28 | ERROR | MainProcess | MainThread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
2024-02-01 19:55:28 | ERROR | MainProcess | MainThread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
2024-02-01 19:55:28 | ERROR | MainProcess | MainThread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
2024-02-01 19:55:28 | ERROR | MainProcess | MainThread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:28 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: systemctl start couchbase-server.service
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:29 | INFO | MainProcess | MainThread | [remote_util.sleep] 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.206: with pid 3889193
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [basetestcase.start_server] Couchbase started
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:55:34 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:55:35 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:35 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:35 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:55:35 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:55:35 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:36 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:36 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:36 | INFO | MainProcess | MainThread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:55:36 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:55:37 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: systemctl start couchbase-server.service
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:38 | INFO | MainProcess | MainThread | [remote_util.sleep] 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.157: with pid 3240347
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [basetestcase.start_server] Couchbase started
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:55:43 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:44 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:44 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:55:44 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:55:44 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:55:44 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:55:44 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists
2024-02-01 19:55:45 | INFO | MainProcess | MainThread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:55:45 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:55:46 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro:
True 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.is_couchbase_installed] 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Starting couchbase server 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Running systemd command on this server 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: systemctl start couchbase-server.service 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 19:55:47 | INFO | MainProcess | MainThread | [remote_util.sleep] 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
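The start/sleep/status sequence above (systemctl start, a fixed 5-second sleep, then a status probe) is the classic poll-until-up pattern. A minimal, self-contained sketch of that pattern follows; the names (`wait_until_up`, `check`) and the injectable clock/sleep hooks are illustrative, not testrunner's actual API.

```python
import time

def wait_until_up(check, timeout=60.0, interval=5.0,
                  clock=time.monotonic, sleep=time.sleep):
    """Poll `check()` until it returns True or `timeout` seconds elapse.

    Returns True as soon as the probe succeeds, False on timeout.
    `clock` and `sleep` are injectable so the loop can be tested
    without real waiting.
    """
    deadline = clock() + timeout
    while clock() < deadline:
        if check():
            return True
        sleep(interval)
    return False
```

In the log's terms, `check` would be something like "is beam.smp running"; polling with a bounded deadline avoids both busy-waiting and the risk of a fixed sleep being too short on a slow node.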
2024-02-01 19:55:52 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:55:52 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:55:52 | INFO | MainProcess | MainThread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:55:52 | INFO | MainProcess | MainThread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:55:52 | INFO | MainProcess | MainThread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.160: with pid 3245203
2024-02-01 19:55:52 | INFO | MainProcess | MainThread | [basetestcase.start_server] Couchbase started
2024-02-01 19:55:52 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
2024-02-01 19:55:52 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
2024-02-01 19:55:53 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
2024-02-01 19:55:53 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:55:56 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
2024-02-01 19:56:02 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
2024-02-01 19:56:03 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '40', 'memoryTotal': 16747913216, 'memoryFree': 15818694656, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
2024-02-01 19:56:03 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:56:03 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok']
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2024-02-01 19:56:04 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15794556928, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
2024-02-01 19:56:04 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:56:04 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok']
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2024-02-01 19:56:05 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15800430592, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
2024-02-01 19:56:05 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:56:05 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok']
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2024-02-01 19:56:06 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15570399232, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
2024-02-01 19:56:06 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:56:06 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok']
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
2024-02-01 19:56:07 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma
2024-02-01 19:56:07 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 172.23.123.207 ****
2024-02-01 19:56:07 | ERROR | MainProcess | MainThread | [on_prem_rest_client._http_request] DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password
2024-02-01 19:56:07 | INFO | MainProcess | MainThread | [internal_user.delete_user] Exception while deleting user. Exception is -b'"User was not found."'
2024-02-01 19:56:07 | INFO | MainProcess | MainThread | [basetestcase.sleep] sleep for 5 secs. ...
2024-02-01 19:56:12 | INFO | MainProcess | MainThread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user ****
2024-02-01 19:56:12 | INFO | MainProcess | MainThread | [basetestcase.setUp] done initializing cluster
2024-02-01 19:56:12 | INFO | MainProcess | MainThread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.6.0-2090-enterprise
2024-02-01 19:56:13 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.123.206:8091 to cluster
2024-02-01 19:56:13 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.add_node] adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
2024-02-01 19:56:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] rebalance progress took 10.04 seconds
2024-02-01 19:56:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] sleep for 10 seconds after rebalance...
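The monitorRebalance entries above, and the failure that follows, come from a loop that polls the cluster's rebalance status until it settles. A minimal sketch of that polling logic, assuming a status dict shaped like the one the log prints (`{'status': ..., 'errorMessage': ...}`); the names `monitor_rebalance` and `RebalanceFailed` are illustrative, not the actual testrunner classes:

```python
class RebalanceFailed(Exception):
    """Raised when the status settles with an error message."""

def monitor_rebalance(get_status, sleep=lambda s: None, interval=2.0):
    """Poll `get_status()` -> dict until the rebalance finishes.

    A terminal status carrying an 'errorMessage' (as in the log above,
    status 'none' with 'Rebalance failed. ...') raises RebalanceFailed;
    a terminal status without one is returned as the final result.
    """
    while True:
        status = get_status()
        if status.get("status") == "running":
            sleep(interval)
            continue
        if status.get("errorMessage"):
            raise RebalanceFailed(status["errorMessage"])
        return status
```

This mirrors why the traceback later shows `_rebalance_status_and_progress` raising `RebalanceFailedException`: the server reports `status: 'none'` plus an error message once the rebalance has aborted.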
2024-02-01 19:56:37 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.123.157:8091 to cluster 2024-02-01 19:56:37 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.add_node] adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 2024-02-01 19:56:47 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] rebalance progress took 10.04 seconds 2024-02-01 19:56:47 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 2024-02-01 19:57:01 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.get_nodes] Node 172.23.123.157 not part of cluster inactiveAdded 2024-02-01 19:57:01 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.get_nodes] Node 172.23.123.206 not part of cluster inactiveAdded 2024-02-01 19:57:01 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.rebalance] rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} 2024-02-01 19:57:12 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.rebalance] rebalance operation started 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._rebalance_status_and_progress] {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed 2024-02-01 19:57:22 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] Latest logs from UI on 172.23.123.207: 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706846232073, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = ab9cbd1797dc8fa91a66c6e4fcd0bd83', 'serverTime': '2024-02-01T19:57:12.073Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706846232040, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T19:57:12.040Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706846232027, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = ab9cbd1797dc8fa91a66c6e4fcd0bd83", 'serverTime': '2024-02-01T19:57:12.027Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706846231879, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': 
'2024-02-01T19:57:11.879Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706846231875, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T19:57:11.875Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706846222101, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T19:57:02.101Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706846221876, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T19:57:01.876Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706846221862, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T19:57:01.862Z'} 2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706846221842, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. 
Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T19:57:01.842Z'}
2024-02-01 19:57:22 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706846218548, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T19:56:58.548Z'}
./lib/log_scanner.py:15: DeprecationWarning: invalid escape sequence \s
  "Basic\s[a-zA-Z]\{10,\}==",
./lib/log_scanner.py:16: DeprecationWarning: invalid escape sequence \[
  "Menelaus-Auth-User:\[",
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL

======================================================================
FAIL: suite_setUp (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

----------------------------------------------------------------------
Ran 1 test in 142.644s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ...
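The two DeprecationWarnings above come from lib/log_scanner.py using backslash escapes (`\s`, `\[`) inside ordinary string literals, which Python 3 flags as invalid escape sequences. A minimal sketch of the fix, keeping both regexes byte-for-byte identical by switching to raw strings (the sample line below is illustrative, not taken from the log):

```python
import re

# Raw-string versions of the two patterns flagged at log_scanner.py:15-16.
# The regexes themselves are unchanged; only the string-literal form
# differs, which is what silences the DeprecationWarning.
PATTERNS = [
    r"Basic\s[a-zA-Z]\{10,\}==",   # was "Basic\s[a-zA-Z]\{10,\}=="
    r"Menelaus-Auth-User:\[",      # was "Menelaus-Auth-User:\["
]

line = "Menelaus-Auth-User:[Administrator]"   # illustrative sample line
print(any(re.search(p, line) for p in PATTERNS))  # → True
```

Prefixing the literals with `r` is preferable to doubling the backslashes because the patterns stay readable and identical to the grep-style originals.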
Thu Feb 1 19:57:22 2024
Cluster instance shutdown with force -->result:
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
Thu Feb 1 19:57:22 2024
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.is_ns_server_running] -->is_ns_server_running?
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.6.0-2090-enterprise
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.setUp] ============== basetestcase setup was started for test #1 test_system_failure_create_drop_indexes_simple==============
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [collections_plasma.tearDown] ============== PlasmaCollectionsTests tearDown has started ==============
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes] Node 172.23.123.157 not part of cluster inactiveAdded
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes] Node 172.23.123.206 not part of cluster inactiveAdded
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes] Node 172.23.123.157 not part of cluster inactiveAdded
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes] Node 172.23.123.206 not part of cluster inactiveAdded
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] cannot find service node index in cluster
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 3.905781156976445, 'mem_free': 15806267392, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.5176035110164463, 'mem_free': 15785758720, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 1.074999999254942, 'mem_free': 15649509376, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:57:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:57:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:57:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:57:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:57:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:25 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:30 | WARNING | MainProcess | test_thread | [basetestcase.tearDown] CLEANUP WAS SKIPPED
2024-02-01 19:57:30 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all ssh connections
2024-02-01 19:57:30 | INFO | MainProcess | test_thread | [basetestcase.tearDown] closing all memcached connections
Cluster instance shutdown with force
2024-02-01 19:57:30 | INFO | MainProcess | test_thread | [collections_plasma.tearDown] 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
2024-02-01 19:57:30 | INFO | MainProcess | test_thread | [collections_plasma.tearDown] ============== PlasmaCollectionsTests tearDown has completed ==============
2024-02-01 19:57:31 | INFO | MainProcess | test_thread | [on_prem_rest_client.set_internalSetting] Update internal setting magmaMinMemoryQuota=256
2024-02-01 19:57:31 | INFO | MainProcess | test_thread | [basetestcase.setUp] Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
2024-02-01 19:57:39 | INFO | MainProcess | test_thread | [basetestcase.setUp] initializing cluster
2024-02-01 19:57:39 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:57:40 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:57:40 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:40 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:40 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:57:40 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:57:40 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:41 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:41 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:57:41 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:57:41 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:57:42 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: systemctl start couchbase-server.service
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:43 | INFO | MainProcess | test_thread | [remote_util.sleep] 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.207: with pid 2781149
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:57:48 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:49 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:49 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:57:49 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:57:49 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:49 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:49 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:57:50 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:57:50 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
2024-02-01 19:57:52 | ERROR | MainProcess | test_thread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
2024-02-01 19:57:52 | ERROR | MainProcess | test_thread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
2024-02-01 19:57:52 | ERROR | MainProcess | test_thread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
2024-02-01 19:57:52 | ERROR | MainProcess | test_thread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
2024-02-01 19:57:52 | ERROR | MainProcess | test_thread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
2024-02-01 19:57:52 | ERROR | MainProcess | test_thread | [remote_util.log_command_output] rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:52 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: systemctl start couchbase-server.service
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:53 | INFO | MainProcess | test_thread | [remote_util.sleep] 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
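The "rm: cannot remove ...: Directory not empty" errors above are typical when the indexer is still flushing shard files while the wipe runs, and `lost+found` can never be removed because it sits on the filesystem mount point. A hedged sketch of a more tolerant cleanup (hypothetical helper, not part of testrunner; the path in the comment mirrors the log):

```python
import shutil
import time
from pathlib import Path

def wipe_data_dir(data_dir, attempts=3, delay=2.0):
    """Retry wiping data_dir; True once only lost+found remains.

    Sketch only: e.g. wipe_data_dir('/opt/couchbase/var/lib/couchbase/data')
    after `systemctl stop couchbase-server.service` on the node.
    """
    root = Path(data_dir)
    for _ in range(attempts):
        for child in root.iterdir():
            if child.name == "lost+found":      # mount-point artifact; skip it
                continue
            if child.is_dir():
                shutil.rmtree(child, ignore_errors=True)
            else:
                try:
                    child.unlink()
                except FileNotFoundError:
                    pass                        # already gone, fine
        if all(p.name == "lost+found" for p in root.iterdir()):
            return True
        time.sleep(delay)                       # give the writer time to quiesce
    return False
```

Retrying matters because a single `rm -rf` races with any process that recreates files mid-delete; skipping `lost+found` avoids a failure that is impossible to "fix" by retrying.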
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.206: with pid 3891390
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:58 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:57:59 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:57:59 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:57:59 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:57:59 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:57:59 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:57:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: systemctl start couchbase-server.service
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:02 | INFO | MainProcess | test_thread | [remote_util.sleep] 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Couchbase server status: []
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.157: with pid 3242519
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started
2024-02-01 19:58:07 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:58:08 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:58:08 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:58:08 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:58:08 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:58:08 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:58:08 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:58:09 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:58:09 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:58:09 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server
2024-02-01 19:58:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:10 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Starting couchbase server
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: systemctl start couchbase-server.service
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root
2024-02-01 19:58:11 | INFO | MainProcess | test_thread | [remote_util.sleep] 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
2024-02-01 19:58:16 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server 2024-02-01 19:58:16 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 19:58:16 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Couchbase server status: [] 2024-02-01 19:58:16 | INFO | MainProcess | test_thread | [remote_util.is_process_running] Checking for process beam.smp on linux 2024-02-01 19:58:16 | INFO | MainProcess | test_thread | [remote_util.is_process_running] process beam.smp is running on 172.23.123.160: with pid 3247284 2024-02-01 19:58:16 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started 2024-02-01 19:58:16 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool 2024-02-01 19:58:16 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool 2024-02-01 19:58:16 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' 
auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool 2024-02-01 19:58:16 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused 2024-02-01 19:58:19 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused 2024-02-01 19:58:25 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool 2024-02-01 19:58:26 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '38', 'memoryTotal': 16747913216, 'memoryFree': 15828008960, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 
'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 2024-02-01 19:58:26 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_node_services] node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: 
debian 11, is_linux_distro: True 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
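The 404 "unknown pool" responses and the [Errno 111] refusals above are two distinct node states the harness has to tell apart: a node whose REST port is up but which has not yet been provisioned into a cluster answers GET /pools/default with 404 and the body `"unknown pool"`, while a node whose server is still starting refuses the TCP connection outright. A minimal sketch of that classification (the helper name is mine, not testrunner's):

```python
# Hypothetical helper mirroring the two failure modes in the log above:
# GET /pools/default -> 404 b'"unknown pool"'  => node up, uninitialized
# GET /pools/default -> connection refused     => ns_server not running yet

def classify_pools_default(status_code=None, body=b"", conn_refused=False):
    """Classify a node from its response to GET /pools/default."""
    if conn_refused:
        return "not-running"       # e.g. [Errno 111] Connection refused
    if status_code == 404 and b"unknown pool" in body:
        return "uninitialized"     # ready for init_cluster / add_node
    if status_code == 200:
        return "initialized"       # already part of a pool
    return "unknown"

state = classify_pools_default(404, b'"unknown pool"')
```

The harness retries in both cases; only the "uninitialized" state lets cluster setup proceed.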
2024-02-01 19:58:26 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2024-02-01 19:58:27 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '28', 'memoryTotal': 16747913216, 'memoryFree': 15783038976, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 2024-02-01 19:58:27 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | 
[on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
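The `enable_diag_eval_on_non_local_hosts` step above works by running curl on the node itself, because /diag/eval only accepts local requests until `allow_nonlocal_eval` is set; afterwards the harness can issue remote diag/eval calls such as `cluster_compat_mode:get_compat_version().`. A sketch of how that curl command line is assembled (`build_diag_eval_cmd` is a hypothetical helper; the credentials and Erlang snippet are taken verbatim from the log):

```python
# Builds the same command testrunner executes over SSH on each node:
# curl ... http://Administrator:password@localhost:8091/diag/eval \
#     -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'

def build_diag_eval_cmd(user, password, code, host="localhost", port=8091):
    return ("curl --silent --show-error "
            f"http://{user}:{password}@{host}:{port}/diag/eval "
            f"-X POST -d '{code}'")

cmd = build_diag_eval_cmd("Administrator", "password",
                          "ns_config:set(allow_nonlocal_eval, true).")
```

The node replies `ok` on success, which is the `['ok']` seen in the log.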
2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2024-02-01 19:58:27 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '23', 'memoryTotal': 16747917312, 'memoryFree': 15802736640, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 2024-02-01 19:58:27 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | 
[on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091 2024-02-01 19:58:27 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2024-02-01 19:58:28 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '13', 'memoryTotal': 16747917312, 'memoryFree': 15776169984, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 2024-02-01 19:58:28 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._http_request] GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=8560 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | 
[on_prem_rest_client.init_cluster] --> in init_cluster...Administrator,password,8091 2024-02-01 19:58:28 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.init_cluster] --> status:True 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.diag_eval] diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
2024-02-01 19:58:29 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2024-02-01 19:58:29 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** 2024-02-01 19:58:29 | ERROR | MainProcess | test_thread | [on_prem_rest_client._http_request] DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password 2024-02-01 19:58:29 | INFO | MainProcess | test_thread | [internal_user.delete_user] Exception while deleting user. Exception is -b'"User was not found."' 2024-02-01 19:58:30 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 5 secs. ... 2024-02-01 19:58:35 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user **** 2024-02-01 19:58:35 | INFO | MainProcess | test_thread | [basetestcase.setUp] done initializing cluster 2024-02-01 19:58:35 | INFO | MainProcess | test_thread | [on_prem_rest_client.get_nodes_version] Node version in cluster 7.6.0-2090-enterprise 2024-02-01 19:58:35 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.123.206:8091 to cluster 2024-02-01 19:58:35 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.add_node] adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 2024-02-01 19:58:45 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] rebalance progress took 10.04 seconds 2024-02-01 19:58:45 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 
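The rebalance request that follows is driven by the `knownNodes`/`ejectedNodes` form fields shown in the log: each node is addressed by its otpNode name (`ns_1@<ip>`), comma-joined, and POSTed to /controller/rebalance. A sketch of that parameter construction (the helper name is hypothetical; the field names and credentials match the logged request):

```python
# Assemble the rebalance params exactly as they appear in the log:
# {'knownNodes': 'ns_1@...,ns_1@...', 'ejectedNodes': '', ...}

def rebalance_params(known_ips, ejected_ips=(), user="Administrator",
                     password="password"):
    return {
        "knownNodes": ",".join(f"ns_1@{ip}" for ip in known_ips),
        "ejectedNodes": ",".join(f"ns_1@{ip}" for ip in ejected_ips),
        "user": user,
        "password": password,
    }

params = rebalance_params(
    ["172.23.123.157", "172.23.123.206", "172.23.123.207"])
```

An empty `ejectedNodes` string, as here, means a pure add-node rebalance with no evictions.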
2024-02-01 19:58:59 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.123.157:8091 to cluster
2024-02-01 19:58:59 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.add_node] adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
2024-02-01 19:59:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] rebalance progress took 10.04 seconds
2024-02-01 19:59:09 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.monitorRebalance] sleep for 10 seconds after rebalance...
2024-02-01 19:59:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.get_nodes] Node 172.23.123.157 not part of cluster inactiveAdded
2024-02-01 19:59:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.get_nodes] Node 172.23.123.206 not part of cluster inactiveAdded
2024-02-01 19:59:23 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.rebalance] rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
2024-02-01 19:59:33 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.rebalance] rebalance operation started
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client._rebalance_status_and_progress] {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
2024-02-01 19:59:43 | INFO | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] Latest logs from UI on 172.23.123.207:
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706846373819, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 059c690a5efe9ce1929858e303f61b32', 'serverTime': '2024-02-01T19:59:33.819Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706846373789, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T19:59:33.789Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706846373773, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 059c690a5efe9ce1929858e303f61b32", 'serverTime': '2024-02-01T19:59:33.773Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706846373651, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T19:59:33.651Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706846373647, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T19:59:33.647Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706846363832, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T19:59:23.832Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706846363646, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T19:59:23.646Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706846363634, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T19:59:23.634Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706846363609, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T19:59:23.609Z'}
2024-02-01 19:59:43 | ERROR | MainProcess | Cluster_Thread | [on_prem_rest_client.print_UI_logs] {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706846360739, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T19:59:20.739Z'}
Thu Feb 1 19:59:43 2024
Cluster instance shutdown with force
2024-02-01 19:59:43 | INFO | MainProcess | Thread-138 | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
2024-02-01 19:59:43 | INFO | MainProcess | Thread-139 | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
2024-02-01 19:59:43 | INFO | MainProcess | Thread-137 | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
Thu Feb 1 19:59:43 2024
2024-02-01 19:59:43 | INFO | MainProcess | Thread-140 | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
2024-02-01 19:59:44 | INFO | MainProcess | Thread-140 | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.160 as root
2024-02-01 19:59:44 | INFO | MainProcess | Thread-137 | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.207 as root
2024-02-01 19:59:44 | INFO | MainProcess | Thread-138 | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.206 as root
2024-02-01 19:59:44 | INFO | MainProcess | Thread-139 | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.123.157 as root
2024-02-01 19:59:44 | INFO | MainProcess | Thread-140 | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
2024-02-01 19:59:44 | INFO | MainProcess | Thread-137 |
[remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 2024-02-01 19:59:44 | INFO | MainProcess | Thread-139 | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 2024-02-01 19:59:44 | INFO | MainProcess | Thread-138 | [remote_util.extract_remote_info] os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 2024-02-01 19:59:44 | INFO | MainProcess | Thread-137 | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 2024-02-01 19:59:44 | INFO | MainProcess | Thread-137 | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-1959-diag.zip 2024-02-01 19:59:44 | INFO | MainProcess | Thread-139 | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 2024-02-01 19:59:44 | INFO | MainProcess | Thread-139 | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-1959-diag.zip 2024-02-01 19:59:44 | INFO | MainProcess | Thread-140 | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 2024-02-01 19:59:44 | INFO | MainProcess | Thread-140 | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-1959-diag.zip 2024-02-01 19:59:44 | INFO | MainProcess | Thread-138 | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 2024-02-01 19:59:44 | INFO | MainProcess | Thread-138 | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 
172.23.123.206-20240201-1959-diag.zip 2024-02-01 20:01:34 | INFO | MainProcess | Thread-139 | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 20:01:34 | INFO | MainProcess | Thread-139 | [remote_util.get_file] found the file /root/172.23.123.157-20240201-1959-diag.zip Downloading zipped logs from 172.23.123.157 2024-02-01 20:01:34 | INFO | MainProcess | Thread-139 | [remote_util.execute_command_raw] running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-1959-diag.zip 2024-02-01 20:01:34 | INFO | MainProcess | Thread-139 | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 20:01:35 | INFO | MainProcess | Thread-138 | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 20:01:35 | INFO | MainProcess | Thread-138 | [remote_util.get_file] found the file /root/172.23.123.206-20240201-1959-diag.zip Downloading zipped logs from 172.23.123.206 2024-02-01 20:01:35 | INFO | MainProcess | Thread-138 | [remote_util.execute_command_raw] running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-1959-diag.zip 2024-02-01 20:01:35 | INFO | MainProcess | Thread-138 | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 20:02:04 | INFO | MainProcess | Thread-140 | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 20:02:04 | INFO | MainProcess | Thread-140 | [remote_util.get_file] found the file /root/172.23.123.160-20240201-1959-diag.zip Downloading zipped logs from 172.23.123.160 2024-02-01 20:02:05 | INFO | MainProcess | Thread-140 | [remote_util.execute_command_raw] running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-1959-diag.zip 2024-02-01 20:02:05 | INFO | MainProcess | Thread-140 | [remote_util.execute_command_raw] command executed successfully with root 2024-02-01 20:02:34 | INFO | MainProcess | Thread-137 | [remote_util.execute_command_raw] command 
executed successfully with root
2024-02-01 20:02:35 | INFO | MainProcess | Thread-137 | [remote_util.get_file] found the file /root/172.23.123.207-20240201-1959-diag.zip
Downloading zipped logs from 172.23.123.207
2024-02-01 20:02:35 | INFO | MainProcess | Thread-137 | [remote_util.execute_command_raw] running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-1959-diag.zip
2024-02-01 20:02:35 | INFO | MainProcess | Thread-137 | [remote_util.execute_command_raw] command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 1
failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_1
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 141.748s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ...
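The RebalanceFailedException above comes from a status-polling check in on_prem_rest_client: after the rebalance is started, the harness polls its status, and a payload of `{'status': 'none', 'errorMessage': ...}` means the rebalance died server-side (the actual root cause, `old_indexes_cleanup_failed` with `{error,eexist}`, only appears in the server's UI logs). A simplified sketch of that decision, under the assumption that the real check behaves equivalently; the exception class is redefined locally here, standing in for membase.api.exception.RebalanceFailedException:

```python
# Simplified stand-in for testrunner's _rebalance_status_and_progress check.
class RebalanceFailedException(Exception):
    pass

def check_rebalance(status_payload):
    status = status_payload.get("status")
    if status == "running":
        return "in-progress"   # keep polling
    if status == "none" and "errorMessage" in status_payload:
        # Matches the log: the payload only says "See logs for detailed
        # reason"; the concrete error lives in the server-side UI logs.
        raise RebalanceFailedException(
            f"Rebalance Failed: {status_payload} - rebalance failed")
    return "done"
```

In this run the exception propagated out of setUp, so the test failed before any index workload ran.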
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_2
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_drop_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_failure', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_drop_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 2, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_2'}
[2024-02-01 20:02:35,313] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:02:35,415] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:02:35,557] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:35,871] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:35,893] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 20:02:35,943] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:02:35,943] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #2 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 20:02:35,944] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:02:35,974] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:02:35,974] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:02:36,004] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:02:36,005] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:02:36,005] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:02:36,034] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:02:36,035] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3749999962747097, 'mem_free': 15785951232, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:02:36,035] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3374999947845936, 'mem_free': 15743242240, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:02:36,035] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.150000009685755, 'mem_free': 15564152832, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:02:36,036] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:02:36,039] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:02:36,212] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:02:36,354] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:36,623] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:36,628] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:02:36,728] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:02:36,868] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:37,178] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:37,183] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:02:37,323] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:02:37,467] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:37,778] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:37,787] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:02:37,922] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:02:38,065] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:38,387] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:44,252] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:02:44,253] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:02:44,253] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 20:02:44,287] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:02:44,287] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:02:44,318] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:02:44,319] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:02:53,929] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:02:53,934] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:02:54,075] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:02:54,219] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:54,541] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:54,584] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:02:54,723] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:02:54,867] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:55,181] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:55,247] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:02:55,383] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:02:55,383] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:02:56,584] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:02:56,585] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:02:56,603] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:02:56,604] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:02:56,611] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:02:56,611] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:02:56,660] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:02:56,664] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:02:56,763] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:02:56,905] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:02:57,222] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:02:57,284] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:02:57,285] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:02:57,344] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:02:57,518] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:02:57,519] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:02:57,530] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:02:57,531] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:03:02,535] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:03:02,549] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:02,550] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:03:02,550] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:02,609] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2786638
[2024-02-01 20:03:02,610] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:03:02,614] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:03:02,788] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:03:02,987] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:03,299] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:03,338] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
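The "Test Input params" dict printed above is the flat, string-valued fold of the comma-separated key=value options from the testrunner command line, plus bookkeeping keys such as case_number and logs_folder. A minimal sketch of how a string of that shape could be folded into such a dict; parse_params is a hypothetical helper, not testrunner's actual parser:

```python
def parse_params(spec: str) -> dict:
    """Fold 'k1=v1,k2=v2,...' into a flat string-valued dict.

    Duplicate keys keep the last value, which is one way the repeated
    get-cbcollect-info=True on the command line could collapse to a
    single entry in the printed params."""
    params = {}
    for pair in spec.split(","):
        key, sep, value = pair.partition("=")
        if sep:  # skip malformed fragments without '='
            params[key.strip()] = value
    return params

p = parse_params("bucket_size=5000,reset_services=True,nodes_init=3,"
                 "get-cbcollect-info=True,get-cbcollect-info=True")
print(p["nodes_init"])  # → 3
```

Note that this naive split would break on values that themselves contain commas; the pipe-delimited exclude_keywords value above survives precisely because it avoids commas.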
[2024-02-01 20:03:03,475] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:03:03,615] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:03,925] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:03,988] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:03:04,165] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:03:04,166] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:03:06,457] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:06,458] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:06,474] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:03:06,476] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:03:06,528] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
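The "Directory not empty" failures that follow are typical when something is still writing into the tree (here the indexer's @2i shard directories) while rm -rf walks it, or when an entry like lost+found belongs to a mounted filesystem. One hedged mitigation is to retry the delete after a short pause; remove_tree_with_retries below is an illustrative sketch, not what remote_util actually does:

```python
import os
import shutil
import tempfile
import time

def remove_tree_with_retries(path, attempts=3, delay=0.5):
    """rm -rf equivalent with retries: a recursive delete can race with a
    process that recreates files mid-walk, surfacing as ENOTEMPTY."""
    for attempt in range(1, attempts + 1):
        try:
            shutil.rmtree(path)
            return
        except FileNotFoundError:
            return  # already gone
        except OSError:
            if attempt == attempts:
                raise
            time.sleep(delay)

# Usage: build a small tree shaped like the index data dir, then remove it.
root = tempfile.mkdtemp()
os.makedirs(os.path.join(root, "@2i", "shards", "shard1"))
remove_tree_with_retries(root)
print(os.path.exists(root))  # → False
```

Retrying only helps once the writer has actually stopped; here the server was stopped first, so the likelier culprit is the indexer having been mid-flush when systemctl stop returned.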
[2024-02-01 20:03:06,529] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:03:06,530] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:03:06,530] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:03:06,530] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:03:06,531] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:03:06,531] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:03:06,531] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:03:06,579] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:06,583] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:03:06,716] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:03:06,854] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:07,126] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:07,188] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:03:07,190] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:07,246] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:03:07,420] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:03:07,421] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:03:07,435] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:07,437] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:03:12,442] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:03:12,462] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:12,463] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:03:12,463] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:12,520] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3896792
[2024-02-01 20:03:12,521] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:03:12,525] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:03:12,697] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:03:12,897] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:13,215] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:13,254] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:03:13,392] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:03:13,531] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:13,806] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:13,869] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:03:14,047] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:03:14,048] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:03:16,257] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:16,258] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:16,275] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:03:16,277] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:03:16,285] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:16,286] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:03:16,342] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:16,346] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:03:16,493] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:03:16,639] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:16,915] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:16,977] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:03:16,978] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:17,035] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:03:17,211] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:03:17,212] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 20:03:17,224] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:17,225] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:03:22,229] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:03:22,244] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:22,245] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:03:22,245] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:22,302] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3247827
[2024-02-01 20:03:22,304] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:03:22,308] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:03:22,481] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:03:22,685] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:23,007] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:23,047] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:03:23,185] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:03:23,327] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:23,643] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:23,705] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:03:23,838] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:03:23,839] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:03:25,242] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:25,243] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:25,260] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:03:25,261] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:03:25,269] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:25,270] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:03:25,320] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:25,324] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:03:25,464] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:03:25,607] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:25,919] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:25,985] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:03:25,986] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:26,044] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:03:26,179] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:03:26,179] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 20:03:26,191] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:26,193] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:03:31,198] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:03:31,213] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:31,213] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:03:31,214] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:03:31,272] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3252466
[2024-02-01 20:03:31,273] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:03:31,279] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:03:31,292] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:03:31,303] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:03:31,311] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:03:34,316] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:03:40,324] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 20:03:40,355] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:03:40,356] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:03:40,362] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15826042880, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
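The 404 "unknown pool" and [Errno 111] Connection refused responses above are the normal progression while a freshly wiped node comes back up: connection refused until ns_server is listening, then "unknown pool" until the node is initialized, with the client re-polling /pools/default every few seconds. A generic sketch of such a poll loop; wait_for is illustrative, not the rest client's actual code:

```python
import time

def wait_for(check, timeout=60.0, interval=3.0):
    """Call check() until it returns truthy or the timeout elapses."""
    deadline = time.monotonic() + timeout
    while True:
        if check():
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval)

# Usage: a probe that "succeeds" on the third attempt, standing in for
# GET /pools/default eventually returning 200 instead of 404/refused.
attempts = {"n": 0}
def probe():
    attempts["n"] += 1
    return attempts["n"] >= 3

print(wait_for(probe, timeout=5.0, interval=0.01))  # → True
```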
[2024-02-01 20:03:40,365] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:03:40,366] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:03:40,375] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 20:03:40,376] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 20:03:40,408] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:03:40,409] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:03:40,540] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:03:40,543] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:03:40,716] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:03:40,868] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:41,169] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:41,172] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:03:41,241] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:41,242] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:03:41,258] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:03:41,272] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:03:41,288] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:03:41,348] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:03:41,349] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:03:41,354] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15798947840, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:03:41,358] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:03:41,359] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:03:41,367] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:03:41,368] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:03:41,523] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:03:41,529] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:03:41,675] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:03:41,814] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:03:42,130] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:03:42,132] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:03:42,203] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:03:42,205] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:03:42,222] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:03:42,236] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:03:42,251] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:03:42,302] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:03:42,303] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:03:42,308] - [task:166] INFO - {'uptime': '19', 'memoryTotal': 16747917312, 'memoryFree': 15774547968, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:03:42,311] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:03:42,312] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:03:42,319] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:03:42,320] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:03:42,462] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:03:42,465] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:03:42,608] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:03:42,745] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:03:43,058] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:03:43,059] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:03:43,126] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:03:43,127] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:03:43,143] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:03:43,156] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
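The per-node setup sequence above repeats for each server: enable non-local diag/eval over SSH, read the compat version, set the GSI storage mode to plasma, set the memory quota, then fix the admin credentials via settings/web. A minimal sketch of that call order, using a hypothetical `node_init_plan` helper (the endpoint paths and payload values are taken from the log; executing the tuples with an HTTP client and basic auth against port 8091 would reproduce the sequence):

```python
def node_init_plan(memory_quota=8560, storage_mode="plasma",
                   port=8091, user="Administrator", password="password"):
    """Return the ordered REST calls the harness issues per node as
    (method, path, payload) tuples, mirroring the log above."""
    return [
        # enable remote diag/eval (the log runs this via curl over SSH)
        ("POST", "/diag/eval", "ns_config:set(allow_nonlocal_eval, true)."),
        # read the cluster compat version -> [7,6] in the log
        ("POST", "/diag/eval", "cluster_compat_mode:get_compat_version()."),
        # settings/indexes params : storageMode=plasma
        ("POST", "/settings/indexes", {"storageMode": storage_mode}),
        # pools/default params : memoryQuota=8560
        ("POST", "/pools/default", {"memoryQuota": memory_quota}),
        # settings/web params : port, username, password (init_cluster)
        ("POST", "/settings/web",
         {"port": port, "username": user, "password": password}),
    ]
```

The helper name is an assumption for illustration; in testrunner these calls live in `on_prem_rest_client` and `remote_util`, not in one function.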
[2024-02-01 20:03:43,172] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:03:43,229] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:03:43,230] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:03:43,236] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15746977792, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:03:43,240] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:03:43,241] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:03:43,248] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:03:43,249] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:03:43,403] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:03:43,407] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:03:43,582] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:03:43,718] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:03:43,991] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:03:43,993] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:03:44,065] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:03:44,067] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:03:44,083] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:03:44,097] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:03:44,114] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:03:44,167] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:03:44,227] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:03:44,228] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:03:44,409] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:03:49,413] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:03:49,461] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:03:49,502] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:03:50,173] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:03:50,206] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:04:00,245] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:04:00,246] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:04:14,527] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:04:14,562] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:04:24,604] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:04:24,604] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:04:39,072] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:04:39,073] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:04:39,105] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:04:49,237] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:04:59,264] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 20:04:59,282] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 20:04:59,282] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706846689235, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 016219e37c4288722f258623e73b7827', 'serverTime': '2024-02-01T20:04:49.235Z'} [2024-02-01 20:04:59,282] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706846689208, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:04:49.208Z'} [2024-02-01 20:04:59,283] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706846689189, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 016219e37c4288722f258623e73b7827", 'serverTime': '2024-02-01T20:04:49.189Z'} [2024-02-01 20:04:59,283] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706846689053, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:04:49.053Z'} [2024-02-01 20:04:59,283] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706846689048, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:04:49.048Z'} [2024-02-01 20:04:59,284] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706846679272, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:04:39.272Z'} [2024-02-01 20:04:59,284] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706846679048, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:04:39.048Z'} [2024-02-01 20:04:59,284] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706846679034, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:04:39.034Z'} [2024-02-01 20:04:59,285] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706846679008, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:04:39.008Z'} [2024-02-01 20:04:59,285] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706846675853, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T20:04:35.853Z'} Thu Feb 1 20:04:59 2024 Cluster instance shutdown with force [2024-02-01 20:04:59,297] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:04:59,301] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:04:59,311] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:04:59,315] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 Thu Feb 1 20:04:59 2024 [2024-02-01 20:04:59,423] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:04:59,456] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:04:59,480] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:04:59,488] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:04:59,603] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:04:59,663] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:04:59,695] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:04:59,696] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:04:59,946] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 20:04:59,948] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2004-diag.zip [2024-02-01 20:04:59,987] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting
logs from 172.23.123.160 [2024-02-01 20:04:59,989] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2004-diag.zip [2024-02-01 20:05:00,018] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 20:05:00,020] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2004-diag.zip [2024-02-01 20:05:00,025] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 20:05:00,027] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2004-diag.zip [2024-02-01 20:06:49,638] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:06:49,815] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2004-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 20:06:49,987] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2004-diag.zip [2024-02-01 20:06:50,035] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:06:50,849] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:06:51,029] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2004-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 20:06:51,204] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2004-diag.zip [2024-02-01 20:06:51,253] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:07:25,212] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:07:25,349] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2004-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 20:07:25,520] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2004-diag.zip [2024-02-01 20:07:25,573] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:07:50,296] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:07:50,473] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2004-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 20:07:50,639] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2004-diag.zip [2024-02-01 20:07:50,691] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 2 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_2 Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. 
See logs for detailed reason. You can try again.'} - rebalance failed Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed FAIL ====================================================================== FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. 
See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/gsi/collections_plasma.py", line 111, in setUp super(PlasmaCollectionsTests, self).setUp() File "pytests/gsi/base_gsi.py", line 43, in setUp super(BaseSecondaryIndexingTests, self).setUp() File "pytests/gsi/newtuq.py", line 11, in setUp super(QueryTests, self).setUp() File "pytests/basetestcase.py", line 485, in setUp self.fail(e) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 143.985s FAILED (failures=1) test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... 
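The tracebacks above originate in `_rebalance_status_and_progress`, which raises once the rebalance progress body reports status `none` together with an `errorMessage`. A minimal sketch of that decision, assuming a simplified `check_rebalance_status` helper over the already-parsed JSON body of the progress endpoint (the real method in `on_prem_rest_client.py` also extracts a progress percentage and handles a `running` payload shape):

```python
class RebalanceFailedException(Exception):
    """Stands in for membase.api.exception.RebalanceFailedException."""


def check_rebalance_status(progress_json):
    """Raise when the rebalance progress body signals a failed rebalance,
    as seen in the log: {'status': 'none', 'errorMessage': ...}."""
    status = progress_json.get("status")
    if status == "none" and "errorMessage" in progress_json:
        # mirrors the message format in the traceback above
        raise RebalanceFailedException(
            "Rebalance Failed: %s - rebalance failed" % progress_json)
    return status
```

In this run the underlying cause, per the UI logs, was `old_indexes_cleanup_failed` with `{error,eexist}` on 172.23.123.206: the rebalancer could not clean up leftover index directories on that node.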
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_3 ./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_scan_index=True Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_failure', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_scan_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 
'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 3, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_3'} [2024-02-01 20:07:50,707] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:07:50,841] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:07:50,982] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:07:51,306] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:07:51,327] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? [2024-02-01 20:07:51,374] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:07:51,374] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #3 test_system_failure_create_drop_indexes_simple============== [2024-02-01 20:07:51,375] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 20:07:51,403] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:07:51,404] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:07:51,434] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:07:51,434] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:07:51,435] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 20:07:51,463] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 20:07:51,463] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 
'cpu_utilization': 0.4999999888241291, 'mem_free': 15761317888, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:07:51,464] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.4000000096857548, 'mem_free': 15757201408, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:07:51,464] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.912499994039536, 'mem_free': 15547662336, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:07:51,464] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 20:07:51,468] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:07:51,640] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:07:51,782] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:07:52,092] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:07:52,099] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:07:52,236] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:07:52,383] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:07:52,697] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:07:52,706] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:07:52,882] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:07:53,033] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 
20:07:53,352] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:07:53,358] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:07:53,497] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:07:53,641] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:07:53,952] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:07:59,671] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 20:07:59,671] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 20:07:59,673] - [basetestcase:811] INFO - closing all memcached connections Cluster instance shutdown with force [2024-02-01 20:07:59,708] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 20:07:59,709] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 20:07:59,740] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 20:07:59,742] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 20:08:08,149] - [basetestcase:229] INFO - initializing cluster [2024-02-01 20:08:08,154] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:08:08,299] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:08:08,433] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:08,742] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 
20:08:08,782] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:08:08,923] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:08:09,066] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:09,374] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:09,434] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:08:09,608] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:08:09,609] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 20:08:10,782] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:10,783] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:10,801] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:08:10,802] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:08:10,809] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:10,811] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:08:10,862] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:10,866] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:08:11,004] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:08:11,141] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:11,455] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: 
debian 11 [2024-02-01 20:08:11,516] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:08:11,516] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:11,574] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:08:11,762] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:08:11,762] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 20:08:11,775] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:11,775] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 20:08:16,781] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:08:16,797] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:16,797] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:08:16,798] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:16,854] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2792137 [2024-02-01 20:08:16,854] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:08:16,858] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:08:16,998] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:08:17,218] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:17,489] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:17,530] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 
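
The entries above show the harness's per-node reset cycle before cluster init: stop couchbase-server via systemctl, wipe the data and config directories, start the service again, then confirm beam.smp is running. A minimal sketch of the command sequence as logged (hypothetical helper name; only builds the strings, execution over SSH is handled elsewhere in the harness):

```python
# Sketch of the node-reset commands seen in the log (remote_util lines).
# The function name and structure are illustrative, not from testrunner.
def node_reset_commands(install_dir="/opt/couchbase"):
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {install_dir}/var/lib/couchbase/data/*",
        f"rm -rf {install_dir}/var/lib/couchbase/config/*",
        "systemctl start couchbase-server.service",
    ]
```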
[2024-02-01 20:08:17,708] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:08:17,852] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:08:18,173] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:08:18,232] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:08:18,413] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:08:18,413] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:08:20,665] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:08:20,666] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:08:20,681] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:08:20,682] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:08:20,729] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:08:20,730] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:08:20,731] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:08:20,731] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:08:20,731] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:08:20,731] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:08:20,732] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:08:20,732] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:08:20,782] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:08:20,786] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:08:20,930] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:08:21,067] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:08:21,384] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:08:21,443] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:08:21,444] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:08:21,501] -
[remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:08:21,668] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:08:21,668] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 20:08:21,681] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:21,681] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 20:08:26,686] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:08:26,705] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:26,705] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:08:26,706] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:26,765] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3902168 [2024-02-01 20:08:26,766] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:08:26,769] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:08:26,868] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:08:27,061] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:27,379] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:27,420] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:08:27,597] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:08:27,739] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:28,050] - [remote_util:3685] INFO 
- extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:28,110] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:08:28,289] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:08:28,290] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 20:08:30,465] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:30,467] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:30,484] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:08:30,485] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:08:30,493] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:30,493] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:08:30,545] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:30,550] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:08:30,692] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:08:30,832] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:31,159] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:31,220] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:08:31,221] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:31,278] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:08:31,454] - [remote_util:3982] INFO - 
Running systemd command on this server [2024-02-01 20:08:31,454] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 20:08:31,467] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:31,468] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 20:08:36,473] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:08:36,489] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:36,490] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:08:36,490] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:36,550] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3253137 [2024-02-01 20:08:36,550] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:08:36,555] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:08:36,696] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:08:36,904] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:37,186] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:37,227] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:08:37,397] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:08:37,546] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:37,855] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 
20:08:37,916] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:08:38,095] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:08:38,096] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 20:08:39,250] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:39,251] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:39,265] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:08:39,266] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:08:39,272] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:39,273] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:08:39,321] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:39,325] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:08:39,456] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:08:39,581] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:39,851] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:39,913] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:08:39,913] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:39,970] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:08:40,147] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:08:40,147] - [remote_util:3352] INFO - running 
command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 20:08:40,160] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:40,160] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 20:08:45,164] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:08:45,180] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:45,180] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:08:45,181] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:08:45,243] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3257647 [2024-02-01 20:08:45,243] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:08:45,250] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:08:45,262] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:08:45,273] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:08:45,282] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:08:48,287] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:08:54,298] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:08:54,779] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:08:54,780] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 20:08:54,784] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15833952256, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 
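
While the freshly restarted nodes come up, the harness polls GET /pools/default and sees two distinct "not ready" states in the log: `[Errno 111] Connection refused` (ns_server is not listening yet) and a 404 with body `"unknown pool"` (the REST API answers but the node is uninitialized). A small classifier for those probe outcomes (hypothetical function; the states and bodies are the ones logged by on_prem_rest_client):

```python
def classify_pools_default(status_code=None, body=b"", conn_refused=False):
    """Interpret one GET /pools/default probe, mirroring the log's retry loop."""
    if conn_refused:
        return "server-not-up"          # socket error [Errno 111] Connection refused
    if status_code == 404 and b"unknown pool" in body:
        return "up-but-uninitialized"   # REST is up, cluster not initialized yet
    if status_code == 200:
        return "initialized"
    return "unknown"
```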
[2024-02-01 20:08:54,789] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:08:54,790] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:08:54,797] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 20:08:54,798] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 20:08:54,835] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:08:54,836] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 20:08:54,989] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:08:54,992] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:08:55,132] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:08:55,271] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:55,598] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:55,599] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
[2024-02-01 20:08:55,665] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:55,666] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:08:55,681] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:08:55,695] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:08:55,710] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:08:55,765] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:08:55,766] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:08:55,772] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15798034432, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:08:55,776] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic 
QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:08:55,777] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:08:55,792] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:08:55,793] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:08:55,944] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:08:55,947] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:08:56,091] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:08:56,225] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:56,542] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:56,544] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:08:56,614] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:56,615] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:08:56,632] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:08:56,646] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
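
The diag/eval check reports compat version [7,6], which matches the `'clusterCompatibility': 458758` field in nodes/self: the value packs major and minor as major * 65536 + minor. Decoding it is a one-liner (sketch; `divmod` on the packed integer):

```python
def decode_cluster_compat(code):
    # clusterCompatibility packs the version as major * 0x10000 + minor
    return divmod(code, 0x10000)

# 458758 == 7 * 65536 + 6, i.e. the [7,6] reported by
# cluster_compat_mode:get_compat_version() in the log
```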
[2024-02-01 20:08:56,662] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:08:56,718] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:08:56,720] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:08:56,725] - [task:166] INFO - {'uptime': '23', 'memoryTotal': 16747917312, 'memoryFree': 15804321792, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:08:56,728] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:08:56,729] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:08:56,737] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:08:56,738] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:08:56,889] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:08:56,894] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:08:57,069] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:08:57,217] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:57,538] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:57,542] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:08:57,612] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:57,614] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:08:57,630] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:08:57,645] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:08:57,661] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:08:57,720] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:08:57,721] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:08:57,726] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15734165504, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:08:57,730] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:08:57,731] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:08:57,740] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:08:57,740] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:08:57,895] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:08:57,898] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:08:58,074] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:08:58,215] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:08:58,533] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:08:58,536] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:08:58,605] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:08:58,606] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:08:58,624] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:08:58,639] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:08:58,657] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:08:58,713] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:08:58,772] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:08:58,773] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:08:58,966] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:09:03,972] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:09:04,021] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:09:04,055] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:09:04,720] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:09:04,755] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:09:14,791] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:09:14,792] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:09:28,589] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:09:28,623] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:09:38,660] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:09:38,661] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:09:52,398] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:09:52,398] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:09:52,425] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:10:02,571] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:10:12,596] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed
[2024-02-01 20:10:12,612] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 20:10:12,613] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847002570, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = b8ecadc9a5cd9b5c2f20f276071bc793', 'serverTime': '2024-02-01T20:10:02.570Z'}
[2024-02-01 20:10:12,613] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706847002541, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:10:02.541Z'}
[2024-02-01 20:10:12,613] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847002525, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = b8ecadc9a5cd9b5c2f20f276071bc793", 'serverTime': '2024-02-01T20:10:02.525Z'}
[2024-02-01 20:10:12,614] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706847002382, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:10:02.382Z'}
[2024-02-01 20:10:12,614] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706847002377, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:10:02.377Z'}
[2024-02-01 20:10:12,614] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706846992581, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:09:52.581Z'}
[2024-02-01 20:10:12,614] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706846992377, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:09:52.377Z'}
[2024-02-01 20:10:12,615] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706846992365, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:09:52.365Z'}
[2024-02-01 20:10:12,615] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706846992348, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:09:52.348Z'}
[2024-02-01 20:10:12,615] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706846989554, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T20:09:49.554Z'}
Cluster instance shutdown with force
[2024-02-01 20:10:12,626] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:10:12,631] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:10:12,638] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:10:12,642] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:10:12,750] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:10:12,775] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:10:12,778] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:10:12,789] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:10:12,934] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:10:12,979] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:10:12,986] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:10:12,988] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:10:13,286] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.157
[2024-02-01 20:10:13,288] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2010-diag.zip
[2024-02-01 20:10:13,297] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting
logs from 172.23.123.207 [2024-02-01 20:10:13,299] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2010-diag.zip [2024-02-01 20:10:13,320] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 20:10:13,322] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2010-diag.zip [2024-02-01 20:10:13,336] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 20:10:13,339] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2010-diag.zip [2024-02-01 20:12:00,965] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:12:01,049] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2010-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 20:12:01,227] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2010-diag.zip [2024-02-01 20:12:01,279] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:12:02,022] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:12:02,203] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2010-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 20:12:02,399] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2010-diag.zip [2024-02-01 20:12:02,449] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:12:36,338] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:12:36,516] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2010-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 20:12:36,691] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2010-diag.zip
[2024-02-01 20:12:36,744] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:13:01,635] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:13:01,811] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2010-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 20:13:01,998] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2010-diag.zip
[2024-02-01 20:13:02,047] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 3
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_3
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL

======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

----------------------------------------------------------------------
Ran 1 test in 141.919s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ...
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_4
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_kill_indexer=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_failure', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_kill_indexer': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec':
'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 4, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_4'} [2024-02-01 20:13:02,068] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:13:02,167] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:13:02,307] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:02,618] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:02,640] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? 
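The "Test Input params" dict above is built from the comma-separated key=value strings passed to ./testrunner via -p and -t. A minimal sketch of that parsing, assuming a simplified model (the function name is hypothetical; testrunner's real parser also merges the ini file, coerces some types, and lets later duplicates such as the repeated get-cbcollect-info override earlier ones):

```python
def parse_test_params(param_str):
    """Parse a comma-separated key=value string (the -p/-t style above)
    into a dict of strings. Hypothetical helper; values containing commas
    are not handled, and a duplicated key keeps the last value seen."""
    params = {}
    for pair in param_str.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)  # split only on the first '='
            params[key] = value
    return params

print(parse_test_params("default_bucket=false,nodes_init=3,GROUP=SIMPLE"))
# {'default_bucket': 'false', 'nodes_init': '3', 'GROUP': 'SIMPLE'}
```

Note how nodes_init=3 from -p is overridden by nodes_init=4 in the -t string under last-wins semantics, which matches the harness reporting 'nodes_init': '3' only because the ini-driven merge order differs from this sketch.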
[2024-02-01 20:13:02,684] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:13:02,684] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #4 test_system_failure_create_drop_indexes_simple============== [2024-02-01 20:13:02,685] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 20:13:02,713] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:13:02,714] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:13:02,740] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:13:02,741] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:13:02,741] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 20:13:02,768] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 20:13:02,769] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4000000096857548, 'mem_free': 15771598848, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:13:02,769] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3625000081956387, 'mem_free': 15750332416, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:13:02,770] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.224999997764826, 'mem_free': 15574532096, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:13:02,771] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 20:13:02,775] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, 
attempt#1 of 5 [2024-02-01 20:13:02,912] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:13:03,052] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:03,360] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:03,367] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:13:03,471] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:13:03,620] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:03,892] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:03,899] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:13:04,040] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:13:04,182] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:04,495] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:04,502] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:13:04,606] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:13:04,747] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:05,070] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:11,098] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 20:13:11,099] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 20:13:11,100] - [basetestcase:811] INFO - closing all memcached 
connections Cluster instance shutdown with force [2024-02-01 20:13:11,134] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 20:13:11,134] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 20:13:11,163] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 20:13:11,164] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 20:13:19,224] - [basetestcase:229] INFO - initializing cluster [2024-02-01 20:13:19,230] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:13:19,369] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:13:19,509] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:19,826] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:19,868] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:13:20,009] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:13:20,153] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:20,417] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:20,479] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:20,572] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:13:20,573] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 
20:13:21,785] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:21,786] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:21,802] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:13:21,802] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:13:21,810] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:21,810] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:13:21,863] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:21,867] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:13:22,004] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:13:22,141] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:22,453] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:22,516] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:22,517] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:22,575] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:13:22,660] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:13:22,660] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 20:13:22,671] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:22,672] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... 
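The per-node reset visible above follows a fixed sequence: stop the service, wipe the data and config directories, then start the service again. A minimal sketch of that sequence as data, assuming the paths shown in the log (the helper name is hypothetical; the harness runs these over SSH as root rather than locally):

```python
def node_reset_commands(install_dir="/opt/couchbase/var/lib/couchbase"):
    """Return the shell commands used to factory-reset a Couchbase node,
    mirroring the stop -> wipe -> start cycle in the log above.
    Hypothetical helper; install_dir is the default path from the log."""
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {install_dir}/data/*",    # remove all bucket/index data
        f"rm -rf {install_dir}/config/*",  # remove cluster configuration
        "systemctl start couchbase-server.service",
    ]

for cmd in node_reset_commands():
    print(cmd)
```

The wipe happens strictly between stop and start so that no process is rewriting the directories while they are being removed; the "Directory not empty" errors later in this log suggest that ordering is not always sufficient.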
[2024-02-01 20:13:27,674] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:13:27,690] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:27,691] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:13:27,691] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:27,747] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2797627 [2024-02-01 20:13:27,749] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:13:27,753] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:13:27,891] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:13:28,088] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:28,355] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:28,394] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:13:28,530] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:13:28,672] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:28,983] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:29,043] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:29,231] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:13:29,232] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 20:13:31,513] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:13:31,514] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:31,532] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:13:31,534] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:13:31,584] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ... [2024-02-01 20:13:31,585] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty [2024-02-01 20:13:31,586] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty [2024-02-01 20:13:31,586] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty [2024-02-01 20:13:31,587] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty [2024-02-01 20:13:31,587] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty [2024-02-01 20:13:31,588] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty [2024-02-01 20:13:31,589] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf 
/opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:13:31,637] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:31,642] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:13:31,778] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:13:31,923] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:32,186] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:32,245] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:32,247] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:32,300] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:13:32,479] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:13:32,480] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 20:13:32,493] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:32,493] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... 
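The rm -rf on 172.23.123.206 above failed with "Directory not empty" for several @2i shard directories, a classic symptom of a process (here, likely the indexer) still writing into the tree while it is being deleted. The harness just logs the error and continues; a more defensive cleanup would retry. A minimal sketch under that assumption (the function name is hypothetical and not part of testrunner):

```python
import os
import shutil
import tempfile
import time

def rmtree_with_retries(path, attempts=3, delay=0.1):
    """Remove a directory tree, retrying on transient OSErrors such as
    ENOTEMPTY ('Directory not empty'), which can occur when another
    process is still creating files inside the tree. Sketch only."""
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            if attempt == attempts - 1:
                raise  # give up after the final attempt
            time.sleep(delay)
    return False

# Usage: build a nested scratch tree (like @2i/shards/...) and remove it.
scratch = tempfile.mkdtemp()
os.makedirs(os.path.join(scratch, "shards", "shard123"), exist_ok=True)
print(rmtree_with_retries(scratch))  # True
```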
[2024-02-01 20:13:37,499] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:13:37,516] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:37,516] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:13:37,517] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:37,572] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3907543 [2024-02-01 20:13:37,573] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:13:37,578] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:13:37,720] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:13:37,913] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:38,226] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:38,270] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:13:38,416] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:13:38,556] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:38,873] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:38,937] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:39,116] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:13:39,117] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 20:13:41,499] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:13:41,500] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:41,515] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:13:41,516] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:13:41,523] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:41,523] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:13:41,572] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:41,577] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:13:41,716] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:13:41,859] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:42,129] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:42,186] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:42,187] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:42,247] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:13:42,380] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:13:42,381] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 20:13:42,392] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:42,393] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:13:47,398] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:13:47,414] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:47,416] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:13:47,417] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:47,474] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3258456 [2024-02-01 20:13:47,476] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:13:47,482] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:13:47,625] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:13:47,830] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:48,142] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:48,177] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:13:48,316] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:13:48,457] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:48,721] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:48,785] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:48,966] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:13:48,967] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 20:13:50,215] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:13:50,216] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:50,232] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:13:50,234] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:13:50,244] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:50,244] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:13:50,292] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:50,296] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:13:50,442] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:13:50,594] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:13:50,911] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:13:50,977] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:13:50,979] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:51,034] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:13:51,215] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:13:51,216] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 20:13:51,228] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:51,229] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
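The same stop → wipe → restart cycle is repeated for every node above. A minimal sketch of that sequence, assuming a hypothetical `run` callable standing in for the harness's remote-shell (SSH) helper:

```python
import time

# Commands taken verbatim from the log's per-node reset cycle.
RESET_CMDS = [
    "systemctl stop couchbase-server.service",
    "rm -rf /opt/couchbase/var/lib/couchbase/data/*",
    "rm -rf /opt/couchbase/var/lib/couchbase/config/*",
    "systemctl start couchbase-server.service",
]

def reset_node(run, wait_secs=5):
    """Issue the reset commands in order, then pause before re-checking.

    `run` is a hypothetical remote-execution callable; the pause mirrors
    the log's "sleep for 5 secs. waiting for couchbase server to come up".
    Returns the command list that was issued.
    """
    for cmd in RESET_CMDS:
        run(cmd)
    time.sleep(wait_secs)
    return RESET_CMDS
```

After the pause, the harness verifies recovery by checking `systemctl status` and looking for the `beam.smp` process, as the surrounding entries show.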
[2024-02-01 20:13:56,233] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:13:56,247] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:13:56,248] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:13:56,249] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:13:56,308] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3262832 [2024-02-01 20:13:56,309] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:13:56,315] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:13:56,326] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:13:56,340] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:13:56,348] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:13:59,353] - 
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:14:05,364] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:14:06,205] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:14:06,206] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 20:14:06,211] - [task:166] INFO - {'uptime': '38', 'memoryTotal': 16747913216, 'memoryFree': 15820406784, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:14:06,215] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 20:14:06,216] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:14:06,223] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 20:14:06,224] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 20:14:06,258] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:14:06,259] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 20:14:06,389] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:14:06,392] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:14:06,535] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:14:06,685] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:14:07,015] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:14:07,016] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:14:07,082] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:14:07,082] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:14:07,097] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
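The `404 unknown pool` responses and `[Errno 111] Connection refused` errors above are expected while a freshly restarted node boots: first the REST port is not listening, then the node answers but is not yet part of any pool. The harness simply retries until `GET /pools/default` succeeds. A rough sketch of that wait loop, with a hypothetical `fetch` callable in place of the real REST client:

```python
import time

def wait_for_pools_default(fetch, attempts=6, delay=0.0):
    """Poll a node until GET /pools/default returns a usable body.

    `fetch` returns the response body as bytes. It raises ConnectionError
    while the REST port is down ([Errno 111] in the log); a body containing
    "unknown pool" means the node is up but not yet initialised.
    """
    last = b""
    for _ in range(attempts):
        try:
            last = fetch()
        except ConnectionError:
            time.sleep(delay)
            continue
        if b"unknown pool" in last:
            time.sleep(delay)
            continue
        return last
    raise TimeoutError("node never reported a pool: %r" % last)
```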
[2024-02-01 20:14:07,111] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:14:07,126] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:14:07,178] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:14:07,180] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:14:07,185] - [task:166] INFO - {'uptime': '33', 'memoryTotal': 16747913216, 'memoryFree': 15782920192, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:14:07,190] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:14:07,192] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:14:07,201] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
20:14:07,201] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:14:07,331] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:14:07,335] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:14:07,475] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:14:07,619] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:14:07,934] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:14:07,937] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:14:08,003] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:14:08,005] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:14:08,019] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:14:08,034] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
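Each node goes through the same REST initialisation visible in these entries: set the memory quota on `pools/default`, declare services via `node/controller/setupServices`, set admin credentials via `settings/web`, and choose the index storage mode via `settings/indexes`. A sketch of the request plan under those assumptions (`build_init_plan` is illustrative, not the testrunner API):

```python
from urllib.parse import urlencode

def build_init_plan(host, services=("kv",), quota_mb=8560,
                    user="Administrator", password="password",
                    storage_mode="plasma"):
    """Return the (method, path, form-body) REST calls used to init one node.

    Endpoint paths and parameter names are taken from the log entries;
    everything else here is an assumption for illustration.
    """
    base = "http://%s:8091" % host
    return [
        ("POST", base + "/pools/default",
         urlencode({"memoryQuota": quota_mb})),
        ("POST", base + "/node/controller/setupServices",
         urlencode({"hostname": host, "user": user, "password": password,
                    "services": ",".join(services)})),
        ("POST", base + "/settings/web",
         urlencode({"port": 8091, "username": user, "password": password})),
        ("POST", base + "/settings/indexes",
         urlencode({"storageMode": storage_mode})),
    ]
```

Note how `urlencode` reproduces the log's `services=kv%2Cn1ql` form encoding for a node carrying both the kv and n1ql services.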
[2024-02-01 20:14:08,050] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:14:08,104] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:14:08,105] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:14:08,110] - [task:166] INFO - {'uptime': '23', 'memoryTotal': 16747917312, 'memoryFree': 15797178368, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:14:08,114] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:14:08,115] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:14:08,123] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:14:08,124] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:14:08,252] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:14:08,257] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:14:08,429] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:14:08,575] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:14:08,842] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:14:08,844] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:14:08,910] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:14:08,911] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:14:08,927] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:14:08,941] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:14:08,957] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:14:09,012] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:14:09,013] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:14:09,018] - [task:166] INFO - {'uptime': '13', 'memoryTotal': 16747917312, 'memoryFree': 15745896448, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:14:09,022] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:14:09,023] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:14:09,031] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:14:09,031] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:14:09,167] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:14:09,172] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:14:09,316] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:14:09,455] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:14:09,765] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:14:09,766] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:14:09,836] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:14:09,837] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:14:09,852] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:14:09,864] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:14:09,879] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:14:09,926] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:14:09,989] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:14:09,991] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:14:10,172] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:14:15,177] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:14:15,223] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:14:15,253] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:14:15,932] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:14:15,962] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:14:25,998] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.03 seconds [2024-02-01 20:14:25,998] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:14:39,789] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:14:39,820] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:14:49,856] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:14:49,857] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:15:03,663] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:15:03,664] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:15:03,694] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:15:13,837] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:15:23,859] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 20:15:23,878] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 20:15:23,878] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847313835, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 70f5a90b44c6006d56f54ade01981155', 'serverTime': '2024-02-01T20:15:13.835Z'} [2024-02-01 20:15:23,878] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706847313805, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:15:13.805Z'} [2024-02-01 20:15:23,879] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847313789, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 70f5a90b44c6006d56f54ade01981155", 'serverTime': '2024-02-01T20:15:13.789Z'} [2024-02-01 20:15:23,879] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706847313644, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:15:13.644Z'} [2024-02-01 20:15:23,880] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706847313639, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:15:13.639Z'} [2024-02-01 20:15:23,880] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706847303839, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:15:03.839Z'} [2024-02-01 20:15:23,880] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706847303639, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:15:03.639Z'} [2024-02-01 20:15:23,881] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706847303626, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:15:03.626Z'} [2024-02-01 20:15:23,881] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706847303606, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:15:03.606Z'} [2024-02-01 20:15:23,881] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706847300725, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T20:15:00.725Z'} Cluster instance shutdown with force [2024-02-01 20:15:23,893] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:15:23,895] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:15:23,906] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:15:23,911] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:15:24,049] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:15:24,077] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:15:24,079] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:15:24,085] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:15:24,201] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:15:24,281] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:15:24,282] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:15:24,288] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:15:24,552] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 20:15:24,554] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2015-diag.zip [2024-02-01 20:15:24,615] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting
logs from 172.23.123.160 [2024-02-01 20:15:24,616] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2015-diag.zip [2024-02-01 20:15:24,625] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 20:15:24,629] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2015-diag.zip [2024-02-01 20:15:24,630] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 20:15:24,634] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2015-diag.zip [2024-02-01 20:17:14,077] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:17:14,256] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2015-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 20:17:14,459] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2015-diag.zip [2024-02-01 20:17:14,511] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:17:15,449] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:17:15,627] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2015-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 20:17:15,824] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2015-diag.zip [2024-02-01 20:17:15,874] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:17:50,134] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:17:50,312] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2015-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 20:17:50,478] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2015-diag.zip [2024-02-01 20:17:50,528] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:14,838] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:15,016] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2015-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 20:18:15,212] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2015-diag.zip [2024-02-01 20:18:15,263] - [remote_util:3401] INFO - command executed successfully with root

summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 4
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_4

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

FAIL

======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason.
You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 141.822s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_5
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_kill_memcached=True
Test Input params:
{'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_failure', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_kill_memcached': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 5, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_5'}
[2024-02-01 20:18:15,300] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:18:15,400] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:18:15,540] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:18:15,858] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:18:15,881] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
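The doubled tracebacks above, joined by "During handling of the above exception, another exception occurred", are Python's implicit exception chaining: setUp catches the RebalanceFailedException and calls self.fail(e), and unittest's fail() raises a new AssertionError while the original exception is still being handled. A minimal sketch of that mechanism (class and function names here are stand-ins, not testrunner's real code):

```python
class RebalanceFailedException(Exception):
    """Stand-in for membase.api.exception.RebalanceFailedException."""

def rebalance():
    # Stand-in for the rebalance task that fails in the log.
    raise RebalanceFailedException("Rebalance Failed: ...")

def set_up():
    try:
        rebalance()
    except RebalanceFailedException as e:
        # Mirrors unittest's self.fail(e): raising a new exception
        # inside the handler of the old one chains them implicitly.
        raise AssertionError(str(e))

try:
    set_up()
except AssertionError as err:
    # The original exception survives on __context__, which is what
    # prints the "During handling of the above exception, another
    # exception occurred:" section between the two tracebacks.
    assert isinstance(err.__context__, RebalanceFailedException)
```

This is why the log shows the rebalance traceback first and the AssertionError traceback second for the same failure.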
[2024-02-01 20:18:15,928] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:18:15,929] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #5 test_system_failure_create_drop_indexes_simple============== [2024-02-01 20:18:15,929] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 20:18:15,959] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:18:15,959] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:18:15,988] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:18:15,989] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:18:15,989] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 20:18:16,020] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 20:18:16,020] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3875000029802322, 'mem_free': 15790534656, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:18:16,021] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.475000012665987, 'mem_free': 15762337792, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:18:16,021] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.0625, 'mem_free': 15592374272, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:18:16,022] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 20:18:16,025] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 
[2024-02-01 20:18:16,164] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:18:16,304] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:16,574] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:16,582] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:18:16,682] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:18:16,822] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:17,135] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:17,143] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:18:17,245] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:18:17,391] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:17,707] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:17,714] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:18:17,849] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:18:17,992] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:18,324] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:24,375] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 20:18:24,375] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 20:18:24,377] - [basetestcase:811] INFO - closing all memcached connections 
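Each "SSH Connecting ... attempt#1 of 5" record above comes from a bounded retry loop around the connection attempt. A generic sketch of that pattern, assuming nothing about testrunner's actual remote_util API (the `connect` callable and parameter names are illustrative):

```python
import time

def connect_with_retry(connect, max_attempts=5, delay=5):
    """Retry a connection callable up to max_attempts times.

    `connect` is any callable that returns a connection on success and
    raises OSError (the base of socket errors) on failure; this is a
    sketch of the 'attempt#N of 5' pattern in the log, not the real
    remote_util implementation.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            print(f"SSH Connecting, attempt#{attempt} of {max_attempts}")
            return connect()
        except OSError:
            if attempt == max_attempts:
                raise  # exhausted all attempts; propagate the last error
            time.sleep(delay)
```

In the log every node connects on the first attempt, so only `attempt#1 of 5` appears.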
Cluster instance shutdown with force [2024-02-01 20:18:24,411] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 20:18:24,411] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 20:18:24,442] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 20:18:24,443] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 20:18:46,736] - [basetestcase:229] INFO - initializing cluster [2024-02-01 20:18:46,764] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:18:46,912] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:18:47,133] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:47,398] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:47,436] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:18:47,537] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:18:47,675] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:47,994] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:48,057] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:18:48,238] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:18:48,238] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 20:18:49,588] - 
[remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:49,589] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:18:49,604] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:18:49,605] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:18:49,614] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:49,615] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:18:49,662] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:49,669] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:18:49,841] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:18:49,983] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:50,298] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:50,362] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:18:50,362] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:18:50,417] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:18:50,598] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:18:50,599] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 20:18:50,612] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:50,613] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... 
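The records above show the per-node reset the harness performs before each test: stop the service, wipe the data and config directories, start the service, then wait and verify the beam.smp process. A sketch that only assembles that command sequence (paths and service name taken from the log; the helper itself is hypothetical):

```python
# Commands the harness issues over SSH for each node reset, in order.
DATA_GLOB = "/opt/couchbase/var/lib/couchbase/data/*"
CONFIG_GLOB = "/opt/couchbase/var/lib/couchbase/config/*"

def node_reset_commands():
    """Return the reset sequence visible in the log for one node.

    After the final command the harness sleeps ~5 seconds and checks
    that the beam.smp process is running before proceeding.
    """
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {DATA_GLOB}",
        f"rm -rf {CONFIG_GLOB}",
        "systemctl start couchbase-server.service",
    ]
```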
[2024-02-01 20:18:55,618] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:18:55,633] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:18:55,635] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:18:55,635] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:18:55,690] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2803133 [2024-02-01 20:18:55,690] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:18:55,695] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:18:55,793] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:18:55,982] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:56,242] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:56,284] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:18:56,425] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:18:56,556] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:18:56,817] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:18:56,879] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:18:57,055] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:18:57,056] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 20:18:59,258] - [remote_util:3401] INFO - command 
executed successfully with root
[2024-02-01 20:18:59,259] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:18:59,321] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:18:59,323] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:18:59,379] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:18:59,379] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:18:59,380] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:18:59,380] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:18:59,381] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:18:59,381] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:18:59,382] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:18:59,382] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:18:59,431] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:18:59,438] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:18:59,611] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:19:02,124] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:19:02,439] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:19:02,499] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:19:02,500] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:19:02,558] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:19:02,686] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:19:02,687] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:19:02,701] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:19:02,701] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
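The "Directory not empty" errors on 172.23.123.206 above are the classic `rm -rf` race: another process (here plausibly the indexer, given the `@2i` paths) recreates files inside a directory between the moment rm empties it and the moment rm removes it. One common mitigation, sketched here as a hypothetical local helper rather than anything testrunner provides, is to retry the removal a few times:

```python
import os
import shutil
import time

def wipe_dir_with_retry(path, attempts=3, delay=2):
    """Remove a directory tree, retrying on 'Directory not empty'-style
    races where a concurrent writer recreates files mid-removal.

    Hypothetical helper for illustration; delay/attempts are arbitrary.
    Returns True once the path is gone, False if all attempts failed.
    """
    for _ in range(attempts):
        try:
            if os.path.isdir(path):
                shutil.rmtree(path)
            return True
        except OSError:
            time.sleep(delay)  # give the writer a chance to finish
    return False
```

The more robust fix, which the harness effectively relies on, is ordering: stop the service fully before wiping its data directory.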
[2024-02-01 20:19:07,706] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:19:07,723] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:07,724] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:19:07,724] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:07,781] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3912929 [2024-02-01 20:19:07,782] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:19:07,786] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:19:07,931] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:19:08,131] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:08,450] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:08,490] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:19:08,628] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:19:08,767] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:09,084] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:09,145] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:19:09,322] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:19:09,322] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 20:19:11,486] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:19:11,487] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:11,505] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:19:11,507] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:19:11,514] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:11,515] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:19:11,568] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:11,574] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:19:11,719] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:19:11,859] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:12,183] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:12,248] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:19:12,249] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:12,310] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:19:12,495] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:19:12,495] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 20:19:12,509] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:12,509] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:19:17,515] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:19:17,530] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:17,531] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:19:17,531] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:17,587] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3263764 [2024-02-01 20:19:17,588] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:19:17,592] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:19:17,731] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:19:17,924] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:18,225] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:18,263] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:19:18,399] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:19:18,539] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:18,810] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:18,868] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:19:19,052] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:19:19,052] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 20:19:20,422] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:19:20,423] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:20,441] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:19:20,442] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:19:20,450] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:20,452] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:19:20,504] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:20,509] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:19:20,649] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:19:20,789] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:21,119] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:21,180] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:19:21,181] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:21,238] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:19:21,418] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:19:21,419] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 20:19:21,430] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:21,432] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:19:26,436] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:19:26,449] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:26,449] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:19:26,450] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:19:26,505] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3268027 [2024-02-01 20:19:26,505] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:19:26,509] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:19:26,517] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:19:26,525] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:19:26,531] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:19:29,537] - 
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:19:35,547] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:19:36,097] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:19:36,098] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 20:19:36,103] - [task:166] INFO - {'uptime': '43', 'memoryTotal': 16747913216, 'memoryFree': 15807377408, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:19:36,106] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 20:19:36,107] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:19:36,115] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 20:19:36,116] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 20:19:36,147] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:19:36,148] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 20:19:36,301] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:19:36,313] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:19:36,485] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:19:36,629] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:36,927] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:36,929] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:19:37,001] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:37,002] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:19:37,017] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
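The ERROR records above are expected during node bring-up: GET /pools/default returns "[Errno 111] Connection refused" while ns_server is still restarting, then 404 "unknown pool" once it listens but before the node is initialized, and finally 200 after init. A polling sketch of that state machine, with the HTTP call injected as a callable so nothing here is tied to testrunner's real REST client:

```python
import time

def wait_until_initialized(fetch, timeout=60, interval=3):
    """Poll /pools/default until the node reports initialized.

    `fetch` is an injected callable (illustrative, not a real API) that
    returns an HTTP status code, or raises ConnectionRefusedError while
    the server is not yet listening. 404 means ns_server is up but the
    pool is not initialized ('unknown pool'); 200 means ready.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        try:
            if fetch() == 200:
                return True
            # 404 'unknown pool': up but uninitialized - keep polling
        except ConnectionRefusedError:
            pass  # server not listening yet - keep polling
        time.sleep(interval)
    return False
```

This matches the log's progression on 172.23.123.160: two connection-refused errors, one 404, then the node answers.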
[2024-02-01 20:19:37,031] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:19:37,047] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:19:37,105] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:19:37,106] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:19:37,111] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15777726464, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:19:37,116] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:19:37,117] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:19:37,126] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
20:19:37,127] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:19:37,277] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:19:37,282] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:19:37,459] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:19:37,604] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:37,917] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:37,919] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:19:37,994] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:37,995] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:19:38,012] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:19:38,027] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:19:38,044] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:19:38,101] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:19:38,102] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:19:38,108] - [task:166] INFO - {'uptime': '23', 'memoryTotal': 16747917312, 'memoryFree': 15773937664, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:19:38,113] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:19:38,114] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:19:38,124] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:19:38,124] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:19:38,279] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:19:38,285] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:19:38,388] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:19:38,529] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:38,837] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:38,857] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:19:38,911] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:38,912] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:19:38,929] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:19:38,944] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:19:38,960] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:19:39,016] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:19:39,018] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:19:39,023] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15761289216, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:19:39,026] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:19:39,027] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:19:39,035] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:19:39,035] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:19:39,202] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:19:39,206] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:19:39,381] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:19:40,543] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:19:40,862] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:19:40,865] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:19:40,940] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:19:40,941] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:19:40,957] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:19:40,971] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:19:40,988] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:19:41,039] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:19:41,103] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:19:41,138] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."'
[2024-02-01 20:19:41,334] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 20:19:46,339] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 20:19:46,389] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 20:19:46,421] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:19:47,045] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 20:19:47,078] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:19:57,118] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 20:19:57,118] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:20:11,611] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 20:20:11,652] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:20:21,695] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 20:20:21,695] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:20:36,296] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:20:36,297] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:20:36,331] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 20:20:46,467] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 20:20:56,495] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason.
You can try again.'} - rebalance failed
[2024-02-01 20:20:56,517] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 20:20:56,517] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847646465, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = e34bcded721a6515d9bfe193d3670286', 'serverTime': '2024-02-01T20:20:46.465Z'}
[2024-02-01 20:20:56,517] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706847646437, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:20:46.437Z'}
[2024-02-01 20:20:56,518] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847646419, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = e34bcded721a6515d9bfe193d3670286", 'serverTime': '2024-02-01T20:20:46.419Z'}
[2024-02-01 20:20:56,518] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706847646276, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:20:46.276Z'}
[2024-02-01 20:20:56,519] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706847646271, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:20:46.271Z'}
[2024-02-01 20:20:56,519] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706847636488, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:20:36.488Z'}
[2024-02-01 20:20:56,519] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706847636271, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:20:36.271Z'}
[2024-02-01 20:20:56,520] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706847636258, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:20:36.258Z'}
[2024-02-01 20:20:56,520] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706847636237, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:20:36.237Z'}
[2024-02-01 20:20:56,521] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706847632890, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up.
Tags: []", 'serverTime': '2024-02-01T20:20:32.890Z'}
Thu Feb 1 20:20:56 2024
Cluster instance shutdown with force
[2024-02-01 20:20:56,578] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:20:56,583] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:20:56,592] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:20:56,596] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
Thu Feb 1 20:20:56 2024
[2024-02-01 20:20:56,748] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:20:56,768] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:20:56,777] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:20:56,783] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:20:56,957] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:20:56,990] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:20:56,990] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:20:57,013] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:20:57,302] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.160
[2024-02-01 20:20:57,305] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2020-diag.zip
[2024-02-01 20:20:57,322] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting
logs from 172.23.123.157 [2024-02-01 20:20:57,324] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2020-diag.zip [2024-02-01 20:20:57,334] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:20:57,336] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 20:20:57,340] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2020-diag.zip Collecting logs from 172.23.123.206 [2024-02-01 20:20:57,341] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2020-diag.zip [2024-02-01 20:22:47,011] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:22:47,186] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2020-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 20:22:47,426] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2020-diag.zip [2024-02-01 20:22:47,475] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:22:47,998] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:22:48,181] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2020-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 20:22:48,415] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2020-diag.zip [2024-02-01 20:22:48,465] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:23:17,555] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:23:17,687] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2020-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 20:23:17,890] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2020-diag.zip
[2024-02-01 20:23:17,939] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:23:47,392] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:23:47,572] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2020-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 20:23:47,699] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2020-diag.zip
[2024-02-01 20:23:47,706] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 5
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_5
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason.
You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 161.275s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_6
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_full,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params:
{'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_full', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 6, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_6'}
[2024-02-01 20:23:47,787] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:23:47,917] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:23:48,107] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:23:48,414] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:23:48,439] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 20:23:48,482] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:23:48,482] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #6 test_system_failure_create_drop_indexes_simple ==============
[2024-02-01 20:23:48,483] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:23:48,512] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:23:48,513] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:23:48,543] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:23:48,543] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:23:48,544] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:23:48,574] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:23:48,574] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3500000014901161, 'mem_free': 15782342656, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:23:48,575] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3624999895691872, 'mem_free': 15758950400, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:23:48,575] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.974999990314245, 'mem_free': 15542239232, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:23:48,576] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:23:48,581] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:23:48,680] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:23:48,823] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:23:49,142] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:23:49,147] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:23:49,249] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:23:49,386] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:23:49,714] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:23:49,721] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:23:49,824] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:23:49,965] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:23:50,247] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:23:50,255] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:23:50,390] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:23:50,532] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:23:50,859] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:23:57,019] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:23:57,020] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:23:57,046] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 20:23:57,085] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:23:57,086] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:23:57,119] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:23:57,121] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:24:20,858] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:24:20,866] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:24:21,016] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:24:21,221] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:21,490] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:21,533] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:24:21,676] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:24:21,817] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:22,136] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:22,196] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:22,373] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:24:22,374] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:24:23,597] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:23,598] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:23,612] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:24:23,614] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:24:23,620] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:23,622] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:24:23,671] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:23,675] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:24:23,812] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:24:23,948] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:24,258] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:24,319] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:24,320] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:24,384] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:24:24,555] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:24:24,556] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:24:24,569] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:24,570] - [remote_util:347] INFO - 172.23.123.207: sleep for 5 secs. waiting for couchbase server to come up ...
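The restart sequence above uses a fixed "sleep for 5 secs" before checking `systemctl status`. A readiness poll is more robust than a fixed sleep; a minimal sketch under stated assumptions (the helper is hypothetical, not part of testrunner; `probe` would typically be an HTTP GET against `http://<node>:8091/pools` returning True on HTTP 200):

```python
import time

def wait_for_ready(probe, timeout=60.0, interval=5.0, sleep=time.sleep):
    """Poll probe() until it returns True or `timeout` seconds elapse.

    probe: zero-argument callable that reports whether the node is up,
    e.g. one that GETs http://<node>:8091/pools and checks for HTTP 200.
    Returns True if the node came up within the timeout, else False.
    """
    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        if probe():
            return True
        sleep(interval)  # injectable so tests can skip real delays
    return False
```

Injecting `sleep` keeps the helper testable without real delays; in the scenario above it would replace the unconditional 5-second wait per node.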
[2024-02-01 20:24:29,575] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:24:29,589] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:29,589] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:24:29,590] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:29,647] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2808635
[2024-02-01 20:24:29,648] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:24:29,653] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:24:29,794] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:24:29,990] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:30,308] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:30,347] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:24:30,523] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:24:30,687] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:30,968] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:31,035] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:31,212] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:24:31,213] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:24:33,520] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:33,520] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:33,537] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:24:33,538] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:24:33,589] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:24:33,591] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:24:33,591] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:24:33,592] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:24:33,592] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:24:33,593] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:24:33,593] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:24:33,593] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:24:33,642] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:33,645] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:24:33,814] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:24:33,957] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:34,228] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:34,290] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:34,291] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:34,348] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:24:34,525] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:24:34,526] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:24:34,540] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:34,540] - [remote_util:347] INFO - 172.23.123.206: sleep for 5 secs. waiting for couchbase server to come up ...
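The `rm -rf .../data/*` on 172.23.123.206 above reports "Directory not empty" for the `@2i` index directories, the classic symptom of another process (here, plausibly the indexer) still creating files while the tree is being deleted. One mitigation on the cleanup side is to retry the delete after a short pause; a sketch under that assumption (the helper name is illustrative, not testrunner API):

```python
import shutil
import time

def rmtree_with_retry(path, attempts=5, delay=1.0, sleep=time.sleep):
    """Remove a directory tree, retrying on OSError (e.g. ENOTEMPTY,
    which rm reports as 'Directory not empty' when a concurrent writer
    recreates entries mid-delete)."""
    for attempt in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except FileNotFoundError:
            return True  # already gone: treat as success
        except OSError:
            if attempt == attempts - 1:
                raise  # persistent failure: surface it to the caller
            sleep(delay)
    return False
```

Retrying only papers over the race, of course; the log's later rebalance failure suggests the leftover `@2i` files were the real problem, so verifying the indexer process is fully stopped before the delete would be the stronger fix.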
[2024-02-01 20:24:39,545] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:24:39,562] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:39,564] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:24:39,565] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:39,623] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3918323
[2024-02-01 20:24:39,624] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:24:39,630] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:24:39,817] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:24:40,020] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:40,334] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:40,377] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:24:40,553] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:24:40,690] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:41,002] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:41,066] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:41,246] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:24:41,247] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:24:43,575] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:43,576] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:43,592] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:24:43,593] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:24:43,600] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:43,602] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:24:43,653] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:43,657] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:24:43,828] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:24:43,976] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:44,289] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:44,349] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:44,349] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:44,405] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:24:44,538] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:24:44,539] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 20:24:44,554] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:44,554] - [remote_util:347] INFO - 172.23.123.157: sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:24:49,560] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:24:49,574] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:49,575] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:24:49,575] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:49,633] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3269077
[2024-02-01 20:24:49,635] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:24:49,639] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:24:49,817] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:24:50,034] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:50,351] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:50,393] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:24:50,532] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:24:50,673] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:50,987] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:51,046] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:51,230] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:24:51,231] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:24:52,439] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:52,441] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:52,459] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:24:52,460] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:24:52,468] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:52,469] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:24:52,517] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:52,522] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:24:52,660] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:24:52,803] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:24:53,118] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:24:53,181] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:24:53,182] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:53,242] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:24:53,420] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:24:53,421] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 20:24:53,436] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:53,436] - [remote_util:347] INFO - 172.23.123.160: sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:24:58,441] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:24:58,458] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:24:58,459] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:24:58,459] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:24:58,515] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3273211
[2024-02-01 20:24:58,516] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:24:58,521] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:24:58,604] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:24:58,614] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:24:58,623] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:25:01,628] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:25:07,635] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 20:25:07,808] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:25:07,811] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:25:07,817] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15821791232, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:25:07,821] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:25:07,821] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:25:07,830] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 20:25:07,830] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 20:25:07,867] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:25:07,868] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:25:08,016] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:25:08,020] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:25:08,160] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:25:08,296] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:25:08,614] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:25:08,615] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:25:08,681] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:25:08,681] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:25:08,698] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
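The node-init calls above follow a fixed order: set the memory quota on `pools/default`, declare services via `node/controller/setupServices`, set credentials on `settings/web`, then (shortly after) set the indexer `storageMode`. A sketch that captures that order as data, so it can be replayed or inspected (endpoint paths and parameters are taken from the log; the helper itself is hypothetical, not testrunner API):

```python
def node_init_plan(host, user="Administrator", password="password",
                   quota_mb=8560, services=("kv", "n1ql"),
                   storage_mode="plasma"):
    """Return the ordered (method, url, params) REST calls for
    single-node init, mirroring the sequence in the log above."""
    base = f"http://{host}:8091"
    return [
        # 1. cluster memory quota
        ("POST", f"{base}/pools/default", {"memoryQuota": quota_mb}),
        # 2. which services this node will run
        ("POST", f"{base}/node/controller/setupServices",
         {"hostname": host, "user": user, "password": password,
          "services": ",".join(services)}),
        # 3. admin credentials / REST port
        ("POST", f"{base}/settings/web",
         {"port": 8091, "username": user, "password": password}),
        # 4. index storage backend
        ("POST", f"{base}/settings/indexes", {"storageMode": storage_mode}),
    ]
```

Keeping the sequence as data makes the intended order explicit and lets a test assert on it without touching the network.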
[2024-02-01 20:25:08,713] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:25:08,729] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:25:08,786] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:25:08,787] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:25:08,794] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15776129024, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:25:08,798] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:25:08,799] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:25:08,807] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:25:08,807] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:25:08,953] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:25:08,958] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:25:09,101] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:25:09,248] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:25:09,569] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:25:09,571] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:25:09,641] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:25:09,641] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:25:09,659] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:25:09,674] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:25:09,690] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:25:09,742] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:25:09,743] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:25:09,748] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15788859392, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:25:09,752] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:25:09,753] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:25:09,762] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:25:09,763] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:25:09,908] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:25:09,912] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:25:10,089] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:25:10,227] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:25:10,503] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:25:10,506] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:25:10,576] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:25:10,577] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:25:10,594] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:25:10,608] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:25:10,624] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:25:10,685] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:25:10,686] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:25:10,691] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15735697408, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:25:10,695] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:25:10,696] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:25:10,703] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:25:10,704] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:25:10,850] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:25:10,853] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:25:10,996] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:25:11,142] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:25:11,455] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:25:11,457] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:25:11,526] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:25:11,527] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:25:11,545] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:25:11,560] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:25:11,577] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:25:11,628] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:25:11,689] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:25:11,706] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."'
[2024-02-01 20:25:11,902] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 20:25:16,908] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 20:25:16,955] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 20:25:16,987] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:25:17,635] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 20:25:17,668] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:25:27,715] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.05 seconds
[2024-02-01 20:25:27,716] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:25:41,611] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 20:25:41,644] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:25:51,684] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 20:25:51,685] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:26:05,720] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:26:05,721] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:26:05,756] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 20:26:15,886] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 20:26:25,914] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
[2024-02-01 20:26:25,937] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 20:26:25,938] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847975884, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 0c915e43f50db7edc37a83deabd643f0', 'serverTime': '2024-02-01T20:26:15.884Z'}
[2024-02-01 20:26:25,939] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706847975851, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:26:15.851Z'}
[2024-02-01 20:26:25,939] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706847975838, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 0c915e43f50db7edc37a83deabd643f0", 'serverTime': '2024-02-01T20:26:15.838Z'}
[2024-02-01 20:26:25,940] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706847975697, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:26:15.697Z'}
[2024-02-01 20:26:25,940] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706847975692, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:26:15.692Z'}
[2024-02-01 20:26:25,941] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706847965895, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:26:05.895Z'}
[2024-02-01 20:26:25,941] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706847965694, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:26:05.694Z'}
[2024-02-01 20:26:25,942] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706847965679, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:26:05.679Z'}
[2024-02-01 20:26:25,942] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706847965661, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:26:05.661Z'}
[2024-02-01 20:26:25,942] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706847962629, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T20:26:02.629Z'}
[, , , , , ] Thu Feb 1 20:26:25 2024 [, , , , , , , , , , , , ]
Cluster instance shutdown with force [, , , ] Thu Feb 1 20:26:25 2024
[2024-02-01 20:26:25,978] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:26:25,984] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:26:25,989] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:26:25,992] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:26:26,132] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:26:26,136] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:26:26,140] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:26:26,175] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:26:26,353] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:26:26,359] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:26:26,385] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:26:26,389] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:26:26,695] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.157
[2024-02-01 20:26:26,700] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2026-diag.zip
[2024-02-01 20:26:26,702] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 20:26:26,708] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2026-diag.zip
[2024-02-01 20:26:26,721] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.160
[2024-02-01 20:26:26,723] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2026-diag.zip
[2024-02-01 20:26:26,729] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.206
[2024-02-01 20:26:26,731] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2026-diag.zip
[2024-02-01 20:28:14,605] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:28:14,784] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2026-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 20:28:15,037] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2026-diag.zip
[2024-02-01 20:28:15,087] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:28:15,658] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:28:15,844] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2026-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 20:28:16,091] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2026-diag.zip
[2024-02-01 20:28:16,146] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:28:54,863] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:28:54,996] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2026-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 20:28:55,194] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2026-diag.zip
[2024-02-01 20:28:55,243] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:15,638] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:15,775] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2026-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 20:29:15,986] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2026-diag.zip
[2024-02-01 20:29:16,035] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 6
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_6
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL

======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

----------------------------------------------------------------------
Ran 1 test in 158.183s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_7
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=restart_couchbase,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params:
{'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'restart_couchbase', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 7, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_7'}
[2024-02-01 20:29:16,075] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:29:16,216] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:29:16,355] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:16,674] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:16,694] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 20:29:16,735] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:29:16,736] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #7 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 20:29:16,736] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:29:16,768] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:29:16,768] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:29:16,798] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:29:16,798] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:29:16,799] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:29:16,828] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:29:16,828] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3750000149011612, 'mem_free': 15754149888, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:29:16,828] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.5000000074505806, 'mem_free': 15728713728, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:29:16,829] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.512500002980232, 'mem_free': 15580647424, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:29:16,829] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:29:16,835] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:29:16,935] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:29:17,077] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:17,343] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:17,350] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:29:17,489] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:29:17,624] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:17,887] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:17,894] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:29:17,994] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:29:18,128] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:18,451] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:18,457] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:29:18,593] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:29:18,729] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:19,007] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:25,234] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:29:25,234] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:29:25,238] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 20:29:25,274] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:29:25,274] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:29:25,304] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:29:25,305] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:29:35,044] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:29:35,047] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:29:35,184] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:29:35,323] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:35,649] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:35,695] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:29:35,870] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:29:36,017] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:36,329] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:36,388] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:29:36,567] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:29:36,568] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:29:37,793] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:37,794] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:37,811] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:29:37,811] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:29:37,819] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:37,820] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:29:37,867] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:37,870] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:29:38,016] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:29:38,155] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:38,427] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:38,485] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:29:38,487] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:38,542] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:29:38,721] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:29:38,722] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:29:38,734] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:38,734] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:29:43,742] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:29:43,756] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:43,757] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:29:43,757] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:43,814] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2814120
[2024-02-01 20:29:43,815] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:29:43,819] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:29:43,957] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:29:44,162] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:44,472] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:44,513] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:29:44,649] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:29:44,785] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:45,095] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:45,161] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:29:45,344] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:29:45,344] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:29:47,664] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:47,665] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:47,683] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:29:47,684] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:29:47,738] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:29:47,739] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:29:47,739] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:29:47,739] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:29:47,740] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:29:47,740] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:29:47,741] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:29:47,741] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:29:47,789] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:47,793] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:29:47,967] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:29:48,109] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:48,424] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:48,485] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:29:48,486] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:48,544] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:29:48,725] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:29:48,725] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:29:48,739] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:48,739] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:29:53,745] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:29:53,763] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:53,763] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:29:53,764] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:53,820] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3923706
[2024-02-01 20:29:53,820] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:29:53,825] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:29:53,997] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:29:54,193] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:54,506] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:54,546] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:29:54,687] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:29:54,831] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:55,106] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:55,167] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:29:55,350] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:29:55,350] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:29:57,580] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:57,582] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:57,641] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:29:57,642] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:29:57,649] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:57,649] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:29:57,700] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:57,705] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:29:57,847] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:29:57,987] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:29:58,305] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:29:58,368] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:29:58,369] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:29:58,432] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:29:58,610] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:29:58,611] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 20:29:58,622] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:29:58,623] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:30:03,626] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:30:03,638] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:30:03,638] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:30:03,638] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:30:03,694] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3274378
[2024-02-01 20:30:03,694] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:30:03,697] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:30:03,828] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:30:04,033] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:30:04,298] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:30:04,341] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:30:04,480] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:30:04,622] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:30:04,938] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:30:05,000] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:30:05,137] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:30:05,138] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:30:06,441] - [remote_util:3401] INFO - command
executed successfully with root [2024-02-01 20:30:06,442] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:30:06,456] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:30:06,456] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:30:06,461] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:06,461] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:30:06,508] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:06,510] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:30:06,607] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:30:06,746] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:30:07,014] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:30:07,075] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:30:07,076] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:30:07,134] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:30:07,266] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:30:07,266] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 20:30:07,279] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:07,280] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
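Once the nodes restart, the log shows repeated probes of `GET /pools/default`: a node whose REST port is not yet listening yields `[Errno 111] Connection refused`, while a node that is up but not yet initialized into a cluster answers `404` with body `"unknown pool"`. A small classifier capturing that distinction (the function and its labels are illustrative, not part of testrunner):

```python
def classify_pools_default(status=None, body=b"", conn_refused=False):
    """Interpret a GET /pools/default probe as seen in the log.

    - connection refused -> REST port not up yet; keep retrying
    - 404 "unknown pool" -> node is up but not part of any cluster
    - 200                -> node already belongs to a cluster
    """
    if conn_refused:
        return "starting"
    if status == 404 and b"unknown pool" in body:
        return "uninitialized"
    if status == 200:
        return "clustered"
    return "unknown"
```

In the log below, the "unknown pool" responses are expected at this point, since the nodes were just wiped and restarted.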
[2024-02-01 20:30:12,284] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:30:12,298] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:12,299] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:30:12,300] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:30:12,355] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3278401 [2024-02-01 20:30:12,356] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:30:12,361] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:30:12,375] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:30:12,385] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:30:12,395] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:30:15,399] - 
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:30:22,854] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:30:23,345] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:30:23,346] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 20:30:23,352] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15809912832, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:30:23,355] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 20:30:23,356] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:30:23,364] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 20:30:23,365] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 20:30:23,400] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:30:23,401] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 20:30:23,551] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:30:23,554] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:30:23,729] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:30:23,869] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:30:24,214] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:30:24,216] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:30:24,283] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:24,284] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:30:24,301] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
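The initialization of each node proceeds through the REST API in the order the log shows: set the memory quota on `pools/default`, register services via `node/controller/setupServices`, then set credentials via `settings/web`. A sketch that builds those three calls (endpoint paths and parameter names are from the log; the helper itself is hypothetical):

```python
from urllib.parse import urlencode

def init_node_requests(host, user, password, services, quota_mb):
    """Build (path, urlencoded-body) pairs for node init, mirroring
    the order seen in the log."""
    return [
        ("pools/default", urlencode({"memoryQuota": quota_mb})),
        ("node/controller/setupServices",
         urlencode({"hostname": host, "user": user,
                    "password": password,
                    "services": ",".join(services)})),
        ("settings/web",
         urlencode({"port": 8091, "username": user,
                    "password": password})),
    ]
```

For 172.23.123.207 this reproduces the logged `services=kv%2Cn1ql` body, since `urlencode` percent-encodes the comma in the service list.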
[2024-02-01 20:30:24,315] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:30:24,331] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:30:24,382] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:30:24,383] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:30:24,389] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15782014976, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:30:24,393] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:30:24,394] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:30:24,403] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
20:30:24,403] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:30:24,552] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:30:24,555] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:30:24,730] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:30:24,866] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:30:25,185] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:30:25,186] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:30:25,253] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:25,254] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:30:25,270] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:30:25,284] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
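The `nodes/self` dumps report `clusterCompatibility: 458758` while `diag/eval` returns `[7,6]`; the two agree because the compat version packs major and minor into one integer as major * 2^16 + minor (458758 = 7 * 65536 + 6). A one-liner to decode it:

```python
def decode_compat(value):
    """Split the packed clusterCompatibility integer into (major, minor)."""
    return value >> 16, value & 0xFFFF
```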
[2024-02-01 20:30:25,299] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:30:25,353] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:30:25,354] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:30:25,359] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15793901568, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:30:25,363] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:30:25,364] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:30:25,371] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:30:25,371] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:30:25,536] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:30:25,540] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:30:25,712] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:30:25,860] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:30:26,178] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:30:26,180] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:30:26,253] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:26,253] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:30:26,270] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:30:26,286] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
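Further down, the rebalance request is issued with `knownNodes` and `ejectedNodes` as comma-joined `ns_1@<ip>` OTP node names, as in the logged `rebalance params`. A sketch of building that form body (the helper name is illustrative):

```python
from urllib.parse import urlencode

def rebalance_params(known_ips, ejected_ips, user, password):
    """Build the rebalance form body as logged; cluster nodes are
    addressed by their OTP names, ns_1@<ip>."""
    otp = lambda ips: ",".join("ns_1@" + ip for ip in ips)
    return urlencode({
        "knownNodes": otp(known_ips),
        "ejectedNodes": otp(ejected_ips),
        "user": user,
        "password": password,
    })
```

`urlencode` turns the `@` and `,` characters into `%40` and `%2C` on the wire; the log prints the decoded form.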
[2024-02-01 20:30:26,301] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:30:26,353] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:30:26,354] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:30:26,359] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15741194240, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:30:26,362] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:30:26,363] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:30:26,371] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:30:26,372] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:30:26,531] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:30:26,535] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:30:26,680] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:30:26,825] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:30:27,136] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:30:27,138] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:30:27,205] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:30:27,206] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:30:27,222] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:30:27,237] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:30:27,256] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:30:27,310] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:30:27,366] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:30:27,367] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:30:27,571] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:30:32,576] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:30:32,623] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:30:32,656] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:30:33,316] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:30:33,346] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:30:43,382] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:30:43,383] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:30:58,281] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:30:58,315] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:31:08,350] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.03 seconds [2024-02-01 20:31:08,351] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:31:22,919] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:31:22,920] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:31:22,952] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:31:33,082] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:31:43,107] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 20:31:43,127] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 20:31:43,127] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706848293081, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = c1d284429d4a0eba043c8ae119ac12b6', 'serverTime': '2024-02-01T20:31:33.081Z'} [2024-02-01 20:31:43,128] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706848293051, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:31:33.051Z'} [2024-02-01 20:31:43,128] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706848293035, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = c1d284429d4a0eba043c8ae119ac12b6", 'serverTime': '2024-02-01T20:31:33.035Z'} [2024-02-01 20:31:43,129] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706848292899, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:31:32.899Z'} [2024-02-01 20:31:43,129] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706848292895, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:31:32.895Z'} [2024-02-01 20:31:43,129] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706848283115, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:31:23.115Z'} [2024-02-01 20:31:43,129] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706848282896, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:31:22.896Z'} [2024-02-01 20:31:43,130] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706848282881, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:31:22.881Z'} [2024-02-01 20:31:43,130] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706848282859, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:31:22.859Z'} [2024-02-01 20:31:43,130] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706848279575, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T20:31:19.575Z'} Thu Feb 1 20:31:43 2024 Cluster instance shutdown with force [2024-02-01 20:31:43,143] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:31:43,146] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 Thu Feb 1 20:31:43 2024 [2024-02-01 20:31:43,156] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:31:43,162] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:31:43,294] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:31:43,334] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:31:43,336] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:31:43,341] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:31:43,443] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:31:43,541] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:31:43,542] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:31:43,563] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:31:43,783] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 20:31:43,786] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2031-diag.zip [2024-02-01 20:31:43,840] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.160
[2024-02-01 20:31:43,842] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2031-diag.zip
[2024-02-01 20:31:43,862] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 20:31:43,864] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2031-diag.zip
[2024-02-01 20:31:43,879] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.157
[2024-02-01 20:31:43,881] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2031-diag.zip
[2024-02-01 20:33:33,610] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:33:33,789] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2031-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 20:33:34,033] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2031-diag.zip
[2024-02-01 20:33:34,083] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:33:34,709] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:33:34,885] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2031-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 20:33:35,161] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2031-diag.zip
[2024-02-01 20:33:35,213] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:04,345] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:04,478] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2031-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 20:34:04,681] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2031-diag.zip
[2024-02-01 20:34:04,731] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:34,166] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:34,352] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2031-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 20:34:34,598] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2031-diag.zip
[2024-02-01 20:34:34,648] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 7
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_7
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

----------------------------------------------------------------------
Ran 1 test in 147.066s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_8
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=net_packet_loss,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'net_packet_loss', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 8, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_8'}
[2024-02-01 20:34:34,669] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:34:34,772] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:34:34,911] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:35,230] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:35,252] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
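The repeated "During handling of the above exception, another exception occurred" blocks in the tracebacks above are Python's implicit exception chaining: `basetestcase.setUp` catches the `RebalanceFailedException` and calls `self.fail(e)`, which raises `AssertionError` while the original exception is still being handled, so the original survives as `__context__`. A minimal self-contained sketch of that pattern (stand-in class and function names, not the actual testrunner code):

```python
class RebalanceFailedException(Exception):
    """Stand-in for membase.api.exception.RebalanceFailedException."""

def rebalance():
    # Simulates the failing rebalance task from lib/tasks/task.py.
    raise RebalanceFailedException(
        "Rebalance Failed: {'status': 'none', 'errorMessage': "
        "'Rebalance failed. See logs for detailed reason. You can try again.'}")

def setUp():
    # Mirrors the shape of pytests/basetestcase.py: the except block calls
    # fail(e), raising AssertionError while RebalanceFailedException is
    # active, which produces the chained traceback seen in the log.
    try:
        rebalance()
    except RebalanceFailedException as e:
        raise AssertionError(e)

try:
    setUp()
except AssertionError as err:
    # The original exception is preserved as the implicit context.
    assert isinstance(err.__context__, RebalanceFailedException)
```

Because `raise AssertionError(e)` does not use `raise ... from e`, the interpreter records the original as implicit context rather than an explicit cause, which is exactly why the log says "During handling of the above exception" instead of "The above exception was the direct cause".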
[2024-02-01 20:34:35,297] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:34:35,297] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #8 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 20:34:35,298] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:34:35,327] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:34:35,328] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:34:35,358] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:34:35,359] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:34:35,359] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:34:35,387] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:34:35,387] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3500000014901161, 'mem_free': 15788314624, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:34:35,388] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3624999895691872, 'mem_free': 15762345984, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:34:35,388] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.250000007450581, 'mem_free': 15579598848, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:34:35,388] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:34:35,393] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:34:35,532] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:34:35,674] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:35,996] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:36,003] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:34:36,138] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:34:36,276] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:36,588] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:36,593] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:34:36,730] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:34:36,874] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:37,185] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:37,191] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:34:37,292] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:34:37,435] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:37,754] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:44,101] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:34:44,102] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:34:44,106] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 20:34:44,144] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:34:44,145] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:34:44,180] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:34:44,181] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:34:54,506] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:34:54,511] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:34:54,652] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:34:54,858] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:55,179] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:55,221] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:34:55,360] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:34:55,508] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:55,818] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:55,876] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:34:56,001] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:34:56,001] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:34:57,390] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:57,391] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:34:57,406] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:34:57,408] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:34:57,414] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:57,415] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:34:57,466] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:57,470] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:34:57,608] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:34:57,748] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:34:58,019] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:34:58,085] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:34:58,086] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:34:58,144] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:34:58,315] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:34:58,315] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:34:58,330] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:34:58,331] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:35:03,337] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:35:03,354] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:03,354] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:35:03,355] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:03,415] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2819625
[2024-02-01 20:35:03,417] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:35:03,421] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:35:03,595] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:35:03,801] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:04,068] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:04,109] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:35:04,253] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:35:04,392] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:04,700] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:04,761] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:35:04,937] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:35:04,938] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:35:07,258] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:07,259] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:07,274] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:35:07,276] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:35:07,328] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:35:07,330] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:35:07,330] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:35:07,330] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:35:07,331] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:35:07,331] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:35:07,332] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:35:07,332] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:35:07,382] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:07,386] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:35:07,563] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:35:07,706] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:07,980] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:08,043] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:35:08,044] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:08,101] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:35:08,272] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:35:08,272] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:35:08,285] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:08,286] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:35:13,291] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:35:13,309] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:13,310] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:35:13,310] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:13,368] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3929094
[2024-02-01 20:35:13,370] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:35:13,373] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:35:13,544] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:35:13,745] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:14,066] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:14,107] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:35:14,251] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:35:14,400] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:14,714] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:14,778] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:35:14,958] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:35:14,959] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:35:17,270] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:17,272] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:17,288] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:35:17,289] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:35:17,299] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:17,299] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:35:17,348] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:17,353] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:35:17,451] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:35:17,594] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:17,905] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:17,966] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:35:17,969] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:18,027] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:35:18,201] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:35:18,201] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 20:35:18,213] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:18,213] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:35:23,220] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:35:23,234] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:23,235] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:35:23,235] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:23,295] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3279684
[2024-02-01 20:35:23,295] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:35:23,299] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:35:23,403] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:35:23,603] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:23,915] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:23,959] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:35:24,103] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:35:24,242] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:24,550] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:24,608] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:35:24,789] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:35:24,790] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:35:26,044] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:26,045] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:26,062] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:35:26,063] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:35:26,070] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:26,070] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:35:26,126] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:26,131] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:35:26,277] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:35:26,415] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:26,725] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:26,789] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 20:35:26,791] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:26,845] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:35:26,985] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:35:26,986] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 20:35:26,998] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:26,999] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:35:32,003] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:35:32,019] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:32,020] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:35:32,020] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:35:32,079] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3283578
[2024-02-01 20:35:32,079] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:35:32,086] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:35:32,100] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:35:32,111] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:35:32,120] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:35:35,125] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:35:41,134] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 20:35:41,212] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:35:41,213] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:35:41,219] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15813730304, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:35:41,222] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:35:41,223] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:35:41,230] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 20:35:41,231] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 20:35:41,267] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:35:41,267] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:35:41,418] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:35:41,422] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:35:41,593] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:35:41,736] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:35:42,063] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:35:42,066] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:35:42,139] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:35:42,140] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:35:42,156] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
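The ERROR entries just above are expected noise rather than failures: a freshly wiped node first refuses connections outright (`[Errno 111] Connection refused`) and then answers 404 "unknown pool" on `/pools/default` until ns_server initialization completes, so the client simply keeps polling. A generic poll-with-retry loop in the same spirit (`wait_until` is a hypothetical helper, not the on_prem_rest_client implementation):

```python
import time

def wait_until(check, timeout=60, interval=3):
    """Call check() until it returns a truthy value or timeout expires.
    OSError (e.g. [Errno 111] Connection refused) is treated as 'not up
    yet', mirroring how the REST client tolerates a restarting node.
    Hypothetical helper, for illustration only."""
    deadline = time.monotonic() + timeout
    last = None
    while time.monotonic() < deadline:
        try:
            last = check()
            if last:
                return last
        except OSError:
            pass  # node not accepting connections yet; retry
        time.sleep(interval)
    raise TimeoutError(f"condition not met within {timeout}s (last={last!r})")
```

The timestamps above show exactly this rhythm: attempts at 20:35:32, 20:35:35, and 20:35:41 against 172.23.123.160 before the node starts answering at all.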
[2024-02-01 20:35:42,171] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:35:42,188] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:35:42,246] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:35:42,247] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:35:42,252] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15779528704, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:35:42,256] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:35:42,257] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:35:42,265] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
20:35:42,266] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:35:42,418] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:35:42,422] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:35:42,599] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:35:42,736] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:35:43,044] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:35:43,046] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:35:43,118] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:35:43,118] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:35:43,133] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:35:43,146] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
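The repeated `404 ... "unknown pool"` errors above are expected, not failures: `/pools/default` only exists once a node has been provisioned, so the harness probes it and treats the 404 as "not initialized yet". A minimal retry loop in that spirit is sketched below; `fetch` is a hypothetical callback standing in for an authenticated HTTP GET (the real testrunner client works differently):

```python
def wait_for_pool(fetch, retries=5):
    """Poll /pools/default until it stops returning 404 'unknown pool'.

    `fetch` is a callable returning (status_code, body); in a real
    harness this would be an authenticated HTTP GET against the node.
    """
    for _ in range(retries):
        status, body = fetch()
        if status == 200:
            return body
        if status == 404 and b"unknown pool" in body:
            continue  # node not provisioned yet; keep polling
        raise RuntimeError(f"unexpected response: {status} {body!r}")
    raise TimeoutError("pool never became available")

# Simulated node that answers 404 twice, then succeeds:
responses = iter([(404, b'"unknown pool"'), (404, b'"unknown pool"'), (200, b"{}")])
print(wait_for_pool(lambda: next(responses)))  # b'{}'
```

The design point is simply that this particular 404 body is part of the init handshake, which is why the log interleaves these ERROR lines with an otherwise successful setup.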
[2024-02-01 20:35:43,160] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:35:43,211] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:35:43,211] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:35:43,214] - [task:166] INFO - {'uptime': '19', 'memoryTotal': 16747917312, 'memoryFree': 15793475584, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:35:43,216] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:35:43,217] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:35:43,223] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:35:43,223] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:35:43,368] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:35:43,370] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:35:43,503] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:35:43,629] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:35:43,941] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:35:43,943] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:35:44,010] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:35:44,011] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:35:44,031] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:35:44,046] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:35:44,064] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:35:44,120] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:35:44,121] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:35:44,126] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15740485632, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:35:44,129] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:35:44,130] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:35:44,139] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:35:44,139] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:35:44,292] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:35:44,296] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:35:44,469] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:35:44,618] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:35:44,934] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:35:44,936] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:35:45,007] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:35:45,008] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:35:45,023] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:35:45,038] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:35:45,057] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:35:45,111] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:35:45,169] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:35:45,170] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:35:45,372] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:35:50,374] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:35:50,422] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:35:50,455] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:35:51,117] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:35:51,148] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:36:01,190] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:36:01,191] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:36:15,600] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:36:15,634] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:36:25,674] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:36:25,675] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:36:40,233] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:36:40,234] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:36:40,266] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:36:50,390] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:37:00,419] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 20:37:00,448] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 20:37:00,449] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706848610389, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 23ac5e82cdbb34d75998abbb748e4e3c', 'serverTime': '2024-02-01T20:36:50.389Z'} [2024-02-01 20:37:00,449] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706848610360, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:36:50.360Z'} [2024-02-01 20:37:00,450] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706848610344, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 23ac5e82cdbb34d75998abbb748e4e3c", 'serverTime': '2024-02-01T20:36:50.344Z'} [2024-02-01 20:37:00,450] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706848610214, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:36:50.214Z'} [2024-02-01 20:37:00,450] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706848610209, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:36:50.209Z'} [2024-02-01 20:37:00,451] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706848600427, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:36:40.427Z'} [2024-02-01 20:37:00,451] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706848600210, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:36:40.210Z'} [2024-02-01 20:37:00,451] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706848600196, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:36:40.196Z'} [2024-02-01 20:37:00,452] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706848600173, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:36:40.173Z'} [2024-02-01 20:37:00,452] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706848596968, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T20:36:36.968Z'} Thu Feb 1 20:37:00 2024 Cluster instance shutdown with force [2024-02-01 20:37:00,462] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:37:00,473] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 Thu Feb 1 20:37:00 2024 [2024-02-01 20:37:00,482] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:37:00,484] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:37:00,584] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:37:00,587] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:37:00,608] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:37:00,654] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:37:00,760] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:37:00,791] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:37:00,804] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:37:00,850] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:37:01,102] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 20:37:01,105] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2037-diag.zip [2024-02-01 20:37:01,137] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.206 [2024-02-01 20:37:01,139] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2037-diag.zip [2024-02-01 20:37:01,148] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 20:37:01,150] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2037-diag.zip [2024-02-01 20:37:01,165] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 20:37:01,167] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2037-diag.zip [2024-02-01 20:38:50,887] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:38:51,058] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2037-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 20:38:51,309] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2037-diag.zip [2024-02-01 20:38:51,359] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:38:51,737] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:38:51,912] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2037-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 20:38:52,175] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2037-diag.zip [2024-02-01 20:38:52,225] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:39:21,430] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:39:21,564] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2037-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 20:39:21,777] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2037-diag.zip [2024-02-01 20:39:21,827] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:39:51,374] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:39:51,556] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2037-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 20:39:51,786] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2037-diag.zip [2024-02-01 20:39:51,841] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 8 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_8 Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", 
line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed FAIL ====================================================================== FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/gsi/collections_plasma.py", line 111, in setUp super(PlasmaCollectionsTests, self).setUp() File "pytests/gsi/base_gsi.py", line 43, in setUp super(BaseSecondaryIndexingTests, self).setUp() File "pytests/gsi/newtuq.py", line 11, in setUp super(QueryTests, self).setUp() File "pytests/basetestcase.py", line 485, in setUp self.fail(e) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 145.795s FAILED (failures=1) test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_9 ./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=network_delay,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'network_delay', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 
'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 9, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_9'} [2024-02-01 20:39:51,864] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:39:51,964] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:39:52,110] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:39:52,381] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:39:52,403] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? 
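The actual root cause of the failed rebalance is buried in the UI event dump above: the critical `ns_orchestrator` entry with reason `{badmatch, {old_indexes_cleanup_failed, [{'ns_1@172.23.123.206',{error,eexist}}]}}`. A small helper can pull that reason out of event dicts shaped like those log entries; this is a hypothetical sketch (the dicts below are trimmed to the relevant fields, and `rebalance_failure_reason` is not part of the testrunner):

```python
import re

# Event dicts shaped like the UI log entries above, trimmed to the
# fields that matter here; the real entries carry more keys.
events = [
    {"type": "info", "module": "ns_orchestrator",
     "text": "Starting rebalance, KeepNodes = [...], EjectNodes = []"},
    {"type": "critical", "module": "ns_orchestrator",
     "text": "Rebalance exited with reason {{badmatch,\n"
             " {old_indexes_cleanup_failed,\n"
             " [{'ns_1@172.23.123.206',{error,eexist}}]}}"},
]

def rebalance_failure_reason(events):
    """Return the Erlang reason inside the first critical 'Rebalance exited' event."""
    for ev in events:
        if ev["type"] == "critical" and "Rebalance exited" in ev["text"]:
            m = re.search(r"badmatch,\s*\{(\w+)", ev["text"])
            return m.group(1) if m else ev["text"]
    return None

print(rebalance_failure_reason(events))  # old_indexes_cleanup_failed
```

Here `{error,eexist}` means a file or directory already existed on 172.23.123.206 when the orchestrator tried to clean up old index state, which is consistent with the preceding "Failed to cleanup indexes" entry.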
[2024-02-01 20:39:52,448] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:39:52,448] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #9 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 20:39:52,448] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:39:52,479] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:39:52,480] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:39:52,510] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:39:52,510] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:39:52,511] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:39:52,540] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:39:52,540] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.5375000089406967, 'mem_free': 15774556160, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:39:52,541] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.4124999977648258, 'mem_free': 15753129984, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:39:52,541] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.262499995529652, 'mem_free': 15546220544, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:39:52,541] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:39:52,545] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:39:52,684] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:39:52,819] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:39:53,122] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:39:53,128] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:39:53,229] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:39:53,367] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:39:53,681] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:39:53,686] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:39:53,786] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:39:53,913] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:39:54,219] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:39:54,224] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:39:54,361] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:39:54,511] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:39:54,818] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:01,178] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:40:01,178] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:40:01,183] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 20:40:01,214] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:40:01,214] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:40:01,244] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:40:01,244] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:40:11,669] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:40:11,675] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:40:11,855] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:40:12,052] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:12,380] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:12,423] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:40:12,599] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:40:12,737] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:13,077] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:13,136] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:13,314] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:40:13,315] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:40:14,687] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:14,688] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:14,706] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:40:14,706] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:40:14,714] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:14,715] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:40:14,768] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:14,773] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:40:14,913] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:40:15,054] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:15,357] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:15,410] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:15,411] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:15,463] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:40:15,633] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:40:15,633] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:40:15,644] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:15,644] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:40:20,649] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:40:20,663] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:20,664] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:40:20,664] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:20,720] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2825139
[2024-02-01 20:40:20,722] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:40:20,725] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:40:20,821] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:40:21,002] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:21,260] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:21,312] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:40:21,449] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:40:21,594] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:21,912] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:21,974] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:22,157] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:40:22,158] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:40:24,317] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:24,317] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:24,333] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:40:24,335] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:40:24,388] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:40:24,390] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:40:24,390] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:40:24,390] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:40:24,391] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:40:24,391] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:40:24,391] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:40:24,392] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:40:24,441] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:24,446] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:40:24,590] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:40:24,727] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:25,043] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:25,104] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:25,106] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:25,160] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:40:25,341] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:40:25,342] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:40:25,355] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:25,356] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:40:30,361] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:40:30,378] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:30,378] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:40:30,379] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:30,437] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3934473
[2024-02-01 20:40:30,438] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:40:30,441] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:40:30,538] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:40:30,740] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:31,061] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:31,104] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:40:31,277] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:40:31,416] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:31,730] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:31,793] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:31,972] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:40:31,973] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:40:34,256] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:34,258] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:34,273] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:40:34,274] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:40:34,283] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:34,284] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:40:34,335] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:34,341] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:40:34,479] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:40:34,623] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:34,946] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:35,008] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:35,009] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:35,067] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:40:35,243] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:40:35,243] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 20:40:35,258] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:35,259] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:40:40,265] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:40:40,280] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:40,281] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:40:40,281] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:40,337] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3284984
[2024-02-01 20:40:40,338] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:40:40,343] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:40:40,480] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:40:40,683] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:41,016] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:41,057] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:40:41,200] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:40:41,337] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:41,616] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:41,679] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:41,859] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:40:41,859] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:40:43,252] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:43,253] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:43,269] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:40:43,271] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:40:43,279] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:43,280] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:40:43,327] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:43,331] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:40:43,509] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:40:43,651] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:40:43,967] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:40:44,030] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:40:44,031] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:44,086] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:40:44,266] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:40:44,266] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 20:40:44,280] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:44,281] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
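Each of the four nodes above goes through the same reset cycle: stop the service, wipe the data and config directories, restart, then wait and check for the beam.smp process. A minimal sketch of that sequence as an ordered command list (the paths mirror the log; the function name and `data_dir` parameter are illustrative, not part of testrunner):

```python
def node_reset_commands(data_dir="/opt/couchbase/var/lib/couchbase"):
    """Ordered shell commands mirroring the per-node reset cycle in the log.

    After the final start, the harness sleeps ~5s, greps the systemd unit
    status, and confirms a beam.smp pid before logging "Couchbase started".
    """
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {data_dir}/data/*",
        f"rm -rf {data_dir}/config/*",
        "systemctl start couchbase-server.service",
    ]
```

Note that the wipe step is not atomic: if indexer files are still being written (as on 172.23.123.206 above), `rm -rf` can fail with "Directory not empty" and leave stale `@2i` shard directories behind.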
[2024-02-01 20:40:49,281] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:40:49,296] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:40:49,296] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:40:49,297] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:40:49,355] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3288761
[2024-02-01 20:40:49,357] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:40:49,363] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:40:49,375] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:40:49,385] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:40:49,394] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:40:52,399] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:40:58,410] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 20:40:59,295] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:40:59,296] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:40:59,301] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15813439488, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:40:59,304] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:40:59,305] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:40:59,313] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 20:40:59,313] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207:8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 20:40:59,350] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:40:59,350] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:40:59,479] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:40:59,483] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:40:59,620] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:40:59,763] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:41:00,094] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:41:00,097] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:41:00,166] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:41:00,167] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:41:00,184] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
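The probe loop above shows the three states a node passes through after restart: the socket is refused while ns_server is still booting, then `GET /pools/default` returns 404 `"unknown pool"` once the REST API is up but the node is uninitialized, and only after `settings/web` succeeds does the pool exist. A small sketch of how such a response could be classified (the function name is illustrative; connection-refused never yields a status code, so the caller handles it separately):

```python
def pool_state(status_code, body):
    """Classify a GET /pools/default probe during node startup.

    404 with "unknown pool" means ns_server answers but the node has not
    been initialized yet; 200 means the default pool exists. A refused
    connection produces no HTTP status and must be retried by the caller.
    """
    if status_code == 200:
        return "initialized"
    if status_code == 404 and "unknown pool" in body:
        return "uninitialized"
    return "error"
```

In the log, 172.23.123.160 moves through exactly this progression: two `[Errno 111] Connection refused` probes, then an `"unknown pool"` 404, then initialization proceeds.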
[2024-02-01 20:41:00,199] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:41:00,215] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:41:00,268] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:41:00,269] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:41:00,274] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15774801920, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:41:00,277] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:41:00,278] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:41:00,286] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:41:00,287] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:41:00,424] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:41:00,427] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:41:00,563] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:41:00,706] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:41:00,988] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:41:00,990] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:41:01,062] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:41:01,063] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:41:01,082] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:41:01,099] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:41:01,119] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:41:01,175] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:41:01,176] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:41:01,181] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15782305792, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:41:01,186] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:41:01,187] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:41:01,194] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:41:01,195] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:41:01,338] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:41:01,341] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:41:01,479] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:41:01,622] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:41:01,895] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:41:01,898] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:41:01,965] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:41:01,966] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:41:01,983] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:41:01,997] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:41:02,014] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:41:02,066] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:41:02,067] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:41:02,072] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15736643584, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:41:02,076] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:41:02,077] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:41:02,085] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:41:02,086] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:41:02,238] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:41:02,242] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:41:02,378] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:41:04,367] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:41:04,679] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:41:04,682] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:41:04,752] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:41:04,753] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:41:04,768] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:41:04,789] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:41:04,805] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:41:04,858] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:41:04,917] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:41:04,918] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."'
[2024-02-01 20:41:05,119] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 20:41:10,122] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 20:41:10,174] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 20:41:10,209] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:41:10,864] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 20:41:10,898] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:41:20,939] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 20:41:20,939] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:41:35,684] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 20:41:35,720] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:41:45,752] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.03 seconds
[2024-02-01 20:41:45,753] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:42:00,153] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:42:00,153] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:42:00,185] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 20:42:10,316] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 20:42:20,344] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
[2024-02-01 20:42:20,369] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 20:42:20,370] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706848930313, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = e89b2835366f325b0c237d2899305ac1', 'serverTime': '2024-02-01T20:42:10.313Z'}
[2024-02-01 20:42:20,371] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706848930282, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:42:10.282Z'}
[2024-02-01 20:42:20,371] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706848930267, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = e89b2835366f325b0c237d2899305ac1", 'serverTime': '2024-02-01T20:42:10.267Z'}
[2024-02-01 20:42:20,372] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706848930135, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:42:10.135Z'}
[2024-02-01 20:42:20,372] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module':
'mb_master', 'tstamp': 1706848930130, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:42:10.130Z'}
[2024-02-01 20:42:20,372] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706848920361, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:42:00.361Z'}
[2024-02-01 20:42:20,373] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706848920130, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:42:00.130Z'}
[2024-02-01 20:42:20,373] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706848920116, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:42:00.116Z'}
[2024-02-01 20:42:20,373] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706848920096, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:42:00.096Z'}
[2024-02-01 20:42:20,373] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706848916702, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T20:41:56.702Z'}
[, , , , , ] Thu Feb 1 20:42:20 2024
[, , , , , , , , , , , , ]
Cluster instance shutdown with force
[2024-02-01 20:42:20,384] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:42:20,390] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:42:20,392] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[, , , ] Thu Feb 1 20:42:20 2024
[2024-02-01 20:42:20,402] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:42:20,574] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:42:20,578] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:42:20,580] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:42:20,584] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:42:20,790] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:42:20,792] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:42:20,806] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:42:20,825] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:42:21,138] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.157
[2024-02-01 20:42:21,143] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2042-diag.zip
[2024-02-01 20:42:21,145] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting
logs from 172.23.123.160
[2024-02-01 20:42:21,150] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2042-diag.zip
[2024-02-01 20:42:21,154] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:42:21,158] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 20:42:21,160] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2042-diag.zip
Collecting logs from 172.23.123.206
[2024-02-01 20:42:21,162] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2042-diag.zip
[2024-02-01 20:44:10,932] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:44:11,114] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2042-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 20:44:11,381] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2042-diag.zip
[2024-02-01 20:44:11,431] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:44:12,413] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:44:12,589] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2042-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 20:44:12,896] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2042-diag.zip
[2024-02-01 20:44:12,945] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:44:41,318] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:44:41,389] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2042-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 20:44:41,511] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2042-diag.zip
[2024-02-01 20:44:41,562] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:11,662] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:11,794] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2042-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 20:45:12,046] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2042-diag.zip
[2024-02-01 20:45:12,096] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 9
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_9
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason.
You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 148.519s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ...
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_10
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=disk_readonly,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params:
{'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_readonly', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 10, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_10'}
[2024-02-01 20:45:12,116] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:45:12,216] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:45:12,360] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:12,682] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:12,708] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
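The repeated failure above is the harness treating a rebalance status payload of `{'status': 'none', 'errorMessage': ...}` as fatal (raised from `_rebalance_status_and_progress` in the traceback). A minimal sketch of that classification on an already-parsed payload; `check_rebalance_status` and `RebalanceFailedError` are hypothetical names, not the testrunner's actual code:

```python
class RebalanceFailedError(Exception):
    pass

def check_rebalance_status(payload: dict) -> float:
    """Return rebalance progress in [0, 100], or raise on a reported failure."""
    status = payload.get("status")
    if status == "none" and "errorMessage" in payload:
        # Matches the log: no task running, but an error message was left
        # behind; the detailed reason lives in the cluster's UI logs.
        raise RebalanceFailedError(f"Rebalance Failed: {payload}")
    if status == "running":
        return float(payload.get("progress", 0))
    return 100.0  # no task and no error: treat as complete
```

In practice the caller polls the cluster's REST status endpoint in a loop and re-raises this on the test thread, which is why the same message appears once per polling task and again in the unittest summary.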
[2024-02-01 20:45:12,759] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:45:12,760] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #10 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 20:45:12,760] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:45:12,791] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:45:12,791] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:45:12,820] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:45:12,821] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:45:12,821] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:45:12,852] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:45:12,853] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3999999910593033, 'mem_free': 15758823424, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:45:12,853] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3874999843537807, 'mem_free': 15732207616, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:45:12,854] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.300000015646219, 'mem_free': 15531335680, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:45:12,854] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:45:12,857] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:45:12,960] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:45:13,098] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:13,414] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:13,420] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:45:13,555] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:45:13,694] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:14,007] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:14,014] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:45:14,116] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:45:14,259] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:14,580] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:14,587] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:45:14,727] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:45:14,867] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:15,142] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:21,722] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:45:21,722] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:45:21,725] - [basetestcase:811] INFO - closing all memcached
connections
Cluster instance shutdown with force
[2024-02-01 20:45:21,760] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:45:21,760] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:45:21,793] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:45:21,794] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:45:30,501] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:45:30,506] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:45:30,652] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:45:30,791] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:31,102] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:31,143] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:45:31,284] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:45:31,425] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:31,734] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:31,794] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:45:31,925] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:45:31,926] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:45:33,189] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:33,190] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:33,205] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:45:33,206] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:45:33,213] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:33,214] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:45:33,263] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:33,266] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:45:33,403] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:45:33,540] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:33,814] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:33,874] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:45:33,874] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:33,929] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:45:34,110] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:45:34,111] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:45:34,124] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:34,124] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
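The stop, wipe, restart, wait cycle that the harness just ran on 172.23.123.207 (and repeats for each node below) can be sketched as the command sequence it issues over SSH. `node_reset_commands` is a hypothetical helper for illustration, not testrunner code:

```python
def node_reset_commands(base="/opt/couchbase/var/lib/couchbase"):
    """Commands for resetting one Couchbase node to a clean state,
    mirroring the sequence visible in the log above."""
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {base}/data/*",    # wipe data, including @2i index shards
        f"rm -rf {base}/config/*",  # wipe ns_server config
        "systemctl start couchbase-server.service",
        "sleep 5",                  # "waiting for couchbase server to come up"
    ]
```

Stopping the service before the wipe matters: the "Directory not empty" errors that `rm -rf` reports on 172.23.123.206 further down are what happens when files reappear under a directory mid-deletion, most likely because the indexer was still writing into `@2i` at that point.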
[2024-02-01 20:45:39,127] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:45:39,142] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:39,143] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:45:39,143] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:39,203] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2830633
[2024-02-01 20:45:39,204] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:45:39,207] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:45:39,378] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:45:39,579] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:39,885] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:39,926] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:45:40,066] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:45:40,204] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:40,508] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:40,564] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:45:40,734] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:45:40,734] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:45:42,912] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:42,915] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:42,931] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:45:42,933] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:45:42,982] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:45:42,983] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:45:42,983] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:45:42,984] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:45:42,984] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:45:42,985] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:45:42,986] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:45:42,986] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf
/opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:45:43,037] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:43,041] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:45:43,178] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:45:43,315] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:43,640] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:43,706] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:45:43,708] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:43,765] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:45:43,939] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:45:43,940] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:45:43,953] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:43,954] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:45:48,959] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:45:48,979] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:48,979] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:45:48,980] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:49,037] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3939859
[2024-02-01 20:45:49,037] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:45:49,042] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:45:49,183] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:45:49,377] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:49,685] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:49,728] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:45:49,900] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:45:50,039] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:50,355] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:50,415] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:45:50,591] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:45:50,592] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:45:52,857] - [remote_util:3401] INFO - command
executed successfully with root [2024-02-01 20:45:52,858] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:45:52,915] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:45:52,916] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:45:52,924] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:45:52,925] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:45:52,977] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:45:52,981] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:45:53,157] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:45:53,295] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:45:53,569] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:45:53,631] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:45:53,633] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:45:53,691] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:45:53,869] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:45:53,870] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 20:45:53,884] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:45:53,884] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:45:58,890] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:45:58,905] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:45:58,906] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:45:58,906] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:45:58,961] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3290309
[2024-02-01 20:45:58,962] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:45:58,965] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:45:59,067] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:45:59,282] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:45:59,563] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:45:59,608] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:45:59,781] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:45:59,927] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:46:00,254] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:46:00,315] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:46:00,497] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:46:00,497] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:46:01,817] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:01,819] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:46:01,833] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:46:01,834] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:46:01,840] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:01,841] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:46:01,889] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:01,892] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:46:01,995] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:46:02,125] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:46:02,392] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:46:02,452] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:46:02,453] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:46:02,509] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:46:02,687] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:46:02,687] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 20:46:02,699] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:02,700] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:46:07,703] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:46:07,718] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:07,719] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:46:07,719] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:46:07,774] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3293958
[2024-02-01 20:46:07,775] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:46:07,781] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:07,791] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:07,802] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:07,811] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:46:10,816] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 20:46:16,828] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:17,827] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:17,828] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:46:17,833] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15810920448, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:46:17,836] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:46:17,837] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:46:17,847] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 20:46:17,848] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 20:46:17,884] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:46:17,884] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:46:18,028] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:46:18,031] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:46:18,130] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:46:18,272] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:46:18,550] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:46:18,552] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:46:18,625] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:18,627] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:46:18,644] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:18,659] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:18,675] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:46:18,730] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:18,731] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:46:18,736] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15781466112, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:46:18,739] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:46:18,740] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:46:18,748] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:46:18,749] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:46:18,899] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:46:18,905] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:46:19,045] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:46:19,169] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:46:19,435] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:46:19,438] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:46:19,506] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:19,507] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:46:19,523] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:19,534] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:19,546] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:46:19,589] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:19,589] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:46:19,592] - [task:166] INFO - {'uptime': '23', 'memoryTotal': 16747917312, 'memoryFree': 15788048384, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:46:19,594] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:46:19,594] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:46:19,600] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:46:19,600] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:46:19,746] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:46:19,750] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:46:19,896] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:46:20,034] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:46:20,349] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:46:20,351] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:46:20,418] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:20,419] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:46:20,435] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:20,449] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:20,466] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:46:20,522] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 20:46:20,523] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self
[2024-02-01 20:46:20,528] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15719047168, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 20:46:20,531] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 20:46:20,532] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 20:46:20,539] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 20:46:20,540] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password
[2024-02-01 20:46:20,691] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 20:46:20,695] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:46:20,874] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:46:21,014] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:46:21,321] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:46:21,322] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 20:46:21,391] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:46:21,391] - [remote_util:5237] INFO - ['ok']
[2024-02-01 20:46:21,406] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:21,418] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 20:46:21,432] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 20:46:21,481] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 ****
[2024-02-01 20:46:21,540] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password
[2024-02-01 20:46:21,541] - [internal_user:36] INFO - Exception while deleting user.
Exception is -b'"User was not found."'
[2024-02-01 20:46:21,738] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 20:46:26,743] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 20:46:26,789] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 20:46:26,822] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:46:27,485] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 20:46:27,519] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:46:37,560] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 20:46:37,561] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:46:52,024] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 20:46:52,060] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 20:47:02,104] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 20:47:02,104] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 20:47:16,604] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:47:16,605] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:47:16,636] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 20:47:26,765] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 20:47:36,790] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason.
You can try again.'} - rebalance failed
[2024-02-01 20:47:36,810] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 20:47:36,810] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706849246764, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 02fedaddacf2c8f317de9a888f3ca923', 'serverTime': '2024-02-01T20:47:26.764Z'}
[2024-02-01 20:47:36,811] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706849246734, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:47:26.734Z'}
[2024-02-01 20:47:36,811] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706849246717, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 02fedaddacf2c8f317de9a888f3ca923", 'serverTime': '2024-02-01T20:47:26.717Z'}
[2024-02-01 20:47:36,812] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706849246586, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:47:26.586Z'}
[2024-02-01 20:47:36,812] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706849246582, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:47:26.582Z'}
[2024-02-01 20:47:36,812] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706849236812, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:47:16.812Z'}
[2024-02-01 20:47:36,813] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706849236582, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:47:16.582Z'}
[2024-02-01 20:47:36,813] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706849236569, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:47:16.569Z'}
[2024-02-01 20:47:36,813] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706849236544, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:47:16.544Z'}
[2024-02-01 20:47:36,814] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706849233363, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up.
Tags: []", 'serverTime': '2024-02-01T20:47:13.363Z'}
Thu Feb 1 20:47:36 2024
Cluster instance shutdown with force
[2024-02-01 20:47:36,824] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:47:36,828] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:47:36,834] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
Thu Feb 1 20:47:36 2024
[2024-02-01 20:47:36,848] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:47:36,951] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:47:36,981] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:47:37,013] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:47:37,021] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:47:37,131] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:47:37,149] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:47:37,203] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:47:37,211] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:47:37,474] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.160
[2024-02-01 20:47:37,476] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2047-diag.zip
[2024-02-01 20:47:37,490] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.157
[2024-02-01 20:47:37,491] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2047-diag.zip
[2024-02-01 20:47:37,501] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.206
[2024-02-01 20:47:37,503] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2047-diag.zip
[2024-02-01 20:47:37,542] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 20:47:37,545] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2047-diag.zip
[2024-02-01 20:49:27,345] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:49:27,481] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2047-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 20:49:27,781] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2047-diag.zip
[2024-02-01 20:49:27,831] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:49:28,483] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:49:28,665] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2047-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 20:49:28,980] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2047-diag.zip
[2024-02-01 20:49:29,029] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:02,903] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:03,082] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2047-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 20:50:03,313] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2047-diag.zip
[2024-02-01 20:50:03,362] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:28,003] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:28,139] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2047-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 20:50:28,389] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2047-diag.zip
[2024-02-01 20:50:28,438] - [remote_util:3401] INFO - command executed successfully with root

summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 10
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under
/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_10 Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed FAIL ====================================================================== FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/gsi/collections_plasma.py", line 111, in setUp super(PlasmaCollectionsTests, self).setUp() File "pytests/gsi/base_gsi.py", line 43, in setUp super(BaseSecondaryIndexingTests, self).setUp() File "pytests/gsi/newtuq.py", line 11, in setUp super(QueryTests, self).setUp() File "pytests/basetestcase.py", line 485, in setUp self.fail(e) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 144.708s FAILED (failures=1) test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_11 ./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=limit_file_limits,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 
'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'limit_file_limits', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 11, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_11'} [2024-02-01 20:50:28,461] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:50:28,563] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:50:28,705] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:50:28,983] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:50:29,007] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? 
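The paired tracebacks above show Python's implicit exception chaining: setUp catches the RebalanceFailedException and calls self.fail(e), which raises an AssertionError while the original exception is still being handled, producing the "During handling of the above exception, another exception occurred" sections. A minimal, self-contained sketch of that pattern (the class and function below are illustrative stand-ins, not testrunner's actual code):

```python
# Sketch: fail() raising inside an except block chains the new
# AssertionError to the original error via __context__.
class RebalanceFailedException(Exception):
    pass

def rebalance():
    raise RebalanceFailedException(
        "Rebalance Failed: {'status': 'none'} - rebalance failed")

def set_up():
    try:
        rebalance()
    except RebalanceFailedException as e:
        # unittest's TestCase.fail(e) boils down to this raise
        raise AssertionError(e)

try:
    set_up()
except AssertionError as err:
    # The original exception survives as the implicit context; this is
    # what the doubled tracebacks in the log are printing.
    assert isinstance(err.__context__, RebalanceFailedException)
```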
[2024-02-01 20:50:29,055] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 20:50:29,056] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #11 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 20:50:29,056] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 20:50:29,087] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:50:29,088] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:50:29,118] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 20:50:29,119] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 20:50:29,119] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 20:50:29,149] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 20:50:29,149] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3500000014901161, 'mem_free': 15789441024, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:50:29,149] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.4749999940395355, 'mem_free': 15761567744, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:50:29,150] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.737500011920929, 'mem_free': 15557050368, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 20:50:29,150] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 20:50:29,154] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:50:29,291] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:50:29,429] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:29,704] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:29,709] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:50:29,809] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:50:29,966] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:30,281] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:30,289] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:50:30,391] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:50:30,537] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:30,848] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:30,854] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:50:30,989] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:50:31,130] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:31,449] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:37,745] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 20:50:37,746] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 20:50:37,748] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 20:50:37,782] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 20:50:37,782] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 20:50:37,814] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 20:50:37,815] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 20:50:46,595] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 20:50:46,597] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:50:46,734] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:50:46,864] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:47,124] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:47,159] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:50:47,294] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:50:47,426] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:47,686] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:47,748] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:50:47,930] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:50:47,931] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 20:50:49,195] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:49,196] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:50:49,212] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:50:49,212] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:50:49,219] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:49,220] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:50:49,269] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:49,273] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 20:50:49,370] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 20:50:49,502] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:49,809] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:49,868] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:50:49,869] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:50:49,924] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:50:50,101] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:50:50,101] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 20:50:50,114] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:50,114] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:50:55,120] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:50:55,134] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:55,135] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:50:55,135] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:50:55,194] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2836119
[2024-02-01 20:50:55,196] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:50:55,199] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:50:55,341] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:50:55,550] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:55,864] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:55,903] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:50:56,046] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:50:56,188] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:56,503] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:56,562] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:50:56,741] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:50:56,741] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 20:50:59,061] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:59,061] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:50:59,080] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:50:59,081] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:50:59,136] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 20:50:59,138] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 20:50:59,139] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 20:50:59,139] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 20:50:59,139] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 20:50:59,140] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 20:50:59,140] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 20:50:59,140] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:50:59,189] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:50:59,194] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 20:50:59,366] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 20:50:59,511] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:50:59,824] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:50:59,885] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:50:59,885] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:50:59,944] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:51:00,121] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:51:00,122] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 20:51:00,134] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:00,135] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
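The "Directory not empty" errors in the cleanup above are a classic race: `rm -rf` walks a tree while another process (here the indexer, still flushing @2i shard files) creates new entries under it. A hedged sketch of the retry idea for a local path; this is illustrative only, not testrunner's actual helper, which issues `rm -rf` over SSH:

```python
import shutil
import time

def remove_tree_with_retry(path, attempts=3, delay=1.0):
    """Retry rmtree to ride out a concurrent writer re-creating files."""
    for _ in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except FileNotFoundError:
            return True           # already gone
        except OSError:
            time.sleep(delay)     # writer may still be flushing; retry
    return False
```

Stopping the writer first (as the harness does with `systemctl stop couchbase-server.service`) is the real fix; a retry only papers over a slow shutdown.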
[2024-02-01 20:51:05,140] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:51:05,157] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:05,158] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:51:05,158] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:51:05,214] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3945240
[2024-02-01 20:51:05,215] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:51:05,219] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:51:05,362] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:51:05,567] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:51:05,880] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:51:05,924] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:51:06,062] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:51:06,195] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:51:06,469] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:51:06,528] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:51:06,615] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:51:06,616] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 20:51:08,953] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:08,954] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:51:08,970] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:51:08,971] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:51:08,979] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:08,979] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:51:09,028] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:09,033] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 20:51:09,131] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 20:51:09,266] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:51:09,582] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:51:09,640] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:51:09,642] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:51:09,696] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:51:09,878] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:51:09,878] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 20:51:09,892] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:09,892] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 20:51:14,898] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 20:51:14,912] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:14,913] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 20:51:14,914] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:51:14,974] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3295608
[2024-02-01 20:51:14,975] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 20:51:14,979] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:51:15,153] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:51:15,355] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:51:15,625] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:51:15,663] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:51:15,804] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:51:15,946] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:51:16,210] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:51:16,273] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:51:16,461] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 20:51:16,462] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 20:51:17,852] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:17,852] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:51:17,869] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 20:51:17,869] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 20:51:17,876] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:17,877] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 20:51:17,925] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:17,929] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 20:51:18,101] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 20:51:18,229] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 20:51:18,492] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 20:51:18,552] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 20:51:18,552] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 20:51:18,605] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 20:51:18,784] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 20:51:18,785] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 20:51:18,798] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 20:51:18,799] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
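Each node in the log goes through the same reset cycle: stop the service, wipe data and config, restart, sleep, then verify the Erlang VM (beam.smp) is running. A sketch that only assembles that command sequence; the helper name and list form are hypothetical (the harness issues these individually over SSH via remote_util):

```python
# Hypothetical helper: the per-node reset sequence visible in the log,
# expressed as the ordered shell commands run on each node.
COUCHBASE_VAR = "/opt/couchbase/var/lib/couchbase"

def node_reset_commands(var_dir=COUCHBASE_VAR):
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {var_dir}/data/*",     # wipe data files (incl. @2i shards)
        f"rm -rf {var_dir}/config/*",   # wipe config so the node re-initializes
        "systemctl start couchbase-server.service",
        "sleep 5",                      # wait for couchbase server to come up
        "pgrep beam.smp",               # verify the Erlang VM is running
    ]
```

Wiping `config/*` is what makes each restarted node answer "unknown pool" until it is initialized into a cluster again.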
[2024-02-01 20:51:23,803] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:51:23,818] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:51:23,819] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:51:23,819] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:51:23,875] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3299145 [2024-02-01 20:51:23,876] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:51:23,884] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:51:23,898] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:51:23,908] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:51:23,917] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:51:26,922] - 
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:51:32,929] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:51:33,846] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:51:33,847] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 20:51:33,852] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15837089792, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:51:33,856] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 20:51:33,857] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:51:33,864] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 20:51:33,864] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 20:51:33,899] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:51:33,899] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 20:51:34,047] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:51:34,050] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:51:34,188] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:51:34,330] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:51:34,661] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:51:34,664] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:51:34,734] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:51:34,735] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:51:34,753] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
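Every failed GET above sends the same `'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA=='` header, and the client also prints `auth: Administrator:password` in clear text. The two are the same thing: HTTP Basic auth is just base64 of `user:password`, with no secrecy involved. A minimal check:

```python
import base64

# Token copied from the 'Authorization' headers in the log above.
token = "QWRtaW5pc3RyYXRvcjpwYXNzd29yZA=="

# Basic auth is base64("user:password") -- decoding recovers the credentials.
decoded = base64.b64decode(token).decode("utf-8")
print(decoded)  # Administrator:password

# Rebuilding the header value from the credentials gives the same token.
rebuilt = base64.b64encode(b"Administrator:password").decode("ascii")
assert rebuilt == token
```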
[2024-02-01 20:51:34,768] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:51:34,785] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:51:34,845] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:51:34,846] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:51:34,852] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15780970496, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:51:34,856] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:51:34,856] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:51:34,864] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
20:51:34,864] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:51:35,013] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:51:35,016] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:51:35,194] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:51:35,337] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:51:35,653] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:51:35,655] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:51:35,728] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:51:35,728] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:51:35,746] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:51:35,760] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
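The `nodes/self` dumps above report `'clusterCompatibility': 458758`, while `diag/eval` returns `[7,6]` for `cluster_compat_mode:get_compat_version()`. The two agree because ns_server packs the compat version as `major * 0x10000 + minor`; a quick sanity check:

```python
def decode_compat(compat: int) -> list:
    """Unpack ns_server's clusterCompatibility into [major, minor]."""
    return [compat // 0x10000, compat % 0x10000]

# 458758 = 7 * 65536 + 6, matching the [7,6] diag/eval output in the log.
print(decode_compat(458758))  # [7, 6]
```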
[2024-02-01 20:51:35,776] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:51:35,830] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:51:35,831] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:51:35,836] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15801982976, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:51:35,840] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:51:35,841] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:51:35,850] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:51:35,850] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:51:35,992] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:51:35,995] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:51:36,132] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:51:36,278] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:51:36,585] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:51:36,588] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:51:36,659] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:51:36,660] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:51:36,677] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:51:36,692] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
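Each of the four nodes goes through the same init sequence: `pools/default` (memoryQuota), `node/controller/setupServices`, `settings/web`, a `diag/eval` to allow nonlocal eval, then `settings/indexes` with `storageMode=plasma`. The `setupServices` body logged above (`services=kv%2Cn1ql`) is a plain URL-encoded form; a sketch of how such a body could be built (the helper name is illustrative, not testrunner's actual API):

```python
from urllib.parse import urlencode

def setup_services_body(hostname, user, password, services):
    # Hypothetical helper: reproduces the form body logged for
    # node/controller/setupServices, e.g. services=kv%2Cn1ql
    return urlencode({
        "hostname": hostname,
        "user": user,
        "password": password,
        "services": ",".join(services),  # ',' is percent-encoded as %2C
    })

body = setup_services_body("172.23.123.207", "Administrator", "password", ["kv", "n1ql"])
print(body)  # hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
```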
[2024-02-01 20:51:36,709] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:51:36,768] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:51:36,769] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:51:36,774] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15727554560, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:51:36,777] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:51:36,778] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:51:36,787] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:51:36,787] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:51:36,943] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:51:36,947] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:51:37,120] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:51:37,262] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:51:37,532] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:51:37,535] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:51:37,601] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:51:37,602] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:51:37,617] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:51:37,631] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:51:37,646] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:51:37,699] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:51:37,762] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:51:37,764] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:51:37,965] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:51:42,970] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:51:43,018] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:51:43,052] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:51:43,706] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:51:43,738] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:51:53,775] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:51:53,775] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:52:08,109] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:52:08,144] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:52:18,186] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:52:18,187] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:52:32,526] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:52:32,526] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:52:32,554] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:52:42,682] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:52:52,714] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 20:52:52,735] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 20:52:52,736] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706849562681, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = cc30214141fb107484773054f73cab1a', 'serverTime': '2024-02-01T20:52:42.681Z'} [2024-02-01 20:52:52,736] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706849562651, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:52:42.651Z'} [2024-02-01 20:52:52,736] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706849562635, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = cc30214141fb107484773054f73cab1a", 'serverTime': '2024-02-01T20:52:42.635Z'} [2024-02-01 20:52:52,737] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706849562514, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:52:42.514Z'} [2024-02-01 20:52:52,737] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706849562509, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:52:42.509Z'} [2024-02-01 20:52:52,737] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706849552732, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:52:32.732Z'} [2024-02-01 20:52:52,738] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706849552508, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:52:32.508Z'} [2024-02-01 20:52:52,738] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706849552495, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:52:32.495Z'} [2024-02-01 20:52:52,738] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706849552471, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:52:32.471Z'} [2024-02-01 20:52:52,739] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706849549128, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T20:52:29.128Z'} Cluster instance shutdown with force [2024-02-01 20:52:52,751] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:52:52,754] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:52:52,762] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:52:52,772] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:52:52,912] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:52:52,919] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:52:52,943] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:52:52,948] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:52:53,075] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:52:53,128] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:52:53,134] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:52:53,163] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:52:53,423] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 20:52:53,426] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2052-diag.zip [2024-02-01 20:52:53,431] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.206 [2024-02-01 20:52:53,434] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2052-diag.zip [2024-02-01 20:52:53,468] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:52:53,474] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 20:52:53,476] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2052-diag.zip Collecting logs from 172.23.123.157 [2024-02-01 20:52:53,481] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2052-diag.zip [2024-02-01 20:54:43,581] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:54:43,757] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2052-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 20:54:44,085] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2052-diag.zip [2024-02-01 20:54:44,134] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:54:44,848] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:54:45,026] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2052-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 20:54:45,343] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2052-diag.zip [2024-02-01 20:54:45,391] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:55:13,978] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:55:14,110] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2052-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 20:55:14,351] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2052-diag.zip [2024-02-01 20:55:14,404] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:55:43,996] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:55:44,177] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2052-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 20:55:44,431] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2052-diag.zip [2024-02-01 20:55:44,486] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 11 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple testrunner logs, diags and results are available 
under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_11
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 144.288s
FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_12
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=limit_file_size_limit,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 
'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'limit_file_size_limit', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 12, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_12'} [2024-02-01 20:55:44,508] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:55:44,646] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:55:44,793] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:55:45,109] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:55:45,134] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? 
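The Test Input params dict above is derived from the ini file plus the comma-separated `key=value` pairs passed via `-p` and `-t` on the `./testrunner` command line. A simplified parser illustrating that mapping (the real testrunner may handle quoting and merging differently; note `get-cbcollect-info` appears twice in the command but once in the dict, so last-one-wins is assumed here):

```python
def parse_test_params(spec: str) -> dict:
    """Split 'k1=v1,k2=v2,...' into a dict; later duplicate keys win."""
    params = {}
    for pair in spec.split(","):
        key, sep, value = pair.partition("=")
        if sep:  # ignore malformed fragments without '='
            params[key] = value
    return params

# Values stay strings, matching the dict in the log (e.g. 'nodes_init': '3').
p = parse_test_params("bucket_size=5000,reset_services=True,nodes_init=3,GROUP=SIMPLE")
print(p)  # {'bucket_size': '5000', 'reset_services': 'True', 'nodes_init': '3', 'GROUP': 'SIMPLE'}
```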
[2024-02-01 20:55:45,183] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:55:45,183] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #12 test_system_failure_create_drop_indexes_simple============== [2024-02-01 20:55:45,184] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 20:55:45,213] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:55:45,213] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:55:45,243] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:55:45,243] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:55:45,244] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 20:55:45,274] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 20:55:45,275] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3749999962747097, 'mem_free': 15739654144, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:55:45,275] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3875000029802322, 'mem_free': 15732477952, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:55:45,275] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.237500000745058, 'mem_free': 15564374016, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 20:55:45,276] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 20:55:45,279] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, 
attempt#1 of 5 [2024-02-01 20:55:45,455] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:55:45,599] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:55:45,913] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:55:45,918] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:55:46,055] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:55:46,195] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:55:46,507] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:55:46,513] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:55:46,615] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:55:46,762] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:55:47,097] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:55:47,103] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:55:47,203] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:55:47,702] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:55:48,023] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:55:54,385] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 20:55:54,386] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 20:55:54,388] - [basetestcase:811] INFO - closing all memcached 
connections Cluster instance shutdown with force [2024-02-01 20:55:54,424] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 20:55:54,425] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 20:55:54,457] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 20:55:54,459] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 20:56:04,325] - [basetestcase:229] INFO - initializing cluster [2024-02-01 20:56:04,331] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:56:04,470] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:56:04,613] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:04,884] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:04,925] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:56:05,066] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:56:05,204] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:05,517] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:05,578] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:05,756] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:56:05,756] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 
20:56:07,189] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:07,190] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:07,207] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:56:07,208] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:56:07,215] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:07,216] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:56:07,266] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:07,270] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:56:07,411] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:56:07,552] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:07,866] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:07,926] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:07,929] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:07,984] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:56:08,160] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:56:08,161] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 20:56:08,175] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:08,175] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:56:13,181] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:56:13,197] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:13,198] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:56:13,198] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:13,254] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2841611 [2024-02-01 20:56:13,254] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:56:13,259] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:56:13,401] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:56:13,606] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:13,920] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:13,957] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:56:14,097] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:56:14,240] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:14,556] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:14,614] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:14,791] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:56:14,792] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 20:56:17,064] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:56:17,065] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:17,124] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:56:17,125] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:56:17,178] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ... [2024-02-01 20:56:17,179] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty [2024-02-01 20:56:17,179] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty [2024-02-01 20:56:17,180] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty [2024-02-01 20:56:17,181] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty [2024-02-01 20:56:17,181] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty [2024-02-01 20:56:17,182] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty [2024-02-01 20:56:17,182] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf 
/opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:56:17,228] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:17,233] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:56:17,402] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:56:17,546] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:17,819] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:17,883] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:17,885] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:17,941] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:56:18,116] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:56:18,116] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 20:56:18,129] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:18,130] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:56:23,132] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:56:23,150] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:23,151] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:56:23,151] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:23,210] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3950635 [2024-02-01 20:56:23,210] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:56:23,215] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:56:23,355] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:56:23,554] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:23,856] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:23,894] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:56:23,990] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:56:24,120] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:24,432] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:24,485] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:24,562] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:56:24,562] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 20:56:26,864] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:56:26,865] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:26,881] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:56:26,882] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:56:26,890] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:26,890] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:56:26,941] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:26,946] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:56:27,116] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:56:27,250] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:27,515] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:27,572] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:27,572] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:27,628] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:56:27,705] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:56:27,705] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 20:56:27,715] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:27,715] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:56:32,721] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:56:32,737] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:32,737] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:56:32,737] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:32,793] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3300914 [2024-02-01 20:56:32,795] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:56:32,799] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:56:32,899] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:56:33,099] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:33,370] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:33,411] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:56:33,589] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:56:33,728] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:34,039] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:34,105] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:34,283] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 20:56:34,284] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 20:56:35,638] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 20:56:35,638] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:35,655] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 20:56:35,656] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 20:56:35,663] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:35,664] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 20:56:35,714] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:35,718] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:56:35,893] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:56:36,029] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:36,342] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:36,404] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 20:56:36,404] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:36,462] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 20:56:36,637] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 20:56:36,638] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 20:56:36,650] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:36,651] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 20:56:41,655] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 20:56:41,669] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:41,670] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 20:56:41,671] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 20:56:41,727] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3304327 [2024-02-01 20:56:41,728] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 20:56:41,734] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:56:41,747] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:56:41,758] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:56:41,767] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:56:44,772] - 
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 20:56:50,784] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:56:51,488] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 20:56:51,489] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 20:56:51,494] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15811850240, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:56:51,498] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 20:56:51,499] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:56:51,506] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 20:56:51,507] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 20:56:51,542] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:56:51,542] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 20:56:51,689] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:56:51,693] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:56:51,867] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:56:52,006] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:52,328] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:52,331] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:56:52,400] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:52,400] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:56:52,417] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:56:52,430] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:56:52,446] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:56:52,498] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 20:56:52,500] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 20:56:52,505] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15774617600, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:56:52,508] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:56:52,509] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:56:52,516] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
20:56:52,517] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 20:56:52,672] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:56:52,676] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:56:52,821] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:56:52,961] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:53,275] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:53,278] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:56:53,348] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:53,348] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:56:53,367] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:56:53,381] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:56:53,397] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:56:53,451] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 20:56:53,452] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 20:56:53,456] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15798632448, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:56:53,460] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:56:53,461] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:56:53,468] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:56:53,469] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 20:56:53,641] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:56:53,644] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:56:53,782] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:56:53,918] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:54,188] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:54,190] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:56:54,257] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:54,258] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:56:54,276] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:56:54,290] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 20:56:54,307] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:56:54,365] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 20:56:54,366] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 20:56:54,371] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15761698816, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 20:56:54,374] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 20:56:54,375] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 20:56:54,384] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 20:56:54,384] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 20:56:54,528] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 20:56:54,531] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:56:54,671] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:56:54,815] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:56:55,085] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 20:56:55,088] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 20:56:55,158] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 20:56:55,159] - [remote_util:5237] INFO - ['ok'] [2024-02-01 20:56:55,177] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:56:55,191] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 20:56:55,207] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 20:56:55,259] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 20:56:55,318] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 20:56:55,319] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 20:56:55,515] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 20:57:00,521] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 20:57:00,569] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 20:57:00,600] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 20:57:01,266] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 20:57:01,302] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:57:11,341] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:57:11,342] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:57:26,372] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 20:57:26,409] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 20:57:36,449] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 20:57:36,450] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 20:57:51,061] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 20:57:51,062] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 20:57:51,094] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 20:58:01,226] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 20:58:11,307] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 20:58:11,342] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 20:58:11,342] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706849881225, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = c67d33371175dcb76c154899a8d7f417', 'serverTime': '2024-02-01T20:58:01.225Z'} [2024-02-01 20:58:11,345] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706849881193, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T20:58:01.193Z'} [2024-02-01 20:58:11,345] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706849881178, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = c67d33371175dcb76c154899a8d7f417", 'serverTime': '2024-02-01T20:58:01.178Z'} [2024-02-01 20:58:11,345] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706849881039, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T20:58:01.039Z'} [2024-02-01 20:58:11,346] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706849881035, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T20:58:01.035Z'} [2024-02-01 20:58:11,346] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706849871273, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T20:57:51.273Z'} [2024-02-01 20:58:11,347] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706849871036, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T20:57:51.036Z'} [2024-02-01 20:58:11,348] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706849871023, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T20:57:51.023Z'} [2024-02-01 20:58:11,348] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706849871004, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T20:57:51.004Z'} [2024-02-01 20:58:11,349] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706849867761, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T20:57:47.761Z'} Thu Feb 1 20:58:11 2024 Cluster instance shutdown with force [2024-02-01 20:58:11,599] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 20:58:11,604] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 20:58:11,604] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 20:58:11,653] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 20:58:11,790] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 20:58:11,799] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 20:58:11,830] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 20:58:11,832] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 20:58:11,983] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:58:11,991] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:58:12,009] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:58:12,013] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 20:58:12,267] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 20:58:12,271] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2058-diag.zip [2024-02-01 20:58:12,308] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting
logs from 172.23.123.160 [2024-02-01 20:58:12,309] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2058-diag.zip [2024-02-01 20:58:12,321] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 20:58:12,322] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2058-diag.zip [2024-02-01 20:58:12,332] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 20:58:12,334] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2058-diag.zip [2024-02-01 21:00:01,980] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:00:02,162] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2058-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 21:00:02,511] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2058-diag.zip [2024-02-01 21:00:02,562] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:00:03,490] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:00:03,624] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2058-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 21:00:03,987] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2058-diag.zip [2024-02-01 21:00:04,037] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:00:32,704] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:00:32,882] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2058-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 21:00:33,153] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2058-diag.zip [2024-02-01 21:00:33,203] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:02,876] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:03,055] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2058-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 21:01:03,400] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2058-diag.zip [2024-02-01 21:01:03,452] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 12
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_12
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 146.889s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_13
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=extra_files_in_log_dir,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False',
'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'extra_files_in_log_dir', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 13, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_13'} [2024-02-01 21:01:03,606] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:01:03,710] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:01:03,853] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:04,169] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:04,200] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? 
[2024-02-01 21:01:04,248] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:01:04,249] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #13 test_system_failure_create_drop_indexes_simple============== [2024-02-01 21:01:04,249] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 21:01:04,280] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:01:04,281] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:01:04,311] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:01:04,312] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:01:04,313] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 21:01:04,342] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 21:01:04,343] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4749999940395355, 'mem_free': 15781871616, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:01:04,343] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3624999895691872, 'mem_free': 15738642432, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:01:04,344] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.137500002980232, 'mem_free': 15544066048, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:01:04,344] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 21:01:04,349] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, 
attempt#1 of 5 [2024-02-01 21:01:04,523] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:01:04,663] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:04,980] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:04,986] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:01:05,085] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:01:05,225] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:05,535] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:05,540] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:01:05,676] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:01:05,827] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:06,151] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:06,158] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:01:06,299] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:01:06,439] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:06,714] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:13,121] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 21:01:13,122] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 21:01:13,155] - [basetestcase:811] INFO - closing all memcached 
connections Cluster instance shutdown with force [2024-02-01 21:01:13,190] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 21:01:13,191] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 21:01:13,221] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 21:01:13,224] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 21:01:23,776] - [basetestcase:229] INFO - initializing cluster [2024-02-01 21:01:23,779] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:01:23,876] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:01:24,073] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:24,347] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:24,388] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:01:24,526] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:01:24,676] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:24,993] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:25,055] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:25,233] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:01:25,234] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 
21:01:26,395] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:26,396] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:26,458] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:01:26,459] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:01:26,470] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:26,471] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:01:26,520] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:26,524] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:01:26,667] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:01:26,814] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:27,093] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:27,157] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:27,158] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:27,224] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:01:27,364] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:01:27,364] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 21:01:27,379] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:27,379] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:01:32,385] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:01:32,404] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:32,404] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:01:32,405] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:32,470] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2847124 [2024-02-01 21:01:32,472] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:01:32,475] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:01:32,670] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:01:32,912] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:33,210] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:33,260] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:01:33,429] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:01:33,617] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:33,987] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:34,024] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:34,181] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:01:34,182] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 21:01:36,468] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:01:36,469] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:36,488] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:01:36,489] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:01:36,542] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ... [2024-02-01 21:01:36,543] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty [2024-02-01 21:01:36,544] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty [2024-02-01 21:01:36,544] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty [2024-02-01 21:01:36,544] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty [2024-02-01 21:01:36,545] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty [2024-02-01 21:01:36,545] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty [2024-02-01 21:01:36,545] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf 
/opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:01:36,598] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:36,602] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:01:36,775] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:01:36,914] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:37,228] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:37,294] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:37,296] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:37,352] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:01:37,531] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:01:37,531] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 21:01:37,546] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:37,547] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:01:42,552] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:01:42,572] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:42,573] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:01:42,573] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:42,628] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3956010 [2024-02-01 21:01:42,630] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:01:42,634] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:01:42,775] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:01:42,977] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:43,293] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:43,337] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:01:43,516] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:01:43,654] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:43,971] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:44,031] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:44,209] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:01:44,209] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 21:01:46,470] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:01:46,471] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:46,488] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:01:46,488] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:01:46,497] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:46,498] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:01:46,549] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:46,552] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:01:46,688] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:01:46,831] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:47,158] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:47,225] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:47,225] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:47,281] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:01:47,461] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:01:47,462] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 21:01:47,475] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:47,476] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:01:52,482] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:01:52,496] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:52,497] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:01:52,497] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:52,557] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3306238 [2024-02-01 21:01:52,559] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:01:52,563] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:01:52,707] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:01:52,917] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:53,231] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:53,269] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:01:53,407] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:01:53,547] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:53,862] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:53,920] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:54,104] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:01:54,105] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 21:01:55,447] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:01:55,448] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:55,466] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:01:55,467] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:01:55,475] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:55,476] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:01:55,526] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:55,529] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:01:55,668] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:01:55,811] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:01:56,090] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:01:56,151] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:01:56,151] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:01:56,210] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:01:56,386] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:01:56,387] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:01:56,399] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:01:56,400] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:02:01,405] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:02:01,420] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:02:01,421] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:02:01,422] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:02:01,480] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3309510 [2024-02-01 21:02:01,480] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:02:01,486] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:02:01,497] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:02:01,509] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:02:01,518] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:02:04,522] - 
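The probe loop above hits two distinct transient conditions: `[Errno 111] Connection refused` (the REST port is not listening yet, server still starting) and HTTP 404 `"unknown pool"` (the server is up but the cluster is not yet initialized). A small sketch of how one might classify the two cases to decide whether to keep retrying or proceed to cluster init; the function name and signature are illustrative, not testrunner's API:

```python
def classify_pools_default(status_code=None, body=b"", socket_errno=None):
    """Classify a GET /pools/default probe result:
    'not_listening'  - ECONNREFUSED, couchbase-server still starting
    'uninitialized'  - 404 "unknown pool": node up, cluster not initialized
    'ready'          - anything else (pool exists, init can be skipped)
    """
    if socket_errno == 111:  # ECONNREFUSED
        return "not_listening"
    if status_code == 404 and b"unknown pool" in body:
        return "uninitialized"
    return "ready"

print(classify_pools_default(socket_errno=111))                        # not_listening
print(classify_pools_default(status_code=404, body=b'"unknown pool"')) # uninitialized
```

In this run, all four nodes report "unknown pool" once they come up, which is expected after wiping `/opt/couchbase/var/lib/couchbase/config`: the harness then proceeds to `nodes/self` and cluster initialization.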
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:02:10,533] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:02:11,262] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:02:11,263] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 21:02:11,268] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15815532544, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:02:11,272] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 21:02:11,273] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:02:11,282] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 21:02:11,282] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 21:02:11,319] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:02:11,320] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 21:02:11,484] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:02:11,488] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:02:11,663] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:02:11,822] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:02:12,157] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:02:12,158] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:02:12,227] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:02:12,228] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:02:12,244] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:02:12,258] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:02:12,275] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:02:12,333] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:02:12,334] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:02:12,340] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15779926016, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:02:12,343] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:02:12,344] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:02:12,352] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
21:02:12,352] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:02:12,505] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:02:12,508] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:02:12,686] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:02:12,829] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:02:13,156] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:02:13,159] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:02:13,233] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:02:13,233] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:02:13,249] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:02:13,264] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:02:13,280] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:02:13,338] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:02:13,340] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:02:13,346] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15769731072, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:02:13,350] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:02:13,351] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:02:13,358] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:02:13,359] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:02:13,499] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:02:13,503] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:02:13,680] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:02:13,827] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:02:14,140] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:02:14,141] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:02:14,211] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:02:14,212] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:02:14,230] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:02:14,246] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:02:14,264] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:02:14,319] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:02:14,320] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:02:14,326] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15746916352, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:02:14,331] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:02:14,332] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:02:14,340] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:02:14,341] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:02:14,501] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:02:14,505] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:02:14,685] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:02:14,831] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:02:15,152] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:02:15,155] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:02:15,224] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:02:15,225] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:02:15,242] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:02:15,258] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:02:15,275] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:02:15,328] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:02:15,390] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:02:15,392] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:02:15,596] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:02:20,602] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:02:20,652] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:02:20,683] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:02:21,335] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:02:21,365] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:02:31,397] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.03 seconds [2024-02-01 21:02:31,398] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:02:45,662] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:02:45,706] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:02:55,751] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:02:55,752] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:03:10,037] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:03:10,052] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:03:10,090] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:03:20,225] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:03:30,274] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:03:30,313] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:03:30,314] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706850200224, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 1eeea78c971534880c4a127ed5ed6164', 'serverTime': '2024-02-01T21:03:20.224Z'} [2024-02-01 21:03:30,315] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706850200193, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:03:20.193Z'} [2024-02-01 21:03:30,317] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706850200177, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 1eeea78c971534880c4a127ed5ed6164", 'serverTime': '2024-02-01T21:03:20.177Z'} [2024-02-01 21:03:30,317] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706850200014, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:03:20.014Z'} [2024-02-01 21:03:30,318] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706850200009, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:03:20.009Z'} [2024-02-01 21:03:30,319] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706850190234, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:03:10.234Z'} [2024-02-01 21:03:30,319] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706850190010, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:03:10.010Z'} [2024-02-01 21:03:30,320] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706850189996, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:03:09.996Z'} [2024-02-01 21:03:30,320] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706850189975, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:03:09.975Z'} [2024-02-01 21:03:30,321] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706850186681, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
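The root cause surfaces in the UI log: the rebalance exits with `{old_indexes_cleanup_failed,[{'ns_1@172.23.123.206',{error,eexist}}]}`, i.e. ns_server could not clean up the stale index directories on 172.23.123.206 that the earlier `rm -rf` failed to delete. A sketch (hypothetical helper) for extracting the failing node and Erlang error atom from such a message, useful when triaging many runs:

```python
import re

def parse_cleanup_failure(text):
    """Extract (node, reason) from an old_indexes_cleanup_failed log message."""
    m = re.search(
        r"old_indexes_cleanup_failed,\s*\[\{'([^']+)',\{error,(\w+)\}\}\]", text)
    return (m.group(1), m.group(2)) if m else None

msg = ("Rebalance exited with reason {{badmatch, {old_indexes_cleanup_failed, "
       "[{'ns_1@172.23.123.206',{error,eexist}}]}}, ...}")
print(parse_cleanup_failure(msg))
# -> ('ns_1@172.23.123.206', 'eexist')
```

`eexist` maps to POSIX EEXIST, consistent with the "Directory not empty" failures during teardown: the old index paths still exist when the rebalance tries to recreate them.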
Tags: []", 'serverTime': '2024-02-01T21:03:06.681Z'} Thu Feb 1 21:03:30 2024 Cluster instance shutdown with force Thu Feb 1 21:03:30 2024 [2024-02-01 21:03:30,597] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:03:30,603] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:03:30,604] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:03:30,605] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:03:30,998] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:03:31,017] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:03:31,076] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:03:31,083] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:03:31,329] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:03:31,342] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:03:31,350] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:03:31,370] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:03:31,687] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 21:03:31,691] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2103-diag.zip [2024-02-01 21:03:31,698] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.157
[2024-02-01 21:03:31,704] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2103-diag.zip
[2024-02-01 21:03:31,708] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 21:03:31,711] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2103-diag.zip
[2024-02-01 21:03:31,719] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.206
[2024-02-01 21:03:31,721] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2103-diag.zip
[2024-02-01 21:05:21,812] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:05:21,985] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2103-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 21:05:22,345] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2103-diag.zip
[2024-02-01 21:05:22,396] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:05:23,034] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:05:23,214] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2103-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 21:05:23,574] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2103-diag.zip
[2024-02-01 21:05:23,624] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:05:52,197] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:05:52,371] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2103-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 21:05:52,646] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2103-diag.zip
[2024-02-01 21:05:52,694] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:06:22,410] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:06:22,592] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2103-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 21:06:22,916] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2103-diag.zip
[2024-02-01 21:06:22,968] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 13
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_13
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason.
You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 146.883s
FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_14
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=dummy_file_in_log_dir,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False',
'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'dummy_file_in_log_dir', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 14, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_14'}
[2024-02-01 21:06:23,172] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:06:23,347] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:06:23,488] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:06:23,769] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:06:23,811] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
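The RebalanceFailedException tracebacks above all follow one pattern: a rebalance task polls the REST status endpoint, and a terminal status of 'none' without completion is raised to setUp, which converts it into an AssertionError via self.fail(e). A minimal sketch of that polling pattern, with hypothetical names (poll_rebalance, RebalanceFailed, get_status) that are not the testrunner's actual API:

```python
import time

class RebalanceFailed(Exception):
    """Stand-in for membase.api.exception.RebalanceFailedException."""

def poll_rebalance(get_status, timeout_s=300, interval_s=1.0, sleep=time.sleep):
    """Poll get_status() -> (status, progress) until the rebalance ends.

    Mirrors the shape of lib/tasks/task.py check(): keep polling while the
    rebalance is running; a status of 'none' with full progress means it
    finished, anything else is surfaced as a failure to the caller.
    """
    deadline = time.monotonic() + timeout_s
    while time.monotonic() < deadline:
        status, progress = get_status()
        if status == 'running':
            sleep(interval_s)          # still in flight, poll again
            continue
        if status == 'none' and progress == 100:
            return True                # rebalance finished cleanly
        raise RebalanceFailed(
            "Rebalance Failed: {'status': %r, 'errorMessage': "
            "'Rebalance failed. See logs for detailed reason.'}" % status)
    raise RebalanceFailed("timed out waiting for rebalance")
```

In the log, the harness's setUp catches this exception and calls self.fail(e), which is why the same message appears again as an AssertionError.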
[2024-02-01 21:06:23,863] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:06:23,864] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #14 test_system_failure_create_drop_indexes_simple============== [2024-02-01 21:06:23,865] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 21:06:23,898] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:06:23,899] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:06:23,929] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:06:23,930] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:06:23,930] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 21:06:23,964] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 21:06:23,964] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4124999977648258, 'mem_free': 15768231936, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:06:23,965] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.4625000059604645, 'mem_free': 15748227072, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:06:23,965] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.775000013411045, 'mem_free': 15545200640, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:06:23,966] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 21:06:23,986] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, 
attempt#1 of 5 [2024-02-01 21:06:24,130] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:06:24,269] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:24,595] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:24,602] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:06:24,739] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:06:24,914] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:25,189] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:25,195] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:06:25,295] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:06:25,442] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:25,757] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:25,764] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:06:27,112] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:06:27,260] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:27,575] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:32,935] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 21:06:32,936] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 21:06:33,053] - [basetestcase:811] INFO - closing all memcached 
connections Cluster instance shutdown with force [2024-02-01 21:06:33,092] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 21:06:33,092] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 21:06:33,129] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 21:06:33,135] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 21:06:44,915] - [basetestcase:229] INFO - initializing cluster [2024-02-01 21:06:44,925] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:06:45,104] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:06:45,307] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:45,623] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:45,668] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:06:45,814] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:06:45,956] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:46,272] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:46,335] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:06:46,522] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:06:46,523] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 
21:06:47,793] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:47,797] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:06:47,815] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:06:47,817] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:06:47,825] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:47,827] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:06:47,878] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:47,883] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:06:48,056] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:06:48,208] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:48,525] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:48,587] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:06:48,591] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:06:48,644] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:06:48,820] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:06:48,821] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 21:06:48,834] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:48,835] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:06:53,836] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:06:53,851] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:53,852] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:06:53,853] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:06:53,911] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2852627 [2024-02-01 21:06:53,914] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:06:53,919] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:06:54,066] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:06:54,286] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:54,600] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:54,645] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:06:54,819] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:06:54,961] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:55,271] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:55,329] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:06:55,508] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:06:55,509] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 21:06:57,671] - [remote_util:3401] INFO - command 
executed successfully with root
[2024-02-01 21:06:57,671] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:06:57,687] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:06:57,689] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:06:57,741] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 21:06:57,743] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 21:06:57,743] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 21:06:57,744] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 21:06:57,744] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 21:06:57,745] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 21:06:57,745] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 21:06:57,746] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf
/opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:06:57,794] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:57,798] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:06:57,974] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:06:58,118] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:06:58,439] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:06:58,500] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:06:58,500] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:06:58,561] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:06:58,739] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:06:58,740] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 21:06:58,753] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:06:58,754] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... 
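The log repeats the same per-node reset sequence on each of the four servers: stop the service, wipe the data and config directories, restart, then wait and verify the Erlang VM (beam.smp) is back. A sketch of that sequence, where run() is a hypothetical stand-in for the harness's remote shell helper (the command strings match the log, but this is an illustration, not remote_util's actual code):

```python
def reset_couchbase_node(run, wait=lambda secs: None):
    """Reset one Couchbase node over a remote shell.

    run(cmd)   -- executes a shell command on the node (injected).
    wait(secs) -- pause while the server restarts (injected, defaults to no-op
                  so the sketch stays testable).
    """
    run("systemctl stop couchbase-server.service")
    # Wipe state; as the log shows, rm -rf can report "Directory not empty"
    # if the indexer is still flushing shard directories at this point.
    run("rm -rf /opt/couchbase/var/lib/couchbase/data/*")
    run("rm -rf /opt/couchbase/var/lib/couchbase/config/*")
    run("systemctl start couchbase-server.service")
    wait(5)  # log: "sleep for 5 secs. waiting for couchbase server to come up"
    # Verify the Erlang VM is running again, as remote_util's beam.smp check does.
    return run("pgrep -x beam.smp || true")
```

The injected run/wait callables keep the sketch free of real SSH or systemd dependencies.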
[2024-02-01 21:07:03,757] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:07:03,775] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:03,775] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:07:03,777] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:03,837] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3961384 [2024-02-01 21:07:03,837] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:07:03,841] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:07:03,976] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:07:04,170] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:04,481] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:04,523] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:07:04,700] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:07:04,840] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:05,153] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:05,215] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:07:05,394] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:07:05,395] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 21:07:07,661] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:07:07,662] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:07,676] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:07:07,677] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:07:07,684] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:07,685] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:07:07,740] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:07,745] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:07:07,916] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:07:08,064] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:08,377] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:08,439] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:07:08,440] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:08,497] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:07:08,673] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:07:08,674] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 21:07:08,685] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:08,686] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:07:13,692] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:07:13,706] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:13,707] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:07:13,707] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:13,764] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3311549 [2024-02-01 21:07:13,764] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:07:13,766] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:07:13,867] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:07:14,054] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:14,315] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:14,356] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:07:14,529] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:07:14,667] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:14,970] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:15,031] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:07:15,211] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:07:15,211] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 21:07:16,447] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:07:16,450] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:16,464] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:07:16,465] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:07:16,473] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:16,474] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:07:16,523] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:16,528] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:07:16,668] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:07:16,808] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:17,118] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:17,183] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:07:17,184] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:17,242] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:07:17,423] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:07:17,424] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:07:17,435] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:17,437] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:07:22,443] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:07:22,460] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:22,460] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:07:22,461] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:07:22,521] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3314705 [2024-02-01 21:07:22,522] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:07:22,528] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:07:22,580] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:07:22,594] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:07:22,606] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:07:25,610] - 
[on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:07:31,620] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:07:32,165] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:07:32,168] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 21:07:32,177] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15814754304, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:07:32,182] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password [2024-02-01 21:07:32,185] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:07:32,195] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 21:07:32,197] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 21:07:32,235] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:07:32,235] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 21:07:32,390] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:07:32,393] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:07:32,530] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:07:32,682] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:33,049] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:33,059] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:07:33,122] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:33,124] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:07:33,139] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
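The repeated 404 "unknown pool" and `[Errno 111] Connection refused` errors above are the expected shape of this polling phase: the harness keeps hitting `/pools/default` until each freshly restarted node is listening and initialized. A minimal sketch of that retry pattern (hypothetical helper names, not the testrunner's actual code; `fetch` is injected so the loop runs without a live cluster):

```python
# Sketch of the "poll /pools/default until the node answers" pattern visible in
# the log: ConnectionRefusedError while couchbase-server restarts, then 404 with
# body b'"unknown pool"' until the node is initialized, then 200.

def wait_for_pool(fetch, attempts=10):
    """Return True once /pools/default answers 200, False when out of tries."""
    for _ in range(attempts):
        try:
            status, body = fetch()
        except ConnectionRefusedError:
            continue                      # server process not listening yet
        if status == 404 and b"unknown pool" in body:
            continue                      # node up but pool not initialized
        if status == 200:
            return True
    return False

def make_fetch(items):
    """Build a fetch() that replays a canned sequence of responses/errors."""
    it = iter(items)
    def fetch():
        item = next(it)
        if isinstance(item, Exception):
            raise item
        return item
    return fetch

ready = wait_for_pool(make_fetch(
    [ConnectionRefusedError(), (404, b'"unknown pool"'), (200, b"{}")]))
print(ready)  # -> True
```

Both error variants are treated as "not ready yet" rather than fatal, which is why the log shows them at ERROR level yet the run proceeds.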
[2024-02-01 21:07:33,155] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:07:33,173] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:07:33,235] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:07:33,236] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:07:33,241] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15788675072, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:07:33,245] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:07:33,246] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:07:33,254] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
21:07:33,255] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:07:33,405] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:07:33,409] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:07:33,582] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:07:33,729] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:34,041] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:34,042] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:07:34,112] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:34,114] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:07:34,129] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:07:34,143] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
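The `nodes/self` dumps report `'clusterCompatibility': 458758` while `diag/eval`'s `cluster_compat_mode:get_compat_version()` returns `[7,6]`; the integer is simply `major * 0x10000 + minor`. A one-line decoder (illustrative sketch, not part of the harness):

```python
def decode_compat(value):
    """Split ns_server's clusterCompatibility integer into (major, minor)."""
    return divmod(value, 0x10000)

print(decode_compat(458758))  # -> (7, 6)
```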
[2024-02-01 21:07:34,159] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:07:34,213] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:07:34,214] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:07:34,220] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15778643968, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:07:34,223] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:07:34,224] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:07:34,232] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:07:34,232] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:07:34,387] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:07:34,390] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:07:34,530] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:07:34,674] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:34,989] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:34,991] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:07:35,062] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:35,062] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:07:35,078] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:07:35,092] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
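The dumped request headers show `'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA=='`, which is just the base64 encoding of `user:password` — that is why the log can print `auth: Administrator:password` next to it. A short sketch of how such a header is built:

```python
import base64

def basic_auth_header(user, password):
    """Build an HTTP Basic Authorization header value."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("Administrator", "password"))
# -> Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
```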
[2024-02-01 21:07:35,109] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:07:35,164] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:07:35,165] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:07:35,169] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15730212864, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:07:35,173] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:07:35,174] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:07:35,182] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:07:35,183] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:07:35,338] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:07:35,343] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:07:35,517] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:07:35,657] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:07:35,971] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:07:35,975] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:07:36,046] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:07:36,048] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:07:36,063] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:07:36,077] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:07:36,095] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:07:36,154] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:07:36,226] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:07:36,256] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:07:36,461] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:07:41,463] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:07:41,515] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:07:41,546] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:07:42,152] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:07:42,186] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:07:52,228] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:07:52,229] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:08:06,833] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:08:06,869] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:08:16,905] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:08:16,906] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:08:31,195] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:08:31,195] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:08:31,224] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:08:41,355] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:08:51,381] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:08:51,400] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:08:51,401] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706850521353, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 3cf5684e946e6e781ea0eecdc44a4134', 'serverTime': '2024-02-01T21:08:41.353Z'} [2024-02-01 21:08:51,401] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706850521324, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:08:41.324Z'} [2024-02-01 21:08:51,401] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706850521309, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 3cf5684e946e6e781ea0eecdc44a4134", 'serverTime': '2024-02-01T21:08:41.309Z'} [2024-02-01 21:08:51,401] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706850521173, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:08:41.173Z'} [2024-02-01 21:08:51,402] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706850521169, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:08:41.169Z'} [2024-02-01 21:08:51,402] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706850511403, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:08:31.403Z'} [2024-02-01 21:08:51,402] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706850511170, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:08:31.170Z'} [2024-02-01 21:08:51,402] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706850511156, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:08:31.156Z'} [2024-02-01 21:08:51,403] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706850511135, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:08:31.135Z'} [2024-02-01 21:08:51,403] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706850507850, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T21:08:27.850Z'} Cluster instance shutdown with force [2024-02-01 21:08:51,437] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:08:51,439] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:08:51,441] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:08:51,444] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:08:51,581] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:08:51,589] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:08:51,618] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:08:51,636] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:08:51,750] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:08:51,756] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:08:51,802] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:08:51,808] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:08:52,046] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 21:08:52,047] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2108-diag.zip [2024-02-01 21:08:52,084] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.160 [2024-02-01 21:08:52,086] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2108-diag.zip [2024-02-01 21:08:52,123] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 21:08:52,126] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2108-diag.zip [2024-02-01 21:08:52,134] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 21:08:52,136] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2108-diag.zip [2024-02-01 21:10:42,267] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:10:42,339] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2108-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 21:10:42,528] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2108-diag.zip [2024-02-01 21:10:42,577] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:10:43,345] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:10:43,517] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2108-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 21:10:43,880] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2108-diag.zip [2024-02-01 21:10:43,929] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:11:17,552] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:11:17,651] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2108-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 21:11:17,946] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2108-diag.zip [2024-02-01 21:11:17,999] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:11:42,553] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:11:42,624] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2108-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 21:11:42,884] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2108-diag.zip [2024-02-01 21:11:42,933] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 14 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple 
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_14
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 148.277s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_15
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=empty_files_in_log_dir,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 
'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'empty_files_in_log_dir', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 15, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_15'} [2024-02-01 21:11:42,960] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:11:43,062] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:11:43,197] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:11:43,469] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:11:43,493] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? 
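The comma-separated key=value string passed to `-t` above is what testrunner flattens into the Test Input params dict. A minimal sketch of that flattening, assuming a simplified parser (`parse_test_params` is a hypothetical helper for illustration, not the actual testrunner code, which also merges ini/conf-file values and handles quoting):

```python
def parse_test_params(param_str):
    """Split a testrunner-style 'k1=v1,k2=v2' string into a dict.

    Later duplicates overwrite earlier ones, so a key repeated on the
    command line (e.g. get-cbcollect-info above) appears only once in
    the resulting params. All values stay strings, as in the log.
    Values containing commas would need the real parser's quoting.
    """
    params = {}
    for pair in param_str.split(","):
        if "=" in pair:
            key, value = pair.split("=", 1)
            params[key.strip()] = value.strip()
    return params

params = parse_test_params("default_bucket=false,nodes_init=3,nodes_init=4,GROUP=SIMPLE")
```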
[2024-02-01 21:11:43,538] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:11:43,538] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #15 test_system_failure_create_drop_indexes_simple============== [2024-02-01 21:11:43,539] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 21:11:43,567] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:11:43,567] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:11:43,596] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:11:43,596] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:11:43,597] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 21:11:43,626] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 21:11:43,627] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4124999977648258, 'mem_free': 15744077824, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:11:43,627] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3875000029802322, 'mem_free': 15734562816, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:11:43,627] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.262499995529652, 'mem_free': 15551553536, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:11:43,628] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 21:11:43,632] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, 
attempt#1 of 5 [2024-02-01 21:11:43,770] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:11:43,911] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:11:44,242] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:11:44,248] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:11:44,350] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:11:44,488] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:11:44,760] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:11:44,764] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:11:44,863] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:11:45,008] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:11:45,330] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:11:45,339] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:11:45,480] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:11:45,623] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:11:45,892] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:11:52,641] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 21:11:52,642] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 21:11:52,645] - [basetestcase:811] INFO - closing all memcached 
connections Cluster instance shutdown with force [2024-02-01 21:11:52,677] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 21:11:52,678] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 21:11:52,709] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 21:11:52,710] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 21:12:02,163] - [basetestcase:229] INFO - initializing cluster [2024-02-01 21:12:02,168] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:12:02,265] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:12:02,389] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:02,695] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:02,733] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:12:02,829] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:12:02,960] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:03,261] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:03,321] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:03,501] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:12:03,502] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 
21:12:04,785] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:04,786] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:04,801] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:12:04,802] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:12:04,810] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:04,810] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:12:04,859] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:04,865] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:12:05,035] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:12:05,176] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:05,493] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:05,550] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:05,551] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:05,609] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:12:05,785] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:12:05,786] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 21:12:05,798] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:05,799] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:12:10,803] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:12:10,818] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:10,819] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:12:10,819] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:10,879] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2858122 [2024-02-01 21:12:10,880] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:12:10,885] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:12:11,055] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:12:11,257] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:11,572] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:11,614] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:12:11,758] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:12:11,901] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:12,208] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:12,274] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:12,451] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:12:12,452] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 21:12:14,719] - [remote_util:3401] INFO - command 
executed successfully with root
[2024-02-01 21:12:14,720] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:12:14,737] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:12:14,738] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:12:14,788] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 21:12:14,789] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 21:12:14,790] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 21:12:14,790] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 21:12:14,790] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 21:12:14,791] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 21:12:14,791] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 21:12:14,791] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:12:14,838] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:12:14,842] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:12:14,981] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:12:15,127] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:12:15,397] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:12:15,458] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:12:15,459] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:12:15,517] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:12:15,658] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:12:15,658] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 21:12:15,673] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:12:15,673] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:12:20,679] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:12:20,697] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:20,698] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:12:20,698] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:20,757] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3966756 [2024-02-01 21:12:20,758] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:12:20,761] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:12:20,936] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:12:21,133] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:21,449] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:21,491] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:12:21,664] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:12:21,807] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:22,122] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:22,183] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:22,361] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:12:22,362] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 21:12:24,675] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:12:24,677] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:24,693] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:12:24,693] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:12:24,701] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:24,701] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:12:24,752] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:24,756] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:12:24,899] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:12:25,038] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:25,307] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:25,374] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:25,375] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:25,433] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:12:25,615] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:12:25,615] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 21:12:25,627] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:25,627] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... 
[2024-02-01 21:12:30,632] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:12:30,648] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:30,648] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:12:30,648] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:30,706] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3316858 [2024-02-01 21:12:30,707] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:12:30,711] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:12:30,852] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:12:31,056] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:31,340] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:31,382] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:12:31,553] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:12:31,857] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:32,128] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:32,189] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:32,368] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:12:32,368] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 21:12:33,646] - [remote_util:3401] INFO - command 
executed successfully with root [2024-02-01 21:12:33,647] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:33,663] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:12:33,663] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:12:33,671] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:33,671] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:12:33,723] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:33,727] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:12:33,864] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:12:34,002] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:34,287] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:34,349] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:12:34,350] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:12:34,410] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:12:34,598] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:12:34,598] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:12:34,611] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:34,612] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... 
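Each of the four nodes above goes through the same reset cycle: stop the service, wipe data and config, then restart. A sketch that reconstructs the per-node command sequence visible in the log (`build_reset_commands` is a hypothetical helper for illustration; the actual harness issues each command over SSH via `command.raw` in remote_util):

```python
def build_reset_commands(base="/opt/couchbase/var/lib/couchbase"):
    """Reproduce the per-node reset sequence the log shows:
    stop couchbase-server, remove data and config contents, start it
    again. The 'Directory not empty' errors above come from the data
    wipe racing with files still being written under @2i."""
    return [
        "systemctl stop couchbase-server.service",
        f"rm -rf {base}/data/*",
        f"rm -rf {base}/config/*",
        "systemctl start couchbase-server.service",
    ]
```

After the start command, the harness sleeps five seconds and then confirms liveness by checking for the beam.smp process, as the log entries that follow show.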
[2024-02-01 21:12:39,617] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:12:39,632] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:12:39,633] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:12:39,633] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:12:39,692] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3319880
[2024-02-01 21:12:39,693] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:12:39,699] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:12:39,710] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:12:39,721] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:12:39,730] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:12:42,735] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:12:48,744] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 21:12:49,750] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:12:49,751] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:12:49,756] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15818055680, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:12:49,760] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:12:49,761] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:12:49,770] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 21:12:49,770] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 21:12:49,807] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:12:49,808] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:12:49,965] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:12:49,969] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:12:50,106] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:12:50,245] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:12:50,522] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:12:50,525] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:12:50,592] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:12:50,593] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:12:50,608] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
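The "Connection refused" followed by 404 "unknown pool" errors above are the expected progression while a wiped node boots: first ns_server is not yet listening, then it is up but has no initialized pool until init_cluster runs. A sketch of that polling logic with an injectable fetcher (`wait_for_node` and its return strings are illustrative assumptions, not the on_prem_rest_client API):

```python
import time

def wait_for_node(fetch, retries=5, delay=0):
    """Poll /pools/default. fetch() returns an HTTP status code, or
    raises ConnectionRefusedError while ns_server is still starting.
    200 -> node already belongs to an initialized cluster;
    404 ("unknown pool") -> node is up but uninitialized, so the
    caller can proceed with setupServices / init_cluster."""
    for _ in range(retries):
        try:
            status = fetch()
        except ConnectionRefusedError:
            time.sleep(delay)  # the log shows ~3s between attempts
            continue
        if status == 200:
            return "initialized"
        if status == 404:
            return "uninitialized"
    raise TimeoutError("node never responded on /pools/default")

# Simulated boot for node .160: refused twice, then 404 "unknown pool"
responses = iter([None, None, 404])
def fake_fetch():
    r = next(responses)
    if r is None:
        raise ConnectionRefusedError
    return r
```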
[2024-02-01 21:12:50,626] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:12:50,642] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:12:50,697] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:12:50,698] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:12:50,704] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15768743936, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:12:50,707] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:12:50,708] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:12:50,716] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 
21:12:50,716] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:12:50,863] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:12:50,867] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:12:51,010] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:12:51,148] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:51,465] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:51,467] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:12:51,538] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:51,539] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:12:51,557] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:12:51,571] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:12:51,586] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:12:51,642] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:12:51,643] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:12:51,648] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15777992704, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:12:51,652] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:12:51,653] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:12:51,661] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:12:51,661] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:12:51,807] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:12:51,811] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:12:51,910] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:12:52,052] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:52,330] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:52,332] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:12:52,401] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:52,402] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:12:52,417] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:12:52,431] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
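Each node above reports 'clusterCompatibility': 458758 in nodes/self while diag/eval's cluster_compat_mode:get_compat_version() returns [7,6]. The integer is simply major * 65536 + minor. A small illustrative sketch (the helper names below are ours, not part of testrunner):

```python
# Hypothetical helpers (not in testrunner) illustrating how ns_server
# packs the compat version seen in the log:
# clusterCompatibility = major * 0x10000 + minor.

def encode_compat_version(major: int, minor: int) -> int:
    """Pack a [major, minor] compat version into the integer form."""
    return major * 0x10000 + minor

def decode_compat_version(compat: int) -> tuple:
    """Unpack a clusterCompatibility integer back into (major, minor)."""
    return divmod(compat, 0x10000)

# The 7.6.0 nodes in this run report clusterCompatibility 458758,
# which decodes to the [7,6] returned by get_compat_version():
print(decode_compat_version(458758))  # (7, 6)
```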
[2024-02-01 21:12:52,446] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:12:52,498] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:12:52,499] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:12:52,505] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15756562432, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:12:52,509] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:12:52,510] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:12:52,518] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:12:52,518] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:12:52,669] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:12:52,672] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:12:52,849] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:12:52,996] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:12:53,320] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:12:53,322] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:12:53,393] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:12:53,394] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:12:53,413] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:12:53,428] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:12:53,445] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:12:53,495] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:12:53,562] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:12:53,563] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:12:53,774] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:12:58,778] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:12:58,826] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:12:58,857] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:12:59,501] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:12:59,535] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:13:09,577] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:13:09,578] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:13:23,814] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:13:23,848] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:13:33,887] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:13:33,888] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:13:48,077] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:13:48,078] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:13:48,109] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:13:58,239] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:14:08,266] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:14:08,288] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:14:08,288] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706850838238, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 5b6707a7aa83ea9caecd04c1c545702b', 'serverTime': '2024-02-01T21:13:58.238Z'} [2024-02-01 21:14:08,289] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706850838208, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:13:58.208Z'} [2024-02-01 21:14:08,289] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706850838192, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 5b6707a7aa83ea9caecd04c1c545702b", 'serverTime': '2024-02-01T21:13:58.192Z'} [2024-02-01 21:14:08,289] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706850838059, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:13:58.059Z'} [2024-02-01 21:14:08,290] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706850838054, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:13:58.054Z'} [2024-02-01 21:14:08,290] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706850828276, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:13:48.276Z'} [2024-02-01 21:14:08,290] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706850828054, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:13:48.054Z'} [2024-02-01 21:14:08,291] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706850828040, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:13:48.040Z'} [2024-02-01 21:14:08,291] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706850828020, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:13:48.020Z'} [2024-02-01 21:14:08,291] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706850824831, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T21:13:44.831Z'} Thu Feb 1 21:14:08 2024 Cluster instance shutdown with force [2024-02-01 21:14:08,302] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:14:08,307] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:14:08,314] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 Thu Feb 1 21:14:08 2024 [2024-02-01 21:14:08,322] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:14:08,468] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:14:08,471] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:14:08,483] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:14:08,487] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:14:08,676] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:14:08,679] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:14:08,703] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:14:08,710] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:14:09,005] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 21:14:09,007] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2114-diag.zip [2024-02-01 21:14:09,020] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.207 [2024-02-01 21:14:09,022] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2114-diag.zip [2024-02-01 21:14:09,038] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 21:14:09,041] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2114-diag.zip [2024-02-01 21:14:09,043] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 21:14:09,048] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2114-diag.zip [2024-02-01 21:15:59,152] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:15:59,336] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2114-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 21:15:59,748] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2114-diag.zip [2024-02-01 21:15:59,799] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:16:00,402] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:16:00,576] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2114-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 21:16:00,927] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2114-diag.zip [2024-02-01 21:16:00,976] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:16:29,532] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:16:29,673] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2114-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 21:16:30,024] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2114-diag.zip [2024-02-01 21:16:30,080] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:16:59,584] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:16:59,761] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2114-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 21:17:00,084] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2114-diag.zip [2024-02-01 21:17:00,134] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 15 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple 
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_15
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
During handling of the above exception, another exception occurred:
Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 145.341s
FAILED (failures=1)
test_kill_indexer_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... 
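The tracebacks above bottom out in `_rebalance_status_and_progress` (lib/membase/api/on_prem_rest_client.py, line 2080), which raises `RebalanceFailedException` once the server reports a rebalance status of 'none' together with an errorMessage. A minimal sketch of that decision, with simplified, hypothetical names (the real method also polls the cluster's rebalance progress endpoint over REST):

```python
class RebalanceFailedException(Exception):
    """Simplified stand-in for membase.api.exception.RebalanceFailedException."""


def rebalance_status_and_progress(response: dict):
    """Sketch of the status check that produced the failure above: a 'none'
    status combined with an errorMessage means the rebalance aborted
    server-side, and the task layer surfaces it as an exception."""
    if response.get("status") == "none" and "errorMessage" in response:
        raise RebalanceFailedException(
            "Rebalance Failed: %s - rebalance failed" % response)
    # A healthy in-flight rebalance reports 'running' plus a progress figure.
    return response.get("status"), response.get("progress", 0)


failed = {'status': 'none',
          'errorMessage': 'Rebalance failed. See logs for detailed reason. '
                          'You can try again.'}
try:
    rebalance_status_and_progress(failed)
except RebalanceFailedException as exc:
    print("caught:", exc)
```

In this run the underlying cause is visible in the UI logs collected earlier: ns_rebalancer hit {old_indexes_cleanup_failed, {error,eexist}} on ns_1@172.23.123.206, so every retry of the setUp rebalance failed the same way.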
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_16 ./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=1000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=empty_files_in_log_dir,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'empty_files_in_log_dir', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 
'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 16, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_16'} [2024-02-01 21:17:00,155] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:17:00,294] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:17:00,430] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:17:00,740] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:17:00,764] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? [2024-02-01 21:17:00,811] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:17:00,811] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #16 test_kill_indexer_create_drop_indexes_simple============== [2024-02-01 21:17:00,811] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 21:17:00,841] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:17:00,841] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:17:00,870] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:17:00,870] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:17:00,871] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 21:17:00,901] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 21:17:00,901] - [basetestcase:636] INFO - 172.23.123.157:8091 => 
{'services': ['index'], 'cpu_utilization': 0.4625000059604645, 'mem_free': 15764070400, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:17:00,902] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3875000029802322, 'mem_free': 15747067904, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:17:00,902] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.049999993294477, 'mem_free': 15571316736, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:17:00,903] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 21:17:00,906] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:17:01,054] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:17:01,195] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:17:01,512] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:17:01,518] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:17:01,659] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:17:01,798] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:17:02,109] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:17:02,114] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:17:02,214] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:17:02,362] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, 
is_linux_distro: True
[2024-02-01 21:17:02,677] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:02,682] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:17:02,817] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:17:02,960] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:03,231] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:09,759] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 21:17:09,759] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 21:17:09,763] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 21:17:09,797] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 21:17:09,798] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 21:17:09,830] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 21:17:09,831] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 21:17:20,097] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 21:17:20,102] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:17:20,246] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:17:20,454] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:20,761] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:20,807] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:17:20,950] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:17:21,093] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:21,359] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:21,418] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:21,596] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:17:21,597] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 21:17:22,792] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:22,792] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:22,805] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:17:22,806] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:17:22,813] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:22,814] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:17:22,864] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:22,868] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:17:23,005] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:17:23,138] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:23,448] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:23,506] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:23,506] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:23,569] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:17:23,748] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:17:23,749] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 21:17:23,761] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:23,762] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:17:28,767] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:17:28,781] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:28,782] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:17:28,782] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:28,836] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2863632
[2024-02-01 21:17:28,837] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:17:28,841] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:17:28,938] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:17:29,142] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:29,462] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:29,506] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:17:29,650] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:17:29,794] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:30,117] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:30,183] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:30,370] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:17:30,371] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 21:17:32,525] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:32,525] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:32,542] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:17:32,543] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:17:32,594] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 21:17:32,595] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 21:17:32,595] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 21:17:32,595] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 21:17:32,597] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 21:17:32,597] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 21:17:32,598] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 21:17:32,599] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:17:32,649] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:32,656] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:17:32,827] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:17:32,971] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:33,287] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:33,348] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:33,348] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:33,408] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:17:33,547] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:17:33,548] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 21:17:33,563] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:33,563] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:17:38,567] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:17:38,586] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:38,586] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:17:38,587] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:38,642] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3972162
[2024-02-01 21:17:38,643] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:17:38,647] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:17:38,820] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:17:39,017] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:39,333] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:39,374] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:17:39,514] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:17:39,661] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:39,971] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:40,032] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:40,210] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:17:40,211] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 21:17:42,460] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:42,460] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:42,475] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:17:42,476] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:17:42,484] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:42,485] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:17:42,538] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:42,541] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:17:42,719] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:17:42,850] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:43,119] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:43,181] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:43,184] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:43,237] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:17:43,411] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:17:43,411] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 21:17:43,425] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:43,426] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:17:48,430] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:17:48,446] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:48,447] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:17:48,447] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:48,508] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3322165
[2024-02-01 21:17:48,508] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:17:48,512] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:17:48,706] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:17:48,914] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:49,193] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:49,232] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:17:49,409] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:17:49,553] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:49,823] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:49,886] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:50,068] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:17:50,069] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 21:17:51,223] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:51,224] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:51,240] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:17:51,241] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:17:51,248] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:51,249] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:17:51,300] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:51,304] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:17:51,441] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:17:51,586] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:17:51,899] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:17:51,962] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:17:51,963] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:52,020] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:17:52,195] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:17:52,196] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 21:17:52,209] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:52,210] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:17:57,214] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:17:57,229] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:17:57,230] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:17:57,230] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:17:57,287] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3325073
[2024-02-01 21:17:57,288] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:17:57,293] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:17:57,305] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:17:57,316] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:17:57,325] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:18:00,330] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:18:06,341] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 21:18:06,863] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:18:06,864] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:18:06,870] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15810420736, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
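The polling above shows the harness treating two failure modes as retryable while a freshly reset node comes up: a 404 with body `"unknown pool"` (the server answers but the cluster is not yet initialized) and `[Errno 111] Connection refused` (the server process is still starting). A minimal sketch of that retry logic, with illustrative names not taken from testrunner:

```python
import time
import urllib.error
import urllib.request

def classify(status, body):
    """Map a /pools/default response to a poll decision.

    404 with "unknown pool" means the node is up but uninitialized;
    200 means the cluster is initialized; anything else is an error.
    """
    if status == 200:
        return "initialized"
    if status == 404 and "unknown pool" in body:
        return "uninitialized"
    return "error"

def wait_for_pools_default(url, attempts=5, delay=3.0):
    """Poll /pools/default, retrying while the connection is refused."""
    for _ in range(attempts):
        try:
            with urllib.request.urlopen(url, timeout=5) as resp:
                return classify(resp.status, resp.read().decode())
        except urllib.error.HTTPError as e:
            return classify(e.code, e.read().decode())
        except OSError:  # e.g. [Errno 111] Connection refused while starting
            time.sleep(delay)
    return "unreachable"
```

In the log both the 404 and the refused connection eventually resolve once `ns_server` finishes booting, which is why the client keeps polling rather than failing fast.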
[2024-02-01 21:18:06,874] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:18:06,875] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:18:06,884] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 21:18:06,885] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 21:18:06,923] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:18:06,924] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:18:07,066] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:18:07,070] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:18:07,248] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:18:07,399] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:18:07,768] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:18:07,771] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:18:07,839] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:18:07,841] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:18:07,856] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:07,869] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:07,885] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:18:07,936] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:18:07,937] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:18:07,943] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15776423936, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:18:07,947] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:18:07,948] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:18:07,956] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:18:07,957] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:18:08,101] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:18:08,105] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:18:08,280] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:18:08,419] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:18:08,747] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:18:08,748] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:18:08,817] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:18:08,818] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:18:08,834] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:08,849] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:08,865] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:18:08,919] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:18:08,920] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:18:08,925] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15781601280, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:18:08,928] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:18:08,929] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:18:08,937] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:18:08,938] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:18:09,079] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:18:09,082] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:18:09,256] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:18:09,399] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:18:09,718] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:18:09,721] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:18:09,789] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:18:09,789] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:18:09,807] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:09,821] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
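Each node goes through the same REST initialization visible in the log: set the memory quota on `pools/default`, register services via `node/controller/setupServices`, set credentials via `settings/web`, and set the GSI storage mode via `settings/indexes`. A sketch of those payloads as url-encoded bodies, reconstructed from the logged params rather than copied from testrunner's rest client:

```python
from urllib.parse import urlencode

def init_node_payloads(host, services=("kv",), quota_mb=8560):
    """Return the ordered (endpoint, body) pairs seen in the init log.

    Endpoint order and parameter names follow the log lines above;
    the function itself is illustrative, not part of testrunner.
    """
    return [
        ("/pools/default", urlencode({"memoryQuota": quota_mb})),
        ("/node/controller/setupServices",
         urlencode({"hostname": host, "user": "Administrator",
                    "password": "password",
                    "services": ",".join(services)})),
        ("/settings/web",
         urlencode({"port": 8091, "username": "Administrator",
                    "password": "password"})),
        ("/settings/indexes", urlencode({"storageMode": "plasma"})),
    ]
```

For the first node the log passes `['kv', 'n1ql']`, which url-encodes to `services=kv%2Cn1ql` exactly as shown in the `setupServices` line.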
[2024-02-01 21:18:09,838] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:18:09,894] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 21:18:09,895] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:18:09,900] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15740846080, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:18:09,905] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:18:09,906] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:18:09,915] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:18:09,915] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:18:10,060] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:18:10,065] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:18:10,242] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:18:10,392] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:18:10,699] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:18:10,702] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:18:10,778] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:18:10,779] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:18:10,795] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:10,809] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:18:10,826] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:18:10,878] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 ****
[2024-02-01 21:18:10,940] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password
[2024-02-01 21:18:10,942] - [internal_user:36] INFO - Exception while deleting user. Exception is -b'"User was not found."'
[2024-02-01 21:18:11,132] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 21:18:16,134] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 21:18:16,182] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 21:18:16,215] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:18:16,883] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 21:18:16,915] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 21:18:26,955] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 21:18:26,956] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 21:18:41,668] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 21:18:41,703] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 21:18:51,742] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 21:18:51,743] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 21:19:06,025] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:19:06,025] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:19:06,057] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 21:19:16,185] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 21:19:26,210] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
[2024-02-01 21:19:26,244] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 21:19:26,244] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706851156184, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = d93f4be8268bc681391e74a85977b4d9', 'serverTime': '2024-02-01T21:19:16.184Z'}
[2024-02-01 21:19:26,245] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706851156155, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:19:16.155Z'}
[2024-02-01 21:19:26,245] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706851156139, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = d93f4be8268bc681391e74a85977b4d9", 'serverTime': '2024-02-01T21:19:16.139Z'}
[2024-02-01 21:19:26,246] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706851156005, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:19:16.005Z'}
[2024-02-01 21:19:26,246] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706851156000, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:19:16.000Z'}
[2024-02-01 21:19:26,247] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706851146218, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:19:06.218Z'}
[2024-02-01 21:19:26,247] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706851146001, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:19:06.001Z'}
[2024-02-01 21:19:26,248] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706851145987, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:19:05.987Z'}
[2024-02-01 21:19:26,248] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706851145969, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:19:05.969Z'}
[2024-02-01 21:19:26,249] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706851142685, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up.
Tags: []", 'serverTime': '2024-02-01T21:19:02.685Z'} Thu Feb 1 21:19:26 2024 Cluster instance shutdown with force [2024-02-01 21:19:26,261] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:19:26,267] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:19:26,270] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 Thu Feb 1 21:19:26 2024 [2024-02-01 21:19:26,286] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:19:26,418] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:19:26,424] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:19:26,439] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:19:26,447] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:19:26,609] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:19:26,626] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:19:26,652] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:19:26,653] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:19:26,937] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 21:19:26,939] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2119-diag.zip [2024-02-01 21:19:26,945] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting
logs from 172.23.123.157 [2024-02-01 21:19:26,946] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2119-diag.zip [2024-02-01 21:19:26,962] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 21:19:26,964] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2119-diag.zip [2024-02-01 21:19:26,977] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 21:19:26,979] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2119-diag.zip [2024-02-01 21:21:16,656] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:21:16,835] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2119-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 21:21:17,286] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2119-diag.zip [2024-02-01 21:21:17,341] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:21:18,229] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:21:18,414] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2119-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 21:21:18,813] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2119-diag.zip [2024-02-01 21:21:18,866] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:21:52,623] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:21:52,802] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2119-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 21:21:53,109] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2119-diag.zip [2024-02-01 21:21:53,161] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:17,557] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:17,690] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2119-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 21:22:18,036] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2119-diag.zip [2024-02-01 21:22:18,088] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 16 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple 
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_16 Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed FAIL ====================================================================== FAIL: test_kill_indexer_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/gsi/collections_plasma.py", line 111, in setUp super(PlasmaCollectionsTests, self).setUp() File "pytests/gsi/base_gsi.py", line 43, in setUp super(BaseSecondaryIndexingTests, self).setUp() File "pytests/gsi/newtuq.py", line 11, in setUp super(QueryTests, self).setUp() File "pytests/basetestcase.py", line 485, in setUp self.fail(e) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 146.105s FAILED (failures=1) test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... 
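Every setUp failure above funnels through the same check: `on_prem_rest_client._rebalance_status_and_progress` raises `RebalanceFailedException` once the REST reply comes back as `{'status': 'none', 'errorMessage': ...}`. A simplified stand-in for that interpretation step, assuming a `/pools/default/rebalanceProgress`-style payload (this is a sketch, not the testrunner's actual implementation):

```python
class RebalanceFailedException(Exception):
    """Mirrors membase.api.exception.RebalanceFailedException (stand-in)."""


def rebalance_status_and_progress(payload):
    """Map a rebalance-progress REST reply to (status, progress).

    A reply carrying 'errorMessage' means the rebalance died server-side;
    the caller surfaces it exactly like the tracebacks in this log.
    """
    status = payload.get("status")
    if "errorMessage" in payload:
        raise RebalanceFailedException(
            f"Rebalance Failed: {payload} - rebalance failed"
        )
    if status == "running":
        return status, payload.get("progress", 0.0)
    # 'none' without an errorMessage: rebalance finished (or never started).
    return status, 100.0
```

Note the failure mode here: the status is `'none'` rather than `'failed'`, so the error message, not the status string, is what signals the failure.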
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_17 ./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=stress_cpu,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,num_failure_iteration=1,concur_system_failure=True,simple_create_index=True Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'stress_cpu', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'num_failure_iteration': '1', 'concur_system_failure': 'True', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 
'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 17, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_17'} [2024-02-01 21:22:18,109] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:22:18,243] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:22:18,383] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:18,653] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:18,674] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? [2024-02-01 21:22:18,723] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:22:18,723] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #17 test_system_failure_create_drop_indexes_simple============== [2024-02-01 21:22:18,724] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 21:22:18,752] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:22:18,752] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:22:18,780] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:22:18,781] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:22:18,781] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 21:22:18,810] - [basetestcase:634] INFO - ------- Cluster 
statistics ------- [2024-02-01 21:22:18,810] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3500000014901161, 'mem_free': 15747608576, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:22:18,811] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3875000029802322, 'mem_free': 15725973504, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:22:18,811] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.125000014901161, 'mem_free': 15529844736, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:22:18,811] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 21:22:18,815] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:22:18,954] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:22:19,093] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:19,365] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:19,371] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:22:19,474] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:22:19,614] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:19,938] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:19,945] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:22:20,079] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 
21:22:20,219] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:20,541] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:20,547] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:22:20,726] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:22:20,869] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:21,179] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:27,747] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 21:22:27,748] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 21:22:27,751] - [basetestcase:811] INFO - closing all memcached connections Cluster instance shutdown with force [2024-02-01 21:22:27,789] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 21:22:27,789] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 21:22:27,822] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 21:22:27,823] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 21:22:36,104] - [basetestcase:229] INFO - initializing cluster [2024-02-01 21:22:36,110] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:22:36,284] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:22:36,428] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:36,742] - 
[remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:36,785] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:22:36,922] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:22:37,062] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:37,378] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:37,442] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:22:37,625] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:22:37,626] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 21:22:38,994] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:38,995] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:39,011] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:22:39,012] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:22:39,019] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:39,020] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:22:39,072] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:39,076] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:22:39,215] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:22:39,354] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 
[2024-02-01 21:22:39,669] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:39,732] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:22:39,733] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:39,789] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:22:39,969] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:22:39,969] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 21:22:39,982] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:39,982] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:22:44,987] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:22:45,002] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:45,002] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:22:45,003] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:45,061] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2869121 [2024-02-01 21:22:45,062] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:22:45,066] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:22:45,206] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:22:45,408] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:45,721] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 
11 [2024-02-01 21:22:45,762] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:22:45,898] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:22:46,039] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:46,356] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:46,418] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:22:46,596] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:22:46,597] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 21:22:48,863] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:48,864] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:48,923] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:22:48,924] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:22:48,978] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ... 
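The cleanup `rm -rf` on 172.23.123.206 is logged as "executed with root but got an error" because the indexer still holds `@2i` shard directories open while they are being deleted. A sketch of how remote stderr can be split into fatal versus ignorable lines, in the spirit of the job's `exclude_keywords` parameter; `classify_stderr` and its default patterns are hypothetical, not remote_util APIs:

```python
import re


def classify_stderr(lines, ignorable=(r"Directory not empty", r"No such file")):
    """Split remote-command stderr into (real_errors, ignored) lines.

    Hypothetical helper: patterns like 'Directory not empty' during data-dir
    cleanup are usually transient (the still-running indexer recreates shard
    directories mid-delete) and are safe to retry after stopping the service;
    anything else is treated as a real failure.
    """
    ignored, real = [], []
    for line in lines:
        if any(re.search(pat, line) for pat in ignorable):
            ignored.append(line)
        else:
            real.append(line)
    return real, ignored
```

Under this classification, all six `rm: cannot remove ... Directory not empty` lines above would be ignorable, which is consistent with the run proceeding to wipe `config/*` and restart the server anyway.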
[2024-02-01 21:22:48,980] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty [2024-02-01 21:22:48,981] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty [2024-02-01 21:22:48,981] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty [2024-02-01 21:22:48,982] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty [2024-02-01 21:22:48,982] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty [2024-02-01 21:22:48,982] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty [2024-02-01 21:22:48,983] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:22:49,031] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:49,035] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:22:49,175] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:22:49,308] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:49,625] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:49,689] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:22:49,690] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:49,749] - 
[remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:22:49,922] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:22:49,922] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 21:22:49,935] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:49,936] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:22:54,940] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:22:54,958] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:54,959] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:22:54,960] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:55,023] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3977537 [2024-02-01 21:22:55,023] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:22:55,027] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:22:55,200] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:22:55,403] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:55,672] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:55,714] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:22:55,854] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:22:55,992] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:56,305] - [remote_util:3685] INFO 
- extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:56,364] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:22:56,543] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:22:56,544] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 21:22:58,759] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:58,760] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:58,777] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:22:58,778] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:22:58,785] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:58,786] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:22:58,835] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:58,840] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:22:58,983] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:22:59,121] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:22:59,431] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:22:59,492] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:22:59,492] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:22:59,548] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:22:59,721] - [remote_util:3982] INFO - 
Running systemd command on this server [2024-02-01 21:22:59,721] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 21:22:59,733] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:22:59,734] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:23:04,734] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:23:04,749] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:04,749] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:23:04,751] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:23:04,809] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3327481 [2024-02-01 21:23:04,810] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:23:04,814] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:23:04,952] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:23:05,162] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:05,474] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:23:05,517] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:23:05,661] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:23:05,798] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:06,111] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 
21:23:06,173] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:23:06,351] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:23:06,351] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 21:23:07,642] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:07,643] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:23:07,658] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:23:07,660] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:23:07,668] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:07,668] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:23:07,718] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:07,724] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:23:07,869] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:23:08,012] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:08,331] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:23:08,393] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:23:08,393] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:23:08,451] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:23:08,639] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:23:08,639] - [remote_util:3352] INFO - running 
command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:23:08,652] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:08,652] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:23:13,657] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:23:13,676] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:13,677] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:23:13,677] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:23:13,732] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3330272 [2024-02-01 21:23:13,732] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:23:13,738] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:23:13,750] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:23:13,760] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:23:13,769] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:23:16,774] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:23:22,785] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:23:22,848] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:23:22,849] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 21:23:22,855] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15813775360, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 
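The repeated 404 "unknown pool" and `[Errno 111] Connection refused` errors above are expected while a freshly restarted node is still coming up: a refused connection means ns_server is not listening yet, and 404 "unknown pool" means the node is up but the cluster has not been initialized. A minimal sketch of that classification logic (hypothetical helper name, not actual testrunner code):

```python
# Sketch: classify a /pools/default probe while waiting for a restarted
# Couchbase node. Hypothetical helper, modeled on the retry pattern in the
# log above, not the real on_prem_rest_client implementation.
def pool_state(status_code=None, body=b"", conn_refused=False):
    if conn_refused:
        return "starting"          # [Errno 111] Connection refused: ns_server not up
    if status_code == 404 and b"unknown pool" in body:
        return "uninitialized"     # node answering, cluster not initialized yet
    if status_code == 200:
        return "ready"
    return "unknown"
```

The poller simply retries until the state reaches "ready", which is why the same error lines repeat several times before initialization proceeds.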
[2024-02-01 21:23:22,859] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:23:22,860] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:23:22,869] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 21:23:22,870] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 21:23:22,906] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:23:22,907] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 21:23:23,058] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:23:23,062] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:23:23,240] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:23:23,386] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:23,727] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:23:23,729] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
[2024-02-01 21:23:23,797] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:23,798] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:23:23,815] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:23:23,830] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:23:23,846] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:23:23,904] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:23:23,905] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:23:23,910] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15776899072, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:23:23,914] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic 
QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:23:23,916] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:23:23,924] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:23:23,924] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:23:24,080] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:23:24,084] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:23:24,220] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:23:24,368] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:24,689] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:23:24,691] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:23:24,758] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:24,759] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:23:24,776] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:23:24,792] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:23:24,809] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:23:24,865] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:23:24,866] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:23:24,872] - [task:166] INFO - {'uptime': '19', 'memoryTotal': 16747917312, 'memoryFree': 15767142400, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:23:24,875] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:23:24,876] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:23:24,884] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:23:24,884] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:23:25,038] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:23:25,041] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:23:25,215] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:23:25,359] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:25,683] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:23:25,685] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:23:25,756] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:25,757] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:23:25,773] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:23:25,788] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
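The `Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==` header printed in the error entries above is just the base64 encoding of the test credentials `Administrator:password`. A short sketch of how such a header is built (hypothetical helper, shown only to decode what the log is printing):

```python
import base64

# Sketch: construct the HTTP Basic auth header that the REST client logs
# above. Credentials are the test defaults visible in the log.
def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return {
        "Authorization": f"Basic {token}",
        "Content-Type": "application/x-www-form-urlencoded",
    }

hdrs = basic_auth_header("Administrator", "password")
# hdrs["Authorization"] → "Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA=="
```

This is why the header value is identical in every request in the log: all nodes use the same default credentials during cluster initialization.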
[2024-02-01 21:23:25,804] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:23:25,857] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:23:25,858] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:23:25,863] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15725584384, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:23:25,867] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:23:25,868] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:23:25,876] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:23:25,877] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:23:26,036] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:23:26,040] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:23:26,141] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:23:26,284] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:23:26,597] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:23:26,599] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:23:26,671] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:23:26,671] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:23:26,690] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:23:26,705] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:23:26,723] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:23:26,779] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:23:26,841] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:23:26,843] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:23:27,037] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:23:32,042] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:23:32,095] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:23:32,129] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:23:32,786] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:23:32,817] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:23:42,859] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:23:42,860] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:23:57,118] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:23:57,152] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:24:07,188] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:24:07,188] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:24:21,689] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:24:21,689] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:24:21,716] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:24:31,841] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:24:41,866] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:24:41,887] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:24:41,888] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706851471840, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 16b339047dba85e76ca36687849d5944', 'serverTime': '2024-02-01T21:24:31.840Z'} [2024-02-01 21:24:41,888] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706851471812, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:24:31.812Z'} [2024-02-01 21:24:41,889] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706851471794, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 16b339047dba85e76ca36687849d5944", 'serverTime': '2024-02-01T21:24:31.794Z'} [2024-02-01 21:24:41,889] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706851471672, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:24:31.672Z'} [2024-02-01 21:24:41,889] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706851471667, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:24:31.667Z'} [2024-02-01 21:24:41,890] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706851461900, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:24:21.900Z'} [2024-02-01 21:24:41,890] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706851461669, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:24:21.669Z'} [2024-02-01 21:24:41,891] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706851461653, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:24:21.653Z'} [2024-02-01 21:24:41,891] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706851461633, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:24:21.633Z'} [2024-02-01 21:24:41,891] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706851458361, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T21:24:18.361Z'} Cluster instance shutdown with force [2024-02-01 21:24:41,902] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:24:41,906] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:24:41,915] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:24:41,922] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:24:42,062] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:24:42,067] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:24:42,100] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:24:42,104] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:24:42,261] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:24:42,263] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:24:42,306] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:24:42,311] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:24:42,573] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 21:24:42,575] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2124-diag.zip [2024-02-01 21:24:42,609] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01
21:24:42,610] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 21:24:42,612] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2124-diag.zip
Collecting logs from 172.23.123.157
[2024-02-01 21:24:42,615] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2124-diag.zip
[2024-02-01 21:24:42,635] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.160
[2024-02-01 21:24:42,637] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2124-diag.zip
[2024-02-01 21:26:32,715] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:26:32,801] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2124-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 21:26:33,168] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2124-diag.zip
[2024-02-01 21:26:33,218] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:26:33,879] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:26:33,968] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2124-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 21:26:34,364] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2124-diag.zip
[2024-02-01 21:26:34,413] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:13,581] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:13,764] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2124-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 21:27:14,058] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2124-diag.zip
[2024-02-01 21:27:14,108] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:33,247] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:33,377] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2124-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 21:27:33,698] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2124-diag.zip
[2024-02-01 21:27:33,752] - [remote_util:3401] INFO - command executed successfully with root

summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 17
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple

testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_17

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 143.792s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ...
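The failure chain above always bottoms out in `_rebalance_status_and_progress` raising `RebalanceFailedException` when ns_server reports status `'none'` together with an `errorMessage`. A minimal sketch of that decision (assumption: simplified from the traceback; this is not the actual code in `lib/membase/api/on_prem_rest_client.py`):

```python
# Hypothetical simplification of the poll step seen in the traceback:
# a terminal status plus an errorMessage from /pools/default/rebalanceProgress
# is treated as a failed rebalance and surfaced as an exception.

class RebalanceFailedException(Exception):
    pass

def rebalance_status_and_progress(response: dict):
    """Return (status, progress), or raise if the server reports failure."""
    if response.get('errorMessage'):
        # Mirrors the log: {'status': 'none', 'errorMessage': ...} -> raise
        raise RebalanceFailedException(
            "Rebalance Failed: %s - rebalance failed" % response)
    return response.get('status'), response.get('progress', 0)

# The failing poll from this run:
resp = {'status': 'none',
        'errorMessage': 'Rebalance failed. See logs for detailed reason. '
                        'You can try again.'}
err = None
try:
    rebalance_status_and_progress(resp)
except RebalanceFailedException as exc:
    err = exc
```

Because `setUp` converts this into `self.fail(e)`, every test in the suite that needs a rebalance fails at setup with the same `AssertionError`, which is why all 17 failures share one root cause.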
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_18

./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=stress_ram,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,num_failure_iteration=1,concur_system_failure=True,simple_create_index=True

Test Input params:
{'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'stress_ram', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'num_failure_iteration': '1', 'concur_system_failure': 'True', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 18, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_18'}

[2024-02-01 21:27:33,775] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:27:33,879] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:27:34,018] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:34,293] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:34,315] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 21:27:34,360] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:27:34,361] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #18 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 21:27:34,361] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 21:27:34,390] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:27:34,391] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:27:34,421] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:27:34,422] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:27:34,422] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 21:27:34,453] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 21:27:34,453] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.3874999843537807, 'mem_free': 15765098496, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:27:34,453] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.4749999940395355, 'mem_free': 15725838336, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:27:34,454] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.925000000745058, 'mem_free': 15565520896, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:27:34,454] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 21:27:34,458] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:27:34,595] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:27:34,735] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:35,044] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:35,050] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:27:35,150] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:27:35,289] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:35,606] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:35,612] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:27:35,744] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:27:35,884] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:36,193] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:36,201] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:27:36,338] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:27:36,479] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:36,795] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:43,662] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 21:27:43,663] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 21:27:43,667] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 21:27:43,701] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 21:27:43,702] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 21:27:43,732] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 21:27:43,733] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 21:27:52,624] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 21:27:52,629] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:27:52,808] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:27:52,945] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:53,253] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:53,294] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:27:53,472] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:27:53,613] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:27:53,927] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:53,986] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 21:27:54,165] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:27:54,166] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 21:27:55,381] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:55,382] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:27:55,393] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:27:55,394] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:27:55,400] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:55,400] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:27:55,449] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:55,450] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:27:55,550] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:27:55,687] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
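The `Test Input params` dict printed earlier is built from the comma-separated `key=value` pairs passed via `-p` (and `-t`) in the `./testrunner` invocation. A minimal sketch of that mapping (`parse_test_params` is a hypothetical helper for illustration, not testrunner's actual parser):

```python
# Hypothetical sketch: turn a testrunner-style '-p key=value,key=value'
# string into a dict of string parameters, as shown in the log above.

def parse_test_params(p: str) -> dict:
    params = {}
    for pair in p.split(','):
        # partition keeps any '=' inside the value intact
        key, _, value = pair.partition('=')
        params[key] = value
    return params

params = parse_test_params('bucket_size=5000,reset_services=True,nodes_init=3')
```

Note that all values stay strings (e.g. `'nodes_init': '3'`), which matches the dict in the log; values containing commas would need quoting or a different separator under this scheme.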
[2024-02-01 21:27:56,001] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:27:56,062] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 21:27:56,063] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:27:56,120] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:27:56,257] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:27:56,258] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 21:27:56,270] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:27:56,270] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:28:01,276] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:28:01,292] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:28:01,292] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:28:01,293] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:28:01,354] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2874606
[2024-02-01 21:28:01,355] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:28:01,359] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:28:01,498] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:28:01,702] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:28:02,013] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:28:02,057] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:28:02,232] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:28:02,378] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:28:02,649] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:28:02,710] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists
[2024-02-01 21:28:02,889] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:28:02,889] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 21:28:05,123] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:28:05,124] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:28:05,139] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:28:05,140] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:28:05,194] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 21:28:05,194] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty [2024-02-01 21:28:05,195] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty [2024-02-01 21:28:05,196] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty [2024-02-01 21:28:05,196] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty [2024-02-01 21:28:05,196] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty [2024-02-01 21:28:05,196] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty [2024-02-01 21:28:05,197] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:28:05,247] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:05,251] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:28:05,349] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:28:05,483] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:05,749] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:05,804] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:28:05,804] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:05,858] - 
[remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:28:05,931] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:28:05,932] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 21:28:05,943] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:05,943] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:28:10,944] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:28:10,961] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:10,962] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:28:10,962] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:11,019] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3982913 [2024-02-01 21:28:11,020] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:28:11,025] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:28:11,160] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:28:11,369] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:11,685] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:11,727] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:28:11,866] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:28:12,004] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:12,325] - [remote_util:3685] INFO 
- extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:12,387] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:28:12,572] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:28:12,573] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 21:28:14,868] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:14,869] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:14,884] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:28:14,885] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:28:14,894] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:14,895] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:28:14,942] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:14,947] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:28:15,086] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:28:15,227] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:15,490] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:15,545] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:28:15,545] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:15,601] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:28:15,780] - [remote_util:3982] INFO - 
Running systemd command on this server [2024-02-01 21:28:15,780] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 21:28:15,795] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:15,795] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:28:20,800] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:28:20,817] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:20,817] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:28:20,817] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:20,874] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3332790 [2024-02-01 21:28:20,874] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:28:20,878] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:28:24,024] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:28:24,225] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:24,490] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:24,525] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:28:24,658] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:28:24,796] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:25,116] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 
21:28:25,177] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:28:25,269] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:28:25,270] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 21:28:26,651] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:26,652] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:26,667] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:28:26,668] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:28:26,675] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:26,676] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:28:26,722] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:26,725] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:28:26,825] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:28:26,951] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:27,211] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:27,274] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:28:27,274] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:27,331] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:28:27,522] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:28:27,522] - [remote_util:3352] INFO - running 
command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:28:27,534] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:27,535] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:28:32,540] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:28:32,553] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:32,553] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:28:32,553] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:28:32,611] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3335457 [2024-02-01 21:28:32,611] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:28:32,616] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:28:32,626] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:28:32,636] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:28:32,643] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:28:35,648] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:28:41,658] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:28:41,770] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:28:41,772] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 21:28:41,777] - [task:166] INFO - {'uptime': '44', 'memoryTotal': 16747913216, 'memoryFree': 15825313792, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 
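The repeated 404 ("unknown pool") and connection-refused errors above are the expected transient states while a freshly restarted node's REST interface comes up: connection refused means port 8091 is not listening yet, and "unknown pool" means ns_server is running but the node is not initialized. A minimal sketch of such a readiness poll (the `fetch` callable and the `wait_for_pools_default` name are illustrative, not testrunner's actual API):

```python
import time

def wait_for_pools_default(fetch, retries=5, delay=1):
    """Poll GET /pools/default until the node reports a pool.

    `fetch` returns (http_status, body); a 404 "unknown pool" means
    ns_server is up but the node is uninitialized, while a
    ConnectionRefusedError means the REST port is not listening yet.
    """
    for _ in range(retries):
        try:
            status, body = fetch()
            if status == 200:
                return body
        except ConnectionRefusedError:
            pass  # port not open yet; fall through and retry
        time.sleep(delay)
    raise TimeoutError("/pools/default never became available")
```

In the log above the poll goes through exactly these phases on 172.23.123.160: two connection-refused attempts, then a 404, then success.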
[2024-02-01 21:28:41,783] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:28:41,784] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:28:41,792] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 21:28:41,793] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 21:28:41,824] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:28:41,825] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 21:28:41,958] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:28:41,962] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:28:42,135] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:28:42,276] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:42,605] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:42,607] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
[2024-02-01 21:28:42,674] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:42,676] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:28:42,692] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:28:42,707] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:28:42,723] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:28:42,784] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:28:42,786] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:28:42,791] - [task:166] INFO - {'uptime': '33', 'memoryTotal': 16747913216, 'memoryFree': 15770128384, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:28:42,795] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic 
QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:28:42,796] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:28:42,804] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:28:42,805] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:28:42,961] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:28:42,965] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:28:43,154] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:28:43,296] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:43,610] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:43,611] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:28:43,683] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:43,685] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:28:43,702] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:28:43,718] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
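The `Authorization: Basic ...` header repeated in these requests is just base64 of `user:password`; decoding it confirms which credentials the REST client is sending. A small illustrative helper (not part of testrunner):

```python
import base64

def decode_basic_auth(header_value):
    # "Basic <base64>" -> "user:password"
    scheme, _, payload = header_value.partition(' ')
    if scheme != 'Basic':
        raise ValueError('not a Basic auth header')
    return base64.b64decode(payload).decode('utf-8')
```

Decoding the value from the log yields `Administrator:password`, matching the `auth:` field printed alongside each error.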
[2024-02-01 21:28:43,735] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:28:43,791] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:28:43,792] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:28:43,797] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15785869312, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:28:43,800] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:28:43,801] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:28:43,811] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:28:43,811] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:28:43,950] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:28:43,954] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:28:44,054] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:28:44,190] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:44,500] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:44,503] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:28:44,574] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:44,575] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:28:44,591] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:28:44,605] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
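The nodes/self output reports `'clusterCompatibility': 458758` while diag/eval returns compat version `[7,6]`; the two agree because ns_server packs the compat version as `major * 65536 + minor`. A minimal decoder sketch:

```python
def decode_cluster_compat(value):
    # ns_server encodes cluster compatibility as (major << 16) | minor,
    # so 458758 -> (7, 6), matching get_compat_version() in the log
    return value >> 16, value & 0xFFFF
```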
[2024-02-01 21:28:44,622] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:28:44,677] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:28:44,678] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:28:44,683] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15728066560, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:28:44,687] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:28:44,689] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:28:44,697] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:28:44,698] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:28:44,857] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:28:44,861] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:28:45,034] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:28:45,178] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:28:45,491] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:28:45,496] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:28:45,563] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:28:45,563] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:28:45,580] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:28:45,594] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:28:45,612] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:28:45,663] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:28:45,723] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:28:45,724] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:28:45,920] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:28:50,925] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:28:50,971] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:28:51,004] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:28:51,670] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:28:51,703] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:29:01,734] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.03 seconds [2024-02-01 21:29:01,735] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:29:16,389] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:29:16,423] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:29:26,464] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:29:26,464] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:29:40,708] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:29:40,709] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:29:40,740] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:29:50,870] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:30:00,897] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:30:00,917] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:30:00,918] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706851790869, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 579021c4c7d287beaad9672df79c0021', 'serverTime': '2024-02-01T21:29:50.869Z'} [2024-02-01 21:30:00,918] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706851790839, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:29:50.839Z'} [2024-02-01 21:30:00,918] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706851790823, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 579021c4c7d287beaad9672df79c0021", 'serverTime': '2024-02-01T21:29:50.823Z'} [2024-02-01 21:30:00,919] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706851790690, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:29:50.690Z'} [2024-02-01 21:30:00,919] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706851790685, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:29:50.685Z'} [2024-02-01 21:30:00,919] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706851780893, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:29:40.893Z'} [2024-02-01 21:30:00,919] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706851780685, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:29:40.685Z'} [2024-02-01 21:30:00,920] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706851780656, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:29:40.656Z'} [2024-02-01 21:30:00,920] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706851780638, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:29:40.638Z'} [2024-02-01 21:30:00,920] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706851777358, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T21:29:37.358Z'} Cluster instance shutdown with force [2024-02-01 21:30:00,931] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:30:00,936] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:30:00,938] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:30:00,948] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:30:01,120] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:30:01,123] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:30:01,128] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:30:01,131] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:30:01,331] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:30:01,373] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:30:01,377] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:30:01,380] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:30:01,668] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:30:01,673] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 21:30:01,674] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2130-diag.zip Collecting
logs from 172.23.123.206 [2024-02-01 21:30:01,675] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2130-diag.zip [2024-02-01 21:30:01,703] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:30:01,707] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 21:30:01,710] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2130-diag.zip Collecting logs from 172.23.123.207 [2024-02-01 21:30:01,712] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2130-diag.zip [2024-02-01 21:31:51,951] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:31:52,130] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2130-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 21:31:52,532] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2130-diag.zip [2024-02-01 21:31:52,582] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:31:52,846] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:31:52,988] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2130-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 21:31:53,417] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2130-diag.zip [2024-02-01 21:31:53,473] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:32:27,365] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:32:27,551] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2130-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 21:32:27,892] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2130-diag.zip [2024-02-01 21:32:27,941] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:32:52,348] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:32:52,529] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2130-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 21:32:52,847] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2130-diag.zip [2024-02-01 21:32:52,898] - [remote_util:3401] INFO - command executed successfully with root summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 18 failures so far... gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple 
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_18 Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception File "lib/tasks/task.py", line 898, in check (status, progress) = self.rest._rebalance_status_and_progress() File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress raise RebalanceFailedException(msg) membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed FAIL ====================================================================== FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ---------------------------------------------------------------------- Traceback (most recent call last): File "pytests/basetestcase.py", line 374, in setUp services=self.services) File "lib/couchbase_helper/cluster.py", line 502, in rebalance return _task.result(timeout) File "lib/tasks/future.py", line 160, in result return self.__get_result() File "lib/tasks/future.py", line 112, in __get_result raise self._exception membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/basetestcase.py", line 391, in setUp self.fail(e) File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail raise self.failureException(msg) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed During handling of the above exception, another exception occurred: Traceback (most recent call last): File "pytests/gsi/collections_plasma.py", line 111, in setUp super(PlasmaCollectionsTests, self).setUp() File "pytests/gsi/base_gsi.py", line 43, in setUp super(BaseSecondaryIndexingTests, self).setUp() File "pytests/gsi/newtuq.py", line 11, in setUp super(QueryTests, self).setUp() File "pytests/basetestcase.py", line 485, in setUp self.fail(e) AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed ---------------------------------------------------------------------- Ran 1 test in 147.157s FAILED (failures=1) test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... 
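Every failure in this run traces back to the same rebalance exit reason, `{old_indexes_cleanup_failed, [{'ns_1@172.23.123.206',{error,eexist}}]}` (the node's old index directory still exists after the cleanup attempt). When triaging many such logs it can help to pull the failing node and POSIX error atom out of the UI log text; a rough sketch, with a regex tuned only to this particular message shape:

```python
import re

def parse_cleanup_failure(text):
    # Extract (node, error_atom) pairs from an
    # old_indexes_cleanup_failed rebalance exit reason
    if 'old_indexes_cleanup_failed' not in text:
        return []
    return re.findall(r"\{'([^']+)',\{error,(\w+)\}\}", text)
```

Applied to the message logged above, this yields `('ns_1@172.23.123.206', 'eexist')`.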
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_19

./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=2,percent_update=30,percent_delete=10,system_failure=disk_failure,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=10,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True

Test Input params:
{'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '2', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_failure', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '10', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 19, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_19'}

[2024-02-01 21:32:52,919] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:32:53,051] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:32:53,193] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:32:53,507] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:32:53,530] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 21:32:53,576] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:32:53,577] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #19 test_system_failure_create_drop_indexes_simple==============
[2024-02-01 21:32:53,577] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 21:32:53,607] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:32:53,607] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:32:53,637] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:32:53,638] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:32:53,638] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 21:32:53,666] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 21:32:53,666] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4250000044703484, 'mem_free': 15757180928, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:32:53,666] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3500000014901161, 'mem_free': 15732432896, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:32:53,667] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.012499991804361, 'mem_free': 15538556928, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:32:53,667] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 21:32:53,670] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:32:53,810] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:32:53,945] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:32:54,252] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:32:54,258] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:32:54,359] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:32:54,500] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:32:54,769] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:32:54,775] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:32:54,879] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:32:56,418] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:32:56,730] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:32:56,732] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:32:57,877] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:32:58,017] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:32:58,325] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:03,976] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 21:33:03,977] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 21:33:03,978] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 21:33:04,011] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 21:33:04,011] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 21:33:04,044] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 21:33:04,045] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 21:33:13,876] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 21:33:13,881] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:33:14,023] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:33:14,162] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:14,473] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
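The "Test Input params" dict above is built from the `-p` and `-t` comma-separated `key=value` strings in the ./testrunner invocation, with every value kept as a string. A minimal sketch of that folding (illustrative only; the real parser also merges the .ini file and decides precedence between `-p` and `-t` keys, which is why nodes_init appears as '3' here despite `-t` passing 4):

```python
# Sketch: fold a testrunner-style "k1=v1,k2=v2,..." string into a dict.
# Values stay strings, mirroring the quoted values in the log's dict.

def parse_params(spec: str) -> dict:
    params = {}
    for pair in spec.split(','):
        if '=' in pair:
            key, value = pair.split('=', 1)  # split once: values may contain '='
            params[key] = value
    return params

sample = "bucket_size=5000,defer_build=False,num_collections=2"
parsed = parse_params(sample)
assert parsed['bucket_size'] == '5000'
assert parsed['defer_build'] == 'False'
```

Note that a value containing a comma (none appears in this run) would break this naive split; the sketch assumes comma-free values.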
[2024-02-01 21:33:14,514] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:33:14,650] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:33:14,786] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:15,104] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:15,161] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:15,334] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:33:15,334] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 21:33:16,598] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:16,599] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:16,614] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:33:16,615] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:33:16,623] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:16,624] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:33:16,673] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:16,679] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:33:16,819] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:33:16,958] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:17,268] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:17,325] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:17,325] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:17,382] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:33:17,557] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:33:17,558] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 21:33:17,571] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:17,571] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:33:22,572] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:33:22,588] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:22,588] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:33:22,589] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:22,646] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2880104
[2024-02-01 21:33:22,646] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:33:22,651] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:33:22,824] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:33:23,024] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:23,341] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:23,383] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:33:23,552] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:33:23,695] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:24,009] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:24,079] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:24,265] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:33:24,266] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 21:33:26,468] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:26,469] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:26,486] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:33:26,487] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:33:26,534] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
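A single `rm -rf` failing with "Directory not empty", as on 172.23.123.206 above, typically means something was still writing into the tree mid-delete (here, plausibly the indexer flushing shard files before it fully stopped). A hedged sketch of a retry loop that tolerates this; the demo path is illustrative, not the path from the log:

```shell
#!/bin/sh
# Sketch: retry a recursive delete a few times, since a directory
# being written to concurrently can make one "rm -rf" pass fail
# with "Directory not empty". Demo target path is hypothetical.
target=${1:-/tmp/cleanup_demo}
mkdir -p "$target/sub" && touch "$target/sub/f"
for i in 1 2 3; do
    rm -rf "$target"/* 2>/dev/null && break
    sleep 1
done
# Report success only if the target is now empty.
[ -z "$(ls -A "$target" 2>/dev/null)" ] && echo "clean"
```

In the harness's case the simpler fix is usually to confirm beam.smp (and the indexer) have actually exited before deleting, which the surrounding "Checking for process beam.smp" steps attempt.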
[2024-02-01 21:33:26,535] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 21:33:26,535] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 21:33:26,536] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 21:33:26,537] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 21:33:26,537] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 21:33:26,538] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 21:33:26,538] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:33:26,585] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:26,589] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:33:26,727] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:33:26,880] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:27,180] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:27,240] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:27,240] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:27,297] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:33:27,467] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:33:27,468] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 21:33:27,481] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:27,481] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:33:32,486] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:33:32,504] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:32,505] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:33:32,505] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:32,565] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3988307
[2024-02-01 21:33:32,566] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:33:32,571] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:33:32,710] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:33:32,908] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:33,228] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:33,270] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:33:33,411] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:33:33,545] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:33,860] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:33,916] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:34,085] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:33:34,085] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 21:33:36,266] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:36,267] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:36,284] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:33:36,284] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:33:36,291] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:36,292] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:33:36,343] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:36,347] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:33:36,491] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:33:36,631] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:36,952] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:37,010] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:37,010] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:37,069] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:33:37,250] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:33:37,251] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 21:33:37,263] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:37,265] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:33:42,272] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:33:42,287] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:42,288] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:33:42,288] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:42,344] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3338096
[2024-02-01 21:33:42,345] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:33:42,348] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:33:42,527] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:33:42,731] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:43,048] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:43,089] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:33:43,267] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:33:43,506] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:43,824] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:43,885] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:44,028] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:33:44,029] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 21:33:45,451] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:45,452] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:45,468] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:33:45,470] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:33:45,477] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:45,477] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:33:45,530] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:45,535] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:33:45,674] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:33:45,818] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:33:46,138] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:33:46,198] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:33:46,198] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:46,256] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:33:46,435] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:33:46,436] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 21:33:46,453] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:46,454] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:33:51,653] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:33:52,141] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:33:52,142] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:33:52,240] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:33:52,257] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3340641
[2024-02-01 21:33:52,258] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:33:52,274] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:33:52,323] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:33:52,335] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:33:52,344] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:33:55,349] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:34:01,367] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 21:34:02,137] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:34:02,223] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:34:02,236] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15808339968, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
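The 404 b'"unknown pool"' responses and Errno 111 refusals above are expected while the freshly restarted nodes are still uninitialized or not yet listening; the harness simply keeps polling /pools/default until initialization succeeds. A hedged sketch of classifying those poll outcomes (illustrative helper, not the actual on_prem_rest_client code):

```python
# Sketch: distinguish the three poll outcomes visible in this log when
# hitting /pools/default after a node restart.

def classify_pools_default(status_code=None, body=b'', conn_refused=False):
    if conn_refused:
        return 'not_listening'       # e.g. [Errno 111] Connection refused
    if status_code == 404 and b'unknown pool' in body:
        return 'uninitialized'       # node up, cluster not set up yet
    if status_code == 200:
        return 'ready'
    return 'unknown'

assert classify_pools_default(conn_refused=True) == 'not_listening'
assert classify_pools_default(404, b'"unknown pool"') == 'uninitialized'
assert classify_pools_default(200, b'{}') == 'ready'
```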
[2024-02-01 21:34:02,240] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:34:02,242] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:34:02,253] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 21:34:02,255] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 21:34:02,294] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:34:02,295] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:34:02,451] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:34:02,507] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:34:02,688] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:34:02,840] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:34:03,166] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:34:03,168] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:34:03,239] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:34:03,240] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:34:03,256] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:34:03,269] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:34:03,285] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:34:03,344] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:34:03,345] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:34:03,350] - [task:166] INFO - {'uptime': '34', 'memoryTotal': 16747913216, 'memoryFree': 15773659136, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:34:03,354] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:34:03,355] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:34:03,364] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:34:03,364] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:34:03,521] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:34:03,524] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:34:03,700] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:34:03,862] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:34:04,177] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:34:04,178] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:34:04,250] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:34:04,251] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:34:04,267] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:34:04,281] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:34:04,297] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:34:04,353] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:34:04,354] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:34:04,360] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15745626112, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:34:04,363] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:34:04,364] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:34:04,372] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:34:04,373] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:34:04,533] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:34:04,536] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:34:04,708] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:34:04,857] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:34:05,133] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:34:05,136] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:34:05,218] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:34:05,218] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:34:05,237] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:34:05,252] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:34:05,270] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:34:05,328] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:34:05,329] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:34:05,335] - [task:166] INFO - {'uptime': '15', 'memoryTotal': 16747917312, 'memoryFree': 15743676416, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:34:05,339] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:34:05,341] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:34:05,349] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:34:05,350] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:34:05,509] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:34:05,512] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:34:05,654] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:34:05,793] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:34:06,079] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:34:06,081] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:34:06,152] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:34:06,152] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:34:06,167] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:34:06,181] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:34:06,196] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:34:06,245] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:34:06,310] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:34:06,323] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:34:06,536] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:34:11,540] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:34:11,599] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:34:11,629] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:34:12,253] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:34:12,287] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:34:22,337] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.05 seconds [2024-02-01 21:34:22,339] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:34:36,957] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:34:36,995] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:34:47,028] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.03 seconds [2024-02-01 21:34:47,029] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:35:01,292] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:35:01,292] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:35:01,323] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:35:11,465] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:35:21,492] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:35:21,530] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:35:21,530] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706852111463, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = df8e98187498d94f6621809d525d2fda', 'serverTime': '2024-02-01T21:35:11.463Z'} [2024-02-01 21:35:21,531] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706852111432, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:35:11.432Z'} [2024-02-01 21:35:21,531] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706852111417, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = df8e98187498d94f6621809d525d2fda", 'serverTime': '2024-02-01T21:35:11.417Z'} [2024-02-01 21:35:21,531] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706852111271, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:35:11.271Z'} [2024-02-01 21:35:21,532] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706852111267, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:35:11.267Z'} [2024-02-01 21:35:21,532] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706852101484, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:35:01.484Z'} [2024-02-01 21:35:21,532] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706852101268, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:35:01.268Z'} [2024-02-01 21:35:21,533] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706852101254, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:35:01.254Z'} [2024-02-01 21:35:21,533] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706852101233, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:35:01.233Z'} [2024-02-01 21:35:21,533] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706852097972, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T21:34:57.972Z'} Thu Feb 1 21:35:21 2024 Cluster instance shutdown with force Thu Feb 1 21:35:21 2024 [2024-02-01 21:35:21,592] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:35:21,595] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:35:21,598] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:35:21,603] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:35:21,744] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:35:21,772] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:35:21,778] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:35:21,784] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:35:21,950] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:35:21,995] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:35:22,003] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:35:22,006] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:35:22,294] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.206 [2024-02-01 21:35:22,295] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2135-diag.zip [2024-02-01 21:35:22,313] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.157 [2024-02-01 21:35:22,315] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2135-diag.zip [2024-02-01 21:35:22,325] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.207 [2024-02-01 21:35:22,327] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2135-diag.zip [2024-02-01 21:35:22,335] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.160 [2024-02-01 21:35:22,337] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2135-diag.zip [2024-02-01 21:37:12,639] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:37:12,816] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2135-diag.zip Downloading zipped logs from 172.23.123.157 [2024-02-01 21:37:13,241] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2135-diag.zip [2024-02-01 21:37:13,290] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:37:13,819] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:37:14,000] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2135-diag.zip Downloading zipped logs from 172.23.123.206 [2024-02-01 21:37:14,442] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2135-diag.zip [2024-02-01 21:37:14,491] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:37:48,075] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:37:48,159] - [remote_util:1348] INFO - found the 
file /root/172.23.123.160-20240201-2135-diag.zip Downloading zipped logs from 172.23.123.160 [2024-02-01 21:37:48,466] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2135-diag.zip [2024-02-01 21:37:48,517] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:13,130] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:13,321] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2135-diag.zip Downloading zipped logs from 172.23.123.207 [2024-02-01 21:38:13,725] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2135-diag.zip [2024-02-01 21:38:13,776] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 19
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_19
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 148.653s

FAILED (failures=1)
test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests) ... 
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_20 ./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=5000,num_items_in_collection=10000000,num_scopes=1,num_collections=2,percent_update=30,percent_delete=10,system_failure=disk_full,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=10,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '2', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'disk_full', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '10', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 
'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 20, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_20'} [2024-02-01 21:38:13,848] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:38:13,994] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:38:14,151] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:14,468] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:14,490] - [on_prem_rest_client:69] INFO - -->is_ns_server_running? [2024-02-01 21:38:14,532] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:38:14,533] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #20 test_system_failure_create_drop_indexes_simple============== [2024-02-01 21:38:14,533] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ============== [2024-02-01 21:38:14,562] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:38:14,562] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:38:14,592] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:38:14,593] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:38:14,593] - [basetestcase:2701] INFO - cannot find service node index in cluster [2024-02-01 21:38:14,621] - [basetestcase:634] INFO - ------- Cluster statistics ------- [2024-02-01 21:38:14,621] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': 
['index'], 'cpu_utilization': 0.4749999940395355, 'mem_free': 15738474496, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:38:14,622] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3999999910593033, 'mem_free': 15737847808, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:38:14,622] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.899999987334013, 'mem_free': 15520751616, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384} [2024-02-01 21:38:14,622] - [basetestcase:637] INFO - --- End of cluster statistics --- [2024-02-01 21:38:14,629] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:38:14,774] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:38:14,907] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:15,177] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:15,182] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:38:15,317] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:38:15,458] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:15,770] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:15,775] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:38:15,912] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:38:16,061] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True 
[2024-02-01 21:38:16,337] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:16,346] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:38:16,483] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:38:16,643] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:16,961] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:23,846] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED [2024-02-01 21:38:23,846] - [basetestcase:806] INFO - closing all ssh connections [2024-02-01 21:38:23,893] - [basetestcase:811] INFO - closing all memcached connections Cluster instance shutdown with force [2024-02-01 21:38:23,928] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes' [2024-02-01 21:38:23,928] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ============== [2024-02-01 21:38:23,959] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256 [2024-02-01 21:38:23,960] - [basetestcase:199] INFO - Building docker image with java sdk client OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0 [2024-02-01 21:38:34,286] - [basetestcase:229] INFO - initializing cluster [2024-02-01 21:38:34,293] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:38:34,435] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:38:34,637] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:34,948] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 
[2024-02-01 21:38:34,980] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:38:35,078] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:38:35,223] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:35,534] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:35,594] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:38:35,720] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:38:35,721] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service [2024-02-01 21:38:36,990] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:36,991] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:37,012] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:38:37,014] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:38:37,022] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:37,022] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:38:37,073] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:37,081] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:38:37,219] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:38:37,362] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:37,671] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, 
distribution_version: debian 11 [2024-02-01 21:38:37,734] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:38:37,736] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:37,800] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:38:37,931] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:38:37,931] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service [2024-02-01 21:38:37,942] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:37,942] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:38:42,948] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:38:43,037] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:43,039] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:38:43,039] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:43,096] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2885605 [2024-02-01 21:38:43,096] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:38:43,099] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:38:43,231] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:38:43,432] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:43,749] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:43,790] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with 
username:root, attempt#1 of 5 [2024-02-01 21:38:43,932] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:38:44,076] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:44,388] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:44,451] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:38:44,625] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:38:44,626] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service [2024-02-01 21:38:46,871] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:46,871] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:46,883] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:38:46,884] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:38:46,937] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ... 
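The `rm -rf .../data/*` failures above ("Directory not empty") are a classic ENOTEMPTY race: the indexer on 172.23.123.206 was still flushing shard files while the removal walked the tree, so directories were repopulated between the unlink of their contents and the rmdir. A short retry after the writer quiesces normally succeeds. A minimal sketch of such a retrying wipe (the helper name `wipe_dir_contents` is ours, not part of testrunner):

```python
import os
import shutil
import time

def wipe_dir_contents(path, attempts=3, delay=0.1):
    """Remove everything under `path`, retrying on ENOTEMPTY races.

    `rm -rf <dir>/*` reports "Directory not empty" when a concurrent
    writer (here: the still-running indexer) recreates entries while
    the removal is in flight; retrying usually succeeds once the
    writer stops.  Illustrative helper only.
    """
    for _ in range(attempts):
        try:
            for entry in os.listdir(path):
                full = os.path.join(path, entry)
                if os.path.isdir(full) and not os.path.islink(full):
                    shutil.rmtree(full)
                else:
                    os.remove(full)
            return True  # directory is now empty
        except OSError:
            time.sleep(delay)  # writer may still be active; retry
    return False
```

Note the log shows the harness proceeding despite the failed wipe, which is what later leaves stale index directories behind for the rebalance to trip over.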
[2024-02-01 21:38:46,938] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty [2024-02-01 21:38:46,938] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty [2024-02-01 21:38:46,938] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty [2024-02-01 21:38:46,938] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty [2024-02-01 21:38:46,938] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty [2024-02-01 21:38:46,938] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty [2024-02-01 21:38:46,939] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:38:46,984] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:46,986] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:38:47,115] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:38:47,244] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:47,504] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:47,561] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:38:47,562] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:47,617] - 
[remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:38:47,794] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:38:47,794] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service [2024-02-01 21:38:47,808] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:47,809] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:38:52,815] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:38:52,832] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:52,833] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:38:52,833] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:52,891] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3993686 [2024-02-01 21:38:52,891] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:38:52,895] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:38:53,039] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:38:53,239] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:53,558] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:53,600] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:38:53,738] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:38:53,890] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:54,166] - [remote_util:3685] INFO 
- extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:54,224] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:38:54,343] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:38:54,343] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service [2024-02-01 21:38:56,671] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:56,672] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:56,690] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:38:56,692] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:38:56,699] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:56,700] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:38:56,750] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:56,755] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:38:56,895] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:38:57,042] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:38:57,355] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:38:57,415] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:38:57,416] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:38:57,474] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:38:57,651] - [remote_util:3982] INFO - 
Running systemd command on this server [2024-02-01 21:38:57,651] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service [2024-02-01 21:38:57,663] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:38:57,664] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:39:02,670] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:39:02,685] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:02,686] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:39:02,686] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:39:02,742] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3343417 [2024-02-01 21:39:02,743] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:39:02,750] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:39:04,102] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:39:04,317] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:04,640] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:39:04,679] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:39:04,855] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:39:04,993] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:05,309] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 
21:39:05,374] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:39:05,553] - [remote_util:3942] INFO - Running systemd command on this server [2024-02-01 21:39:05,554] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service [2024-02-01 21:39:06,843] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:06,844] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:39:06,902] - [basetestcase:2534] INFO - Couchbase stopped [2024-02-01 21:39:06,903] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/* [2024-02-01 21:39:06,910] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:06,911] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/* [2024-02-01 21:39:06,961] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:06,965] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:39:07,143] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:39:07,269] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:07,534] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:39:07,590] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/ VERSION.txt exists [2024-02-01 21:39:07,591] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:39:07,646] - [remote_util:3961] INFO - Starting couchbase server [2024-02-01 21:39:07,765] - [remote_util:3982] INFO - Running systemd command on this server [2024-02-01 21:39:07,765] - [remote_util:3352] INFO - running 
command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:39:07,778] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:07,779] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:39:12,784] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:39:12,799] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:12,800] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:39:12,801] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:39:12,863] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3345821 [2024-02-01 21:39:12,864] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:39:12,871] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:39:12,883] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:39:12,896] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:39:12,909] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:39:15,914] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:39:21,925] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:39:21,996] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:39:21,997] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 21:39:22,002] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15801835520, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 
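The burst of `GET /pools/default` errors above is expected at this point: a node that answers 404 with `"unknown pool"` is healthy but not yet provisioned (its config was just wiped), while `[Errno 111] Connection refused` means ns_server is still starting and the poll simply retries. A sketch of that classification (function and label names are ours):

```python
def classify_pools_default(status_code=None, body=b"", refused=False):
    """Interpret a GET /pools/default probe during cluster bring-up.

    Illustrative mapping: 404 '"unknown pool"' means the node is up
    but uninitialized (normal right after a config wipe); a refused
    TCP connect means ns_server is still booting, so keep polling.
    """
    if refused:
        return "starting"        # [Errno 111] Connection refused -> retry
    if status_code == 404 and b"unknown pool" in body:
        return "uninitialized"   # node answers, cluster not yet provisioned
    if status_code == 200:
        return "initialized"     # pool exists; node already in a cluster
    return "unknown"
```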
[2024-02-01 21:39:22,005] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:39:22,006] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:39:22,015] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 21:39:22,016] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 21:39:22,052] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:39:22,053] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 21:39:22,208] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:39:22,212] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:39:22,384] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:39:22,521] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:22,854] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:39:22,856] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
[2024-02-01 21:39:22,927] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:22,928] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:39:22,945] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:39:22,959] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:39:22,975] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:39:23,031] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:39:23,032] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:39:23,037] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15763595264, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:39:23,043] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic 
QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:39:23,044] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:39:23,053] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:39:23,054] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:39:23,204] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:39:23,207] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:39:23,346] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:39:23,485] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:23,809] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:39:23,811] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:39:23,882] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:23,883] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:39:23,900] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:39:23,914] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
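The two version reports in the log agree: the `clusterCompatibility: 458758` from nodes/self packs the compat version as major * 0x10000 + minor, so 458758 is 7.6, matching the `[7,6]` that `cluster_compat_mode:get_compat_version().` returns over diag/eval. A one-line decode (helper name ours):

```python
def decode_cluster_compat(value):
    """Split ns_server's clusterCompatibility integer into (major, minor).

    The value packs the version as major * 0x10000 + minor, so the
    458758 reported by nodes/self decodes to (7, 6) -- consistent with
    the [7,6] from cluster_compat_mode:get_compat_version().
    """
    return divmod(value, 0x10000)
```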
[2024-02-01 21:39:23,931] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:39:23,983] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:39:23,984] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:39:23,988] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15758446592, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:39:23,992] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:39:23,993] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:39:24,002] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:39:24,003] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:39:24,146] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:39:24,150] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:39:24,290] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:39:24,430] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:24,745] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:39:24,747] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:39:24,815] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:24,817] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:39:24,834] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:39:24,849] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
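Every REST call in this log carries `Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==`, which is simply HTTP Basic auth: base64 of `user:password`, here `Administrator:password`. A sketch of producing the header (helper name ours):

```python
import base64

def basic_auth_header(user, password):
    """Reproduce the Authorization header the REST client sends.

    HTTP Basic auth is base64("user:password"); the token logged
    throughout this run decodes back to Administrator:password.
    Illustrative helper only.
    """
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"
```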
[2024-02-01 21:39:24,866] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:39:24,924] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:39:24,925] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:39:24,930] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15728680960, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:39:24,934] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:39:24,935] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:39:24,944] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:39:24,944] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:39:25,094] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:39:25,098] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:39:25,237] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:39:25,377] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:39:25,651] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:39:25,653] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:39:25,724] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:39:25,725] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:39:25,741] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:39:25,755] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:39:25,772] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:39:25,824] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:39:25,886] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:39:25,887] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."'
[2024-02-01 21:39:26,079] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 21:39:31,084] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 21:39:31,134] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 21:39:31,168] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:39:31,831] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 21:39:31,863] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 21:39:41,903] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 21:39:41,903] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 21:39:56,010] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 21:39:56,045] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 21:40:06,087] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 21:40:06,087] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 21:40:20,259] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:40:20,260] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:40:20,291] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 21:40:30,427] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 21:40:40,479] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
[2024-02-01 21:40:40,653] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 21:40:40,653] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706852430425, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = dec011c59f7822dd30b24b72b5149478', 'serverTime': '2024-02-01T21:40:30.425Z'}
[2024-02-01 21:40:40,654] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706852430394, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:40:30.394Z'}
[2024-02-01 21:40:40,654] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706852430379, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = dec011c59f7822dd30b24b72b5149478", 'serverTime': '2024-02-01T21:40:30.379Z'}
[2024-02-01 21:40:40,655] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706852430233, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:40:30.233Z'}
[2024-02-01 21:40:40,655] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706852430228, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:40:30.228Z'}
[2024-02-01 21:40:40,655] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706852420450, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:40:20.450Z'}
[2024-02-01 21:40:40,656] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706852420228, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:40:20.228Z'}
[2024-02-01 21:40:40,656] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706852420215, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:40:20.215Z'}
[2024-02-01 21:40:40,656] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706852420186, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:40:20.186Z'}
[2024-02-01 21:40:40,657] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706852416951, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T21:40:16.951Z'}
Thu Feb 1 21:40:40 2024
Cluster instance shutdown with force
Thu Feb 1 21:40:40 2024
[2024-02-01 21:40:40,783] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:40:40,786] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:40:40,793] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:40:40,794] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:40:42,263] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:40:42,267] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:40:42,291] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:40:42,332] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:40:42,674] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:40:42,690] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:40:42,693] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:40:42,718] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:40:43,015] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.160
[2024-02-01 21:40:43,026] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2140-diag.zip
[2024-02-01 21:40:43,028] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 21:40:43,029] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2140-diag.zip
[2024-02-01 21:40:43,037] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:40:43,042] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.157
[2024-02-01 21:40:43,042] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2140-diag.zip
Collecting logs from 172.23.123.206
[2024-02-01 21:40:43,045] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2140-diag.zip
[2024-02-01 21:42:33,139] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:42:33,324] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2140-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 21:42:33,897] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2140-diag.zip
[2024-02-01 21:42:33,948] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:42:34,440] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:42:34,631] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2140-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 21:42:35,096] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2140-diag.zip
[2024-02-01 21:42:35,145] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:05,309] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:05,490] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2140-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 21:43:05,867] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2140-diag.zip
[2024-02-01 21:43:05,917] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:33,985] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:34,161] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2140-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 21:43:34,498] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2140-diag.zip
[2024-02-01 21:43:34,546] - [remote_util:3401] INFO - command executed successfully with root
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 20
failures so far...
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_20
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_system_failure_create_drop_indexes_simple (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

----------------------------------------------------------------------
Ran 1 test in 146.890s

FAILED (failures=1)
test_shard_json_corruption (gsi.collections_plasma.PlasmaCollectionsTests) ...
Logs will be stored at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_21
./testrunner -i /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini -p bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000 -t gsi.collections_plasma.PlasmaCollectionsTests.test_shard_json_corruption,default_bucket=false,defer_build=False,java_sdk_client=True,nodes_init=4,services_init=kv:n1ql-kv:n1ql-index,all_collections=True,bucket_size=1000,num_items_in_collection=10000000,num_scopes=1,num_collections=1,percent_update=30,percent_delete=10,system_failure=shard_json_corruption,moi_snapshot_interval=150000,skip_cleanup=True,num_pre_indexes=1,num_of_indexes=1,GROUP=SIMPLE,simple_create_index=True
Test Input params: {'default_bucket': 'false', 'defer_build': 'False', 'java_sdk_client': 'True', 'nodes_init': '3', 'services_init': 'kv:n1ql-kv:n1ql-index', 'all_collections': 'True', 'bucket_size': '5000', 'num_items_in_collection': '10000000', 'num_scopes': '1', 'num_collections': '1', 'percent_update': '30', 'percent_delete': '10', 'system_failure': 'shard_json_corruption', 'moi_snapshot_interval': '150000', 'skip_cleanup': 'True', 'num_pre_indexes': '1', 'num_of_indexes': '1', 'GROUP': 'SIMPLE', 'simple_create_index': 'True', 'ini': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/testexec.25952.ini', 'cluster_name': 'testexec.25952', 'spec': 'py-gsi-plasma', 'conf_file': 'conf/gsi/py-gsi-plasma.conf', 'reset_services': 'True', 'test_timeout': '240', 'get-cbcollect-info': 'True', 'exclude_keywords': 'messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*', 'sirius_url': 'http://172.23.120.103:4000', 'num_nodes': 4, 'case_number': 21, 'total_testcases': 21, 'last_case_fail': 'True', 'teardown_run': 'False', 'logs_folder': '/data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_21'}
[2024-02-01 21:43:34,718] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:43:34,819] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:43:34,958] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:35,271] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:35,301] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 21:43:35,350] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:43:35,351] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #21 test_shard_json_corruption==============
[2024-02-01 21:43:35,352] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 21:43:35,381] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:43:35,382] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:43:35,413] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:43:35,414] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:43:35,415] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 21:43:35,444] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 21:43:35,444] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4249999858438969, 'mem_free': 15749812224, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:43:35,444] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.4749999940395355, 'mem_free': 15741476864, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:43:35,445] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 4.400000013411045, 'mem_free': 15528341504, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:43:35,445] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 21:43:35,456] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:43:35,587] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:43:35,732] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:36,042] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:36,049] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:43:36,150] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:43:36,275] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:36,584] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:36,587] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:43:36,687] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:43:36,827] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:37,142] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:37,146] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:43:37,293] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:43:37,432] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:38,026] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:44,769] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 21:43:44,770] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 21:43:44,912] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 21:43:44,947] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 21:43:44,948] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 21:43:44,981] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 21:43:44,983] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 21:43:55,624] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 21:43:55,636] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:43:55,779] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:43:55,981] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:56,297] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:56,344] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:43:56,529] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:43:56,672] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:56,990] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:57,050] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:43:57,185] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:43:57,186] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 21:43:58,597] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:58,599] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:43:58,615] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:43:58,616] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:43:58,625] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:58,625] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:43:58,675] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:58,679] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:43:58,856] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:43:58,994] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:43:59,315] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:43:59,375] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:43:59,376] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:43:59,433] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:43:59,567] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:43:59,567] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 21:43:59,581] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:43:59,583] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:44:04,588] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:44:04,604] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:04,604] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:44:04,605] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:04,664] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2891097
[2024-02-01 21:44:04,664] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:44:04,669] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:44:04,840] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:44:05,044] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:05,313] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:05,357] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:44:05,540] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:44:05,685] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:05,994] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:06,055] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:44:06,187] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:44:06,187] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 21:44:08,480] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:08,481] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:08,498] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:44:08,500] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:44:08,553] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
[2024-02-01 21:44:08,554] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 21:44:08,554] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 21:44:08,554] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 21:44:08,556] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 21:44:08,556] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 21:44:08,557] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 21:44:08,557] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:44:08,606] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:08,612] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:44:08,784] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:44:08,923] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:09,191] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:09,251] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:44:09,252] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:09,310] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:44:09,488] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:44:09,489] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 21:44:09,502] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:09,503] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:44:14,508] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:44:14,526] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:14,526] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:44:14,527] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:14,581] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 3999066
[2024-02-01 21:44:14,581] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:44:14,586] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:44:14,727] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:44:14,917] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:15,173] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:15,210] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:44:15,362] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:44:15,498] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:15,813] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:15,875] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:44:16,058] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:44:16,058] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 21:44:18,273] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:18,273] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:18,284] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:44:18,284] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:44:18,290] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:18,290] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:44:18,344] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:18,348] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:44:18,475] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:44:18,598] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:18,867] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:18,929] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:44:18,931] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:18,991] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:44:19,083] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:44:19,083] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 21:44:19,096] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:19,097] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:44:24,102] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:44:24,116] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:24,116] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:44:24,117] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:24,171] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3348719
[2024-02-01 21:44:24,172] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:44:24,176] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:44:24,354] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:44:24,551] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:24,819] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:24,853] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:44:24,988] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:44:25,119] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:25,375] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:25,428] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:44:25,533] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:44:25,533] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 21:44:26,847] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:26,847] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:26,864] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:44:26,865] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:44:26,873] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:26,873] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:44:26,923] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:44:26,927] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:44:27,099] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:44:27,240] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:44:27,512] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:44:27,570] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:44:27,570] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:44:27,630] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:44:27,814] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:44:27,815] - [remote_util:3352] INFO - running 
command.raw on 172.23.123.160: systemctl start couchbase-server.service [2024-02-01 21:44:27,828] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:44:27,828] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ... [2024-02-01 21:44:32,834] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server [2024-02-01 21:44:32,846] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:44:32,846] - [remote_util:3987] INFO - Couchbase server status: [] [2024-02-01 21:44:32,846] - [remote_util:150] INFO - Checking for process beam.smp on linux [2024-02-01 21:44:32,905] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3351025 [2024-02-01 21:44:32,906] - [basetestcase:2548] INFO - Couchbase started [2024-02-01 21:44:32,910] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:44:32,987] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:44:32,998] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: 
Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:44:33,006] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:44:36,011] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused [2024-02-01 21:44:42,025] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:44:43,025] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.207:8091/pools/default with status False: unknown pool [2024-02-01 21:44:43,029] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self [2024-02-01 21:44:43,036] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15809806336, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} 
[2024-02-01 21:44:43,039] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:44:43,042] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:44:43,049] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql']) [2024-02-01 21:44:43,050] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql [2024-02-01 21:44:43,085] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:44:43,085] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password [2024-02-01 21:44:43,230] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:44:43,233] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:44:43,379] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:44:43,513] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:44:43,802] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:44:43,805] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
[2024-02-01 21:44:43,871] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:44:43,871] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:44:43,887] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:44:43,901] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:44:43,916] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:44:43,970] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.206:8091/pools/default with status False: unknown pool [2024-02-01 21:44:43,971] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self [2024-02-01 21:44:43,976] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15766085632, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:44:43,980] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic 
QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:44:43,981] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:44:43,994] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:44:43,994] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password [2024-02-01 21:44:44,141] - [on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:44:44,142] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:44:44,243] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:44:44,390] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:44:44,704] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:44:44,706] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:44:44,774] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:44:44,775] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:44:44,793] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:44:44,808] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:44:44,825] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:44:44,881] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.157:8091/pools/default with status False: unknown pool [2024-02-01 21:44:44,883] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self [2024-02-01 21:44:44,887] - [task:166] INFO - {'uptime': '24', 'memoryTotal': 16747917312, 'memoryFree': 15775916032, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:44:44,891] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:44:44,892] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:44:44,900] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:44:44,901] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password [2024-02-01 21:44:45,064] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:44:45,067] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:44:45,240] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:44:45,385] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:44:45,694] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:44:45,696] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:44:45,768] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:44:45,769] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:44:45,785] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:44:45,801] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). 
[2024-02-01 21:44:45,817] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:44:45,875] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password http://172.23.123.160:8091/pools/default with status False: unknown pool [2024-02-01 21:44:45,876] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self [2024-02-01 21:44:45,881] - [task:166] INFO - {'uptime': '14', 'memoryTotal': 16747917312, 'memoryFree': 15733911552, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0} [2024-02-01 21:44:45,884] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password [2024-02-01 21:44:45,885] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560 [2024-02-01 21:44:45,892] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091 [2024-02-01 21:44:45,893] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password [2024-02-01 21:44:46,048] - 
[on_prem_rest_client:1210] INFO - --> status:True [2024-02-01 21:44:46,051] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:44:46,193] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:44:46,337] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:44:46,611] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 [2024-02-01 21:44:46,613] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2024-02-01 21:44:46,687] - [remote_util:3401] INFO - command executed successfully with root [2024-02-01 21:44:46,688] - [remote_util:5237] INFO - ['ok'] [2024-02-01 21:44:46,704] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:44:46,718] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version(). [2024-02-01 21:44:46,733] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma [2024-02-01 21:44:46,783] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 **** [2024-02-01 21:44:46,846] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2024-02-01 21:44:46,863] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2024-02-01 21:44:47,065] - [basetestcase:904] INFO - sleep for 5 secs. ... [2024-02-01 21:44:52,071] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user **** [2024-02-01 21:44:52,123] - [basetestcase:267] INFO - done initializing cluster [2024-02-01 21:44:52,159] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise [2024-02-01 21:44:52,787] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster [2024-02-01 21:44:52,820] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:45:02,863] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:45:02,864] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:45:17,448] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster [2024-02-01 21:45:17,484] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091 [2024-02-01 21:45:27,525] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds [2024-02-01 21:45:27,525] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance... [2024-02-01 21:45:41,865] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded [2024-02-01 21:45:41,866] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded [2024-02-01 21:45:41,895] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2024-02-01 21:45:52,024] - [on_prem_rest_client:1931] INFO - rebalance operation started [2024-02-01 21:46:02,049] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. 
You can try again.'} - rebalance failed [2024-02-01 21:46:02,076] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207: [2024-02-01 21:46:02,076] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706852752022, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = 973d0e417e7f43e39d7fea09ee16bc6b', 'serverTime': '2024-02-01T21:45:52.022Z'} [2024-02-01 21:46:02,076] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706852751991, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:45:51.991Z'} [2024-02-01 21:46:02,077] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706852751975, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = 973d0e417e7f43e39d7fea09ee16bc6b", 'serverTime': '2024-02-01T21:45:51.975Z'} [2024-02-01 21:46:02,077] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706852751846, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:45:51.846Z'} [2024-02-01 21:46:02,077] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 
'mb_master', 'tstamp': 1706852751841, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:45:51.841Z'} [2024-02-01 21:46:02,077] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706852742062, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:45:42.062Z'} [2024-02-01 21:46:02,077] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706852741841, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:45:41.841Z'} [2024-02-01 21:46:02,077] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706852741828, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:45:41.828Z'} [2024-02-01 21:46:02,078] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706852741806, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:45:41.806Z'} [2024-02-01 21:46:02,078] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706852738464, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. 
Tags: []", 'serverTime': '2024-02-01T21:45:38.464Z'} Thu Feb 1 21:46:02 2024 Cluster instance shutdown with force Thu Feb 1 21:46:02 2024 [2024-02-01 21:46:02,100] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5 [2024-02-01 21:46:02,101] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5 [2024-02-01 21:46:02,107] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5 [2024-02-01 21:46:02,109] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5 [2024-02-01 21:46:02,212] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root [2024-02-01 21:46:02,253] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root [2024-02-01 21:46:02,283] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root [2024-02-01 21:46:02,285] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root [2024-02-01 21:46:02,363] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:46:02,399] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:46:02,479] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:46:02,499] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True [2024-02-01 21:46:02,712] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting logs from 172.23.123.157 [2024-02-01 21:46:02,714] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: /opt/couchbase/bin/cbcollect_info 172.23.123.157-20240201-2146-diag.zip [2024-02-01 21:46:02,755] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11 Collecting 
logs from 172.23.123.160
[2024-02-01 21:46:02,762] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: /opt/couchbase/bin/cbcollect_info 172.23.123.160-20240201-2146-diag.zip
[2024-02-01 21:46:02,777] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.207
[2024-02-01 21:46:02,781] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: /opt/couchbase/bin/cbcollect_info 172.23.123.207-20240201-2146-diag.zip
[2024-02-01 21:46:02,797] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
Collecting logs from 172.23.123.206
[2024-02-01 21:46:02,799] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: /opt/couchbase/bin/cbcollect_info 172.23.123.206-20240201-2146-diag.zip
[2024-02-01 21:47:52,946] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:47:53,130] - [remote_util:1348] INFO - found the file /root/172.23.123.157-20240201-2146-diag.zip
Downloading zipped logs from 172.23.123.157
[2024-02-01 21:47:53,609] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -f /root/172.23.123.157-20240201-2146-diag.zip
[2024-02-01 21:47:53,660] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:47:54,306] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:47:54,476] - [remote_util:1348] INFO - found the file /root/172.23.123.206-20240201-2146-diag.zip
Downloading zipped logs from 172.23.123.206
[2024-02-01 21:47:54,957] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -f /root/172.23.123.206-20240201-2146-diag.zip
[2024-02-01 21:47:55,008] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:48:23,596] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:48:23,725] - [remote_util:1348] INFO - found the file /root/172.23.123.160-20240201-2146-diag.zip
Downloading zipped logs from 172.23.123.160
[2024-02-01 21:48:24,087] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -f /root/172.23.123.160-20240201-2146-diag.zip
[2024-02-01 21:48:24,137] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:48:53,476] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:48:53,650] - [remote_util:1348] INFO - found the file /root/172.23.123.207-20240201-2146-diag.zip
Downloading zipped logs from 172.23.123.207
[2024-02-01 21:48:54,063] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -f /root/172.23.123.207-20240201-2146-diag.zip
[2024-02-01 21:48:54,115] - [remote_util:3401] INFO - command executed successfully with root
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
  File "lib/tasks/task.py", line 898, in check
    (status, progress) = self.rest._rebalance_status_and_progress()
  File "lib/membase/api/on_prem_rest_client.py", line 2080, in _rebalance_status_and_progress
    raise RebalanceFailedException(msg)
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
FAIL
======================================================================
FAIL: test_shard_json_corruption (gsi.collections_plasma.PlasmaCollectionsTests)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/basetestcase.py", line 374, in setUp
    services=self.services)
  File "lib/couchbase_helper/cluster.py", line 502, in rebalance
    return _task.result(timeout)
  File "lib/tasks/future.py", line 160, in result
    return self.__get_result()
  File "lib/tasks/future.py", line 112, in __get_result
    raise self._exception
membase.api.exception.RebalanceFailedException: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/basetestcase.py", line 391, in setUp
    self.fail(e)
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/gsi/collections_plasma.py", line 111, in setUp
    super(PlasmaCollectionsTests, self).setUp()
  File "pytests/gsi/base_gsi.py", line 43, in setUp
    super(BaseSecondaryIndexingTests, self).setUp()
  File "pytests/gsi/newtuq.py", line 11, in setUp
    super(QueryTests, self).setUp()
  File "pytests/basetestcase.py", line 485, in setUp
    self.fail(e)
AssertionError: Rebalance Failed: {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
----------------------------------------------------------------------
Ran 1 test in 147.383s

FAILED (failures=1)
suite_tearDown (gsi.collections_plasma.PlasmaCollectionsTests) ...
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 21
failures so far...
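The `RebalanceFailedException` in the tracebacks above is raised by `_rebalance_status_and_progress` in `lib/membase/api/on_prem_rest_client.py` when ns_server reports a terminal `'none'` status together with an `errorMessage`. A minimal sketch of that check (the function body and names here are our simplification, not the library's actual code), driven by a payload shaped like the one in the traceback:

```python
class RebalanceFailedException(Exception):
    """Stand-in for membase.api.exception.RebalanceFailedException."""


def rebalance_status_and_progress(payload):
    """Sketch of the failure check: a 'none' status that carries an
    errorMessage means the rebalance terminated unsuccessfully."""
    if payload.get("status") == "none" and "errorMessage" in payload:
        raise RebalanceFailedException(
            f"Rebalance Failed: {payload} - rebalance failed")
    return payload.get("status"), payload.get("progress", 0)


# A payload like the one reported in the traceback above:
failed = {"status": "none",
          "errorMessage": "Rebalance failed. See logs for detailed reason. "
                          "You can try again."}
```

With this sketch, `rebalance_status_and_progress(failed)` raises, while an in-flight payload such as `{"status": "running", "progress": 0.5}` is returned unchanged as a `(status, progress)` tuple.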
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_shard_json_corruption
testrunner logs, diags and results are available under /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_19-54-58/test_21
*** Tests executed count: 21
Run after suite setup for gsi.collections_plasma.PlasmaCollectionsTests.test_shard_json_corruption
[2024-02-01 21:48:54,146] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:48:54,291] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:48:54,445] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:48:54,758] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:48:54,781] - [on_prem_rest_client:69] INFO - -->is_ns_server_running?
[2024-02-01 21:48:54,828] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:48:54,829] - [basetestcase:156] INFO - ============== basetestcase setup was started for test #21 suite_tearDown==============
[2024-02-01 21:48:54,829] - [collections_plasma:267] INFO - ============== PlasmaCollectionsTests tearDown has started ==============
[2024-02-01 21:48:54,859] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:48:54,860] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:48:54,892] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:48:54,892] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:48:54,892] - [basetestcase:2701] INFO - cannot find service node index in cluster
[2024-02-01 21:48:54,926] - [basetestcase:634] INFO - ------- Cluster statistics -------
[2024-02-01 21:48:54,927] - [basetestcase:636] INFO - 172.23.123.157:8091 => {'services': ['index'], 'cpu_utilization': 0.4124999977648258, 'mem_free': 15780429824, 'mem_total': 16747917312, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:48:54,927] - [basetestcase:636] INFO - 172.23.123.206:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 0.3175635261162597, 'mem_free': 15758360576, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:48:54,928] - [basetestcase:636] INFO - 172.23.123.207:8091 => {'services': ['kv', 'n1ql'], 'cpu_utilization': 3.937500007450581, 'mem_free': 15536324608, 'mem_total': 16747913216, 'swap_mem_used': 0, 'swap_mem_total': 1027600384}
[2024-02-01 21:48:54,928] - [basetestcase:637] INFO - --- End of cluster statistics ---
[2024-02-01 21:48:54,934] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:48:55,083] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:48:55,227] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:48:55,546] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:48:55,551] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:48:55,690] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:48:55,831] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:48:56,105] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:48:56,111] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:48:56,253] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:48:56,401] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:48:56,727] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:48:56,734] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:48:56,915] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:48:57,072] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:48:57,349] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:04,222] - [basetestcase:729] WARNING - CLEANUP WAS SKIPPED
[2024-02-01 21:49:04,223] - [basetestcase:806] INFO - closing all ssh connections
[2024-02-01 21:49:04,228] - [basetestcase:811] INFO - closing all memcached connections
Cluster instance shutdown with force
[2024-02-01 21:49:04,263] - [collections_plasma:272] INFO - 'PlasmaCollectionsTests' object has no attribute 'index_nodes'
[2024-02-01 21:49:04,263] - [collections_plasma:273] INFO - ============== PlasmaCollectionsTests tearDown has completed ==============
[2024-02-01 21:49:04,294] - [on_prem_rest_client:3587] INFO - Update internal setting magmaMinMemoryQuota=256
[2024-02-01 21:49:04,296] - [basetestcase:199] INFO - Building docker image with java sdk client
OpenJDK 64-Bit Server VM warning: ignoring option MaxPermSize=512m; support was removed in 8.0
[2024-02-01 21:49:15,060] - [basetestcase:229] INFO - initializing cluster
[2024-02-01 21:49:15,066] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:49:15,244] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:49:15,449] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:15,761] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:15,801] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:49:15,980] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:49:16,122] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:16,435] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:16,498] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:16,681] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:49:16,682] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl stop couchbase-server.service
[2024-02-01 21:49:18,004] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:18,005] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:18,022] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:49:18,022] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:49:18,030] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:18,031] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:49:18,085] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:18,089] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:49:18,191] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:49:18,330] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:18,645] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:18,702] - [remote_util:966] INFO - 172.23.123.207 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:18,704] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:18,757] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:49:18,932] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:49:18,933] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl start couchbase-server.service
[2024-02-01 21:49:18,945] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:18,945] - [remote_util:347] INFO - 172.23.123.207:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:49:23,950] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:49:23,964] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:23,965] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:49:23,966] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:24,024] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.207: with pid 2896594
[2024-02-01 21:49:24,025] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:49:24,029] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:49:24,175] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:49:24,376] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:24,686] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:24,731] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:49:24,831] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:49:24,977] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:25,296] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:25,365] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:25,548] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:49:25,548] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl stop couchbase-server.service
[2024-02-01 21:49:27,867] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:27,868] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:27,887] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:49:27,889] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:49:27,944] - [remote_util:3399] INFO - command executed with root but got an error ["rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty", "rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty", "rm: cannot remove '/opt/c ...
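The `rm: cannot remove ... Directory not empty` errors here are the classic symptom of a process (in this case most likely the indexer, given the `@2i` paths) still creating files while the tree is being deleted. One common mitigation, sketched below under our own assumptions (this is not testrunner's actual cleanup code), is to retry the recursive removal a few times so a racing writer eventually loses:

```python
import os
import shutil
import tempfile
import time


def remove_dir_with_retries(path, attempts=3, delay=0.1):
    """Retry rmtree a few times; a concurrent writer can make a single
    pass fail with 'Directory not empty', as in the log above."""
    for _ in range(attempts):
        try:
            shutil.rmtree(path)
            return True
        except OSError:
            time.sleep(delay)
    # Last resort: report whether the tree is actually gone.
    return not os.path.exists(path)


# Usage against a scratch directory shaped like the index data dir:
scratch = tempfile.mkdtemp()
os.makedirs(os.path.join(scratch, "@2i", "shards"))
print(remove_dir_with_retries(scratch))
```

Note the real fix in this run is ordering, not retries: the service is stopped first, so a retry loop would only paper over whatever was still writing.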
[2024-02-01 21:49:27,946] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard11012757916338547820': Directory not empty
[2024-02-01 21:49:27,946] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/shards/shard9204245758483166631': Directory not empty
[2024-02-01 21:49:27,946] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_#primary_17429042892267827000_0.index': Directory not empty
[2024-02-01 21:49:27,947] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/indexstats': Directory not empty
[2024-02-01 21:49:27,947] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/@2i/test_bucket_idx_test_scope_1_test_collection_1job_title0_906951289603245903_0.index': Directory not empty
[2024-02-01 21:49:27,947] - [remote_util:3132] ERROR - rm: cannot remove '/opt/couchbase/var/lib/couchbase/data/lost+found': Directory not empty
[2024-02-01 21:49:27,948] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:49:27,994] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:28,002] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:49:28,139] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:49:28,280] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:28,560] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:28,619] - [remote_util:966] INFO - 172.23.123.206 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:28,622] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:28,678] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:49:28,853] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:49:28,854] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl start couchbase-server.service
[2024-02-01 21:49:28,868] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:28,868] - [remote_util:347] INFO - 172.23.123.206:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:49:33,873] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:49:33,892] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:33,893] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:49:33,894] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:33,949] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.206: with pid 4004464
[2024-02-01 21:49:33,951] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:49:33,955] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:49:34,056] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:49:34,259] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:34,575] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:34,620] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:49:34,760] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:49:34,903] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:35,209] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:35,274] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:35,451] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:49:35,451] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl stop couchbase-server.service
[2024-02-01 21:49:37,678] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:37,681] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:37,698] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:49:37,699] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:49:37,708] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:37,708] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:49:37,759] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:37,763] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:49:37,863] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:49:38,002] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:38,317] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:38,376] - [remote_util:966] INFO - 172.23.123.157 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:38,376] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:38,436] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:49:38,610] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:49:38,611] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl start couchbase-server.service
[2024-02-01 21:49:38,624] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:38,624] - [remote_util:347] INFO - 172.23.123.157:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:49:43,630] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:49:43,645] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:43,645] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:49:43,647] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:43,706] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.157: with pid 3354021
[2024-02-01 21:49:43,707] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:49:43,712] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:49:44,407] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:49:44,609] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:44,921] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:44,963] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:49:45,105] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:49:45,253] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:45,526] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:45,589] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:45,731] - [remote_util:3942] INFO - Running systemd command on this server
[2024-02-01 21:49:45,731] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl stop couchbase-server.service
[2024-02-01 21:49:47,049] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:47,049] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:47,065] - [basetestcase:2534] INFO - Couchbase stopped
[2024-02-01 21:49:47,066] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/data/*
[2024-02-01 21:49:47,073] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:47,075] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: rm -rf /opt/couchbase/var/lib/couchbase/config/*
[2024-02-01 21:49:47,129] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:47,134] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:49:47,279] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:49:47,422] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:49:47,742] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:49:47,805] - [remote_util:966] INFO - 172.23.123.160 **** The linux version file /opt/couchbase/VERSION.txt exists
[2024-02-01 21:49:47,806] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:47,866] - [remote_util:3961] INFO - Starting couchbase server
[2024-02-01 21:49:48,050] - [remote_util:3982] INFO - Running systemd command on this server
[2024-02-01 21:49:48,051] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl start couchbase-server.service
[2024-02-01 21:49:48,063] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:48,064] - [remote_util:347] INFO - 172.23.123.160:sleep for 5 secs. waiting for couchbase server to come up ...
[2024-02-01 21:49:53,070] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: systemctl status couchbase-server.service | grep ExecStop=/opt/couchbase/bin/couchbase-server
[2024-02-01 21:49:53,086] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:49:53,086] - [remote_util:3987] INFO - Couchbase server status: []
[2024-02-01 21:49:53,086] - [remote_util:150] INFO - Checking for process beam.smp on linux
[2024-02-01 21:49:53,146] - [remote_util:169] INFO - process beam.smp is running on 172.23.123.160: with pid 3356212
[2024-02-01 21:49:53,146] - [basetestcase:2548] INFO - Couchbase started
[2024-02-01 21:49:53,151] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:49:53,175] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:49:53,187] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:49:53,197] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:49:56,202] - [on_prem_rest_client:1135] ERROR - socket error while connecting to http://172.23.123.160:8091/pools/default error [Errno 111] Connection refused
[2024-02-01 21:50:02,211] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 21:50:02,332] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.207:8091/pools/default with status False: unknown pool
[2024-02-01 21:50:02,332] - [task:161] INFO - server: ip:172.23.123.207 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:50:02,338] - [task:166] INFO - {'uptime': '39', 'memoryTotal': 16747913216, 'memoryFree': 15799263232, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.207:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.207', 'ip': '172.23.123.207', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
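The `unknown pool` 404s and `Connection refused` errors above are expected during re-initialization: a freshly wiped node answers `/pools/default` with 404 `"unknown pool"` until it is initialized, and refuses connections while ns_server is still starting. A hedged sketch of the polling pattern (the helper and the fake responses below are ours, not the testrunner API), treating both conditions as transient:

```python
def wait_for_pools_default(get, retries=5):
    """Poll /pools/default until the node reports an initialized pool.
    `get` is any callable returning (status_code, body) or raising
    ConnectionError; 404 'unknown pool' and connection errors are
    treated as transient, as in the log above."""
    for _ in range(retries):
        try:
            status, body = get()
        except ConnectionError:
            continue  # node still starting: [Errno 111] Connection refused
        if status == 200:
            return body
        # else: 404 "unknown pool" -- node up but not yet initialized
    return None


# Fake responses reproducing the sequence seen above for one node:
responses = iter([ConnectionError(), (404, '"unknown pool"'), (200, "{}")])
def fake_get():
    r = next(responses)
    if isinstance(r, Exception):
        raise r
    return r

print(wait_for_pools_default(fake_get))
```

The real client also backs off between attempts (note the ~3s then ~6s gaps between the 21:49:53, 21:49:56, and 21:50:02 retries against 172.23.123.160), which this sketch omits.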
[2024-02-01 21:50:02,340] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.207:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:50:02,340] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:50:02,345] - [on_prem_rest_client:1267] INFO - --> init_node_services(Administrator,password,172.23.123.207,8091,['kv', 'n1ql'])
[2024-02-01 21:50:02,345] - [on_prem_rest_client:1283] INFO - node/controller/setupServices params on 172.23.123.207: 8091:hostname=172.23.123.207&user=Administrator&password=password&services=kv%2Cn1ql
[2024-02-01 21:50:02,379] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:50:02,379] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.207:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:50:02,523] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:50:02,525] - [remote_util:308] INFO - SSH Connecting to 172.23.123.207 with username:root, attempt#1 of 5
[2024-02-01 21:50:02,667] - [remote_util:344] INFO - SSH Connected to 172.23.123.207 as root
[2024-02-01 21:50:02,822] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:50:03,149] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:50:03,153] - [remote_util:3352] INFO - running command.raw on 172.23.123.207: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:50:03,222] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:50:03,222] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:50:03,239] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:03,253] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.207:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:03,269] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:50:03,325] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.206:8091/pools/default with status False: unknown pool
[2024-02-01 21:50:03,326] - [task:161] INFO - server: ip:172.23.123.206 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:50:03,331] - [task:166] INFO - {'uptime': '29', 'memoryTotal': 16747913216, 'memoryFree': 15779860480, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.206:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.206', 'ip': '172.23.123.206', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:50:03,335] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.206:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:50:03,336] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:50:03,344] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:50:03,344] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.206:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:50:03,494] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:50:03,497] - [remote_util:308] INFO - SSH Connecting to 172.23.123.206 with username:root, attempt#1 of 5
[2024-02-01 21:50:03,631] - [remote_util:344] INFO - SSH Connected to 172.23.123.206 as root
[2024-02-01 21:50:03,763] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:50:04,072] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:50:04,075] - [remote_util:3352] INFO - running command.raw on 172.23.123.206: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:50:04,143] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:50:04,143] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:50:04,159] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:04,173] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.206:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:04,188] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:50:04,243] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.157:8091/pools/default with status False: unknown pool
[2024-02-01 21:50:04,244] - [task:161] INFO - server: ip:172.23.123.157 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:50:04,248] - [task:166] INFO - {'uptime': '19', 'memoryTotal': 16747917312, 'memoryFree': 15785701376, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.157:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.157', 'ip': '172.23.123.157', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:50:04,251] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.157:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:50:04,252] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:50:04,259] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:50:04,259] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.157:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:50:04,396] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:50:04,399] - [remote_util:308] INFO - SSH Connecting to 172.23.123.157 with username:root, attempt#1 of 5
[2024-02-01 21:50:04,552] - [remote_util:344] INFO - SSH Connected to 172.23.123.157 as root
[2024-02-01 21:50:04,687] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:50:05,002] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:50:05,004] - [remote_util:3352] INFO - running command.raw on 172.23.123.157: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:50:05,074] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:50:05,075] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:50:05,090] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:05,104] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.157:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:05,119] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:50:05,179] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
http://172.23.123.160:8091/pools/default with status False: unknown pool
[2024-02-01 21:50:05,181] - [task:161] INFO - server: ip:172.23.123.160 port:8091 ssh_username:root, nodes/self
[2024-02-01 21:50:05,186] - [task:166] INFO - {'uptime': '15', 'memoryTotal': 16747917312, 'memoryFree': 15730192384, 'mcdMemoryReserved': 12777, 'mcdMemoryAllocated': 12777, 'status': 'healthy', 'hostname': '172.23.123.160:8091', 'clusterCompatibility': 458758, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.6.0-2090-enterprise', 'os': 'x86_64-pc-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 7474, 'memcached': 11210, 'id': 'ns_1@172.23.123.160', 'ip': '172.23.123.160', 'internal_ip': '', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 15972, 'curr_items': 0}
[2024-02-01 21:50:05,190] - [on_prem_rest_client:1130] ERROR - GET http://172.23.123.160:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"unknown pool"' auth: Administrator:password
[2024-02-01 21:50:05,190] - [on_prem_rest_client:1307] INFO - pools/default params : memoryQuota=8560
[2024-02-01 21:50:05,199] - [on_prem_rest_client:1203] INFO - --> in init_cluster...Administrator,password,8091
[2024-02-01 21:50:05,199] - [on_prem_rest_client:1208] INFO - settings/web params on 172.23.123.160:8091:port=8091&username=Administrator&password=password
[2024-02-01 21:50:05,351] - [on_prem_rest_client:1210] INFO - --> status:True
[2024-02-01 21:50:05,354] - [remote_util:308] INFO - SSH Connecting to 172.23.123.160 with username:root, attempt#1 of 5
[2024-02-01 21:50:05,535] - [remote_util:344] INFO - SSH Connected to 172.23.123.160 as root
[2024-02-01 21:50:05,676] - [remote_util:3516] INFO - os_distro: Ubuntu, os_version: debian 11, is_linux_distro: True
[2024-02-01 21:50:05,983] - [remote_util:3685] INFO - extract_remote_info-->distribution_type: Ubuntu, distribution_version: debian 11
[2024-02-01 21:50:05,986] - [remote_util:3352] INFO - running command.raw on 172.23.123.160: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).'
[2024-02-01 21:50:06,055] - [remote_util:3401] INFO - command executed successfully with root
[2024-02-01 21:50:06,055] - [remote_util:5237] INFO - ['ok']
[2024-02-01 21:50:06,072] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:06,086] - [on_prem_rest_client:1949] INFO - diag/eval status on 172.23.123.160:8091: True content: [7,6] command: cluster_compat_mode:get_compat_version().
[2024-02-01 21:50:06,103] - [on_prem_rest_client:1344] INFO - settings/indexes params : storageMode=plasma
[2024-02-01 21:50:06,151] - [basetestcase:2455] INFO - **** add built-in 'cbadminbucket' user to node 172.23.123.207 ****
[2024-02-01 21:50:06,220] - [on_prem_rest_client:1130] ERROR - DELETE http://172.23.123.207:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password
[2024-02-01 21:50:06,222] - [internal_user:36] INFO - Exception while deleting user.
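The repeated 404 responses above all carry the same `Authorization: Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==` header. As a sanity check when reading logs like this, the token can be reproduced (or decoded) locally; the sketch below is illustrative only and the helper name is not part of testrunner:

```python
import base64

def basic_auth_header(user: str, password: str) -> str:
    """Build the HTTP Basic Authorization header value for user:password."""
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

# The header seen throughout the log decodes back to the test credentials.
print(basic_auth_header("Administrator", "password"))
# Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==
```

Decoding the token from a captured log entry is the reverse: `base64.b64decode("QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==")` yields `b"Administrator:password"`.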
Exception is -b'"User was not found."'
[2024-02-01 21:50:06,427] - [basetestcase:904] INFO - sleep for 5 secs. ...
[2024-02-01 21:50:11,432] - [basetestcase:2460] INFO - **** add 'admin' role to 'cbadminbucket' user ****
[2024-02-01 21:50:11,481] - [basetestcase:267] INFO - done initializing cluster
[2024-02-01 21:50:11,512] - [on_prem_rest_client:2883] INFO - Node version in cluster 7.6.0-2090-enterprise
[2024-02-01 21:50:12,158] - [task:829] INFO - adding node 172.23.123.206:8091 to cluster
[2024-02-01 21:50:12,190] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.206:18091 to this cluster @172.23.123.207:8091
[2024-02-01 21:50:22,232] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 21:50:22,233] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 21:50:36,409] - [task:829] INFO - adding node 172.23.123.157:8091 to cluster
[2024-02-01 21:50:36,443] - [on_prem_rest_client:1694] INFO - adding remote node @172.23.123.157:18091 to this cluster @172.23.123.207:8091
[2024-02-01 21:50:46,482] - [on_prem_rest_client:2032] INFO - rebalance progress took 10.04 seconds
[2024-02-01 21:50:46,482] - [on_prem_rest_client:2033] INFO - sleep for 10 seconds after rebalance...
[2024-02-01 21:51:00,470] - [on_prem_rest_client:2867] INFO - Node 172.23.123.157 not part of cluster inactiveAdded
[2024-02-01 21:51:00,471] - [on_prem_rest_client:2867] INFO - Node 172.23.123.206 not part of cluster inactiveAdded
[2024-02-01 21:51:00,502] - [on_prem_rest_client:1926] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.123.157,ns_1@172.23.123.206,ns_1@172.23.123.207', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'}
[2024-02-01 21:51:10,631] - [on_prem_rest_client:1931] INFO - rebalance operation started
[2024-02-01 21:51:20,658] - [on_prem_rest_client:2078] ERROR - {'status': 'none', 'errorMessage': 'Rebalance failed. See logs for detailed reason. You can try again.'} - rebalance failed
[2024-02-01 21:51:20,682] - [on_prem_rest_client:4325] INFO - Latest logs from UI on 172.23.123.207:
[2024-02-01 21:51:20,683] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706853070629, 'shortText': 'message', 'text': 'Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n [{\'ns_1@172.23.123.206\',{error,eexist}}]}},\n [{ns_rebalancer,rebalance_body,7,\n [{file,"src/ns_rebalancer.erl"},{line,470}]},\n {async,\'-async_init/4-fun-1-\',3,\n [{file,"src/async.erl"},{line,199}]}]}.\nRebalance Operation Id = ab3ac0513f00209afec652c290e48932', 'serverTime': '2024-02-01T21:51:10.629Z'}
[2024-02-01 21:51:20,683] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'critical', 'code': 0, 'module': 'ns_rebalancer', 'tstamp': 1706853070600, 'shortText': 'message', 'text': "Failed to cleanup indexes: [{'ns_1@172.23.123.206',{error,eexist}}]", 'serverTime': '2024-02-01T21:51:10.600Z'}
[2024-02-01 21:51:20,684] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'ns_orchestrator', 'tstamp': 1706853070584, 'shortText': 'message', 'text': "Starting rebalance, KeepNodes = ['ns_1@172.23.123.157','ns_1@172.23.123.206',\n 'ns_1@172.23.123.207'], EjectNodes = [], Failed over and being ejected nodes = []; no delta recovery nodes; Operation Id = ab3ac0513f00209afec652c290e48932", 'serverTime': '2024-02-01T21:51:10.584Z'}
[2024-02-01 21:51:20,684] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'auto_failover', 'tstamp': 1706853070450, 'shortText': 'message', 'text': 'Enabled auto-failover with timeout 120 and max count 1', 'serverTime': '2024-02-01T21:51:10.450Z'}
[2024-02-01 21:51:20,684] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'mb_master', 'tstamp': 1706853070447, 'shortText': 'message', 'text': "Haven't heard from a higher priority node or a master, so I'm taking over.", 'serverTime': '2024-02-01T21:51:10.447Z'}
[2024-02-01 21:51:20,685] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 0, 'module': 'memcached_config_mgr', 'tstamp': 1706853060664, 'shortText': 'message', 'text': 'Hot-reloaded memcached.json for config change of the following keys: [<<"scramsha_fallback_salt">>]', 'serverTime': '2024-02-01T21:51:00.664Z'}
[2024-02-01 21:51:20,685] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 3, 'module': 'ns_cluster', 'tstamp': 1706853060447, 'shortText': 'message', 'text': 'Node ns_1@172.23.123.157 joined cluster', 'serverTime': '2024-02-01T21:51:00.447Z'}
[2024-02-01 21:51:20,685] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'warning', 'code': 0, 'module': 'mb_master', 'tstamp': 1706853060433, 'shortText': 'message', 'text': "Current master is strongly lower priority and I'll try to takeover", 'serverTime': '2024-02-01T21:51:00.433Z'}
[2024-02-01 21:51:20,686] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.157', 'type': 'info', 'code': 1, 'module': 'menelaus_web_sup', 'tstamp': 1706853060407, 'shortText': 'web start ok', 'text': 'Couchbase Server has started on web port 8091 on node \'ns_1@172.23.123.157\'. Version: "7.6.0-2090-enterprise".', 'serverTime': '2024-02-01T21:51:00.407Z'}
[2024-02-01 21:51:20,686] - [on_prem_rest_client:4326] ERROR - {'node': 'ns_1@172.23.123.206', 'type': 'info', 'code': 4, 'module': 'ns_node_disco', 'tstamp': 1706853057346, 'shortText': 'node up', 'text': "Node 'ns_1@172.23.123.206' saw that node 'ns_1@172.23.123.157' came up. Tags: []", 'serverTime': '2024-02-01T21:50:57.346Z'}

*** TestRunner ***
workspace is /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test
fails is 21 21
Desc1: 7.6.0-2090 - plasma plasma - debian (0/21)
python3 scripts/rerun_jobs.py 7.6.0-2090 --executor_jenkins_job --run_params=bucket_size=5000,reset_services=True,nodes_init=3,services_init=kv:n1ql-kv:n1ql-index,GROUP=SIMPLE,test_timeout=240,get-cbcollect-info=True,exclude_keywords=messageListener|LeaderServer|Encounter|denied|corruption|stat.*no.*such*,get-cbcollect-info=True,sirius_url=http://172.23.120.103:4000
INFO:merge_reports:Merging of report files from logs/**/*.xml
INFO:merge_reports:-- logs/testrunner-24-Feb-01_19-54-58/report-24-Feb-01_19-54-58-gsi.collections_plasma.PlasmaCollectionsTests.xml --
INFO:merge_reports: Number of TestSuites=1
INFO:merge_reports: TestSuite#1) gsi.collections_plasma.PlasmaCollectionsTests, Number of Tests=21
INFO:merge_reports:Summary file is at /data/workspace/debian-p0-plasma-vset00-00-plasma-collections-sharding-simple-test/logs/testrunner-24-Feb-01_21-51-22/merged_summary/mergedreport-24-Feb-01_21-51-22-gsi.collections_plasma.PlasmaCollectionsTests.xml
summary so far suite gsi.collections_plasma.PlasmaCollectionsTests , pass 0 , fail 21
failures so far...
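The root cause of the failed run is buried in the Erlang term inside the ns_orchestrator entry above: `{old_indexes_cleanup_failed, [{'ns_1@172.23.123.206',{error,eexist}}]}`, i.e. pre-rebalance index cleanup on node .206 hit an "already exists" (`eexist`) error. When triaging many such logs, a small parser can pull the failing node and POSIX error out of the message; this is a hypothetical helper, not part of testrunner:

```python
import re

def parse_cleanup_failure(text: str):
    """Extract (node, posix_error) from an old_indexes_cleanup_failed term,
    e.g. from the ns_orchestrator 'Rebalance exited' UI log entry."""
    m = re.search(
        r"old_indexes_cleanup_failed,\s*\[\{'([^']+)',\{error,(\w+)\}\}\]",
        text,
    )
    return m.groups() if m else None

msg = ("Rebalance exited with reason {{badmatch,\n {old_indexes_cleanup_failed,\n "
       "[{'ns_1@172.23.123.206',{error,eexist}}]}}, ...}")
print(parse_cleanup_failure(msg))  # ('ns_1@172.23.123.206', 'eexist')
```

`eexist` here suggests stale index files left on disk from a previous incarnation of the node, which the cleanup step refused to overwrite.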
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_kill_indexer_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_system_failure_create_drop_indexes_simple
gsi.collections_plasma.PlasmaCollectionsTests.test_shard_json_corruption
No more failed tests.
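The 21 failures above collapse to three distinct test methods (19 of one, 1 each of two others). A tally like the following, a small sketch using only names from the log, makes that obvious at a glance:

```python
from collections import Counter

PREFIX = "gsi.collections_plasma.PlasmaCollectionsTests."
# The failure list as printed in the run summary above.
failures = (
    [PREFIX + "test_system_failure_create_drop_indexes_simple"] * 19
    + [PREFIX + "test_kill_indexer_create_drop_indexes_simple"]
    + [PREFIX + "test_shard_json_corruption"]
)
for name, count in Counter(failures).most_common():
    print(f"{count:3d}  {name.rsplit('.', 1)[-1]}")
```

This prints `19 test_system_failure_create_drop_indexes_simple` first, then the two single-occurrence tests.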
Stopping reruns
[description-setter] Description set: 7.6.0-2090 - plasma plasma - debian (0/21)
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'propfile'
[EnvInject] - Variables injected successfully.
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
Notifying upstream projects of job completion
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
Sending email to: girish.benakappa@couchbase.com
Triggering a new build of savejoblogs
Triggering a new build of test-executor-cleanup
Triggering a new build of test-executor-cleanup-aws
Triggering a new build of post-to-gerrit
Finished: UNSTABLE