Started by remote host 172.23.107.166
[EnvInject] - Loading node environment variables.
Building remotely on slv-sc2402-32g-12c-col (P0 jython_slave) in workspace /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
Running Prebuild steps
[centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1] $ /bin/sh -xe /tmp/jenkins933356837819328886.sh
++ echo fts-moving-topology-scorch_5.5_P1-May-13-11:22:25-7.0.0-5127
++ awk '{split($0,r,"-");print r[1],r[2]}'
+ desc='fts moving'
+ echo Desc: 7.0.0-5127 - fts moving - centos
Desc: 7.0.0-5127 - fts moving - centos
+ echo newState=available
+ newState=available
Success build for hudson.tasks.Shell@3bc01fe9
[description-setter] Description set: 7.0.0-5127 - fts moving - centos
Success build for hudson.plugins.descriptionsetter.DescriptionSetterBuilder@3d0c3035
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'propfile'
[EnvInject] - Variables injected successfully.
Success build for org.jenkinsci.plugins.envinject.EnvInjectBuilder@4994af6d
Cloning the remote Git repository
Using shallow clone
Cloning repository git://github.com/couchbase/testrunner
 > /usr/bin/git init /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1 # timeout=10
Fetching upstream changes from git://github.com/couchbase/testrunner
 > /usr/bin/git --version # timeout=10
 > /usr/bin/git fetch --tags --progress git://github.com/couchbase/testrunner +refs/heads/*:refs/remotes/origin/* --depth=1 # timeout=30
 > /usr/bin/git config remote.origin.url git://github.com/couchbase/testrunner # timeout=10
 > /usr/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /usr/bin/git config remote.origin.url git://github.com/couchbase/testrunner # timeout=10
Fetching upstream changes from git://github.com/couchbase/testrunner
 > /usr/bin/git fetch --tags --progress git://github.com/couchbase/testrunner +refs/heads/*:refs/remotes/origin/* --depth=1 # timeout=30
 > /usr/bin/git rev-parse origin/master^{commit} # timeout=10
Checking out Revision 4e9c82da4b9b0062cf5c76934bed074c6c0cfc71 (origin/master)
 > /usr/bin/git config core.sparsecheckout # timeout=10
 > /usr/bin/git checkout -f 4e9c82da4b9b0062cf5c76934bed074c6c0cfc71
 > /usr/bin/git rev-list 4e9c82da4b9b0062cf5c76934bed074c6c0cfc71 # timeout=10
 > /usr/bin/git tag -a -f -m Jenkins Build #344338 jenkins-test_suite_executor-344338 # timeout=10
No emails were triggered.
[centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1] $ /bin/sh -xe /tmp/jenkins92001818607405109.sh
+ echo Desc: fts-moving-topology-scorch_5.5_P1-May-13-11:22:25-7.0.0-5127
Desc: fts-moving-topology-scorch_5.5_P1-May-13-11:22:25-7.0.0-5127
[description-setter] Could not determine description.
[centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1] $ /bin/sh -xe /tmp/jenkins3669537098973157051.sh
[centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1] $ /bin/sh -xe /tmp/jenkins3223603141929560782.sh
+ py_executable=python2
+ echo 7.0.0-5127
+ grep '7\.0'
7.0.0-5127
+ py_executable=python3
+ [[ 7.0.0-5127 > 6.5 ]]
+ git checkout master
Switched to a new branch 'master'
Branch master set up to track remote branch master from origin.
+ git pull origin master
From git://github.com/couchbase/testrunner
 * branch master -> FETCH_HEAD
Already up-to-date.
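For reference, the interpreter-selection and branch-sync step traced above reduces to the following shell logic (a minimal sketch reconstructed from the -x trace, not the actual prebuild script):

    # Sketch only: pick python3 for 7.0.x builds, python2 otherwise.
    version="7.0.0-5127"
    py_executable=python2
    if echo "$version" | grep -q '7\.0'; then
        py_executable=python3
    fi
    # Note: [[ "$version" > 6.5 ]] in the trace is a lexicographic string
    # comparison, which happens to order "7.0.0-5127" after "6.5".
    if [[ "$version" > 6.5 ]]; then
        git checkout master
        git pull origin master
    fi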
+ rerun_job=true
+ touch rerun_props_file
+ '[' false == false ']'
+ '[' true == true ']'
+ python3 scripts/rerun_jobs.py 7.0.0-5127 --executor_jenkins_job --manual_run
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'rerun_props_file'
[EnvInject] - Variables injected successfully.
[centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1] $ /bin/bash /tmp/jenkins5094896128324085199.sh
7.0.0-5127
Already on 'master'
From git://github.com/couchbase/testrunner
 * branch master -> FETCH_HEAD
Already up-to-date.
Set ALLOW_HTP to False so test could run.
HEAD is now at 4e9c82d CBQE-6905: Add sleep to allow couchbase to come back up in resume test
the major release is 7
"172.23.100.16","172.23.100.17","172.23.100.18","172.23.100.19","172.23.100.20"
Searching for httplib2
Best match: httplib2 0.18.1
Adding httplib2 0.18.1 to easy-install.pth file
Using /usr/local/lib/python3.7/site-packages
Processing dependencies for httplib2
Finished processing dependencies for httplib2
centos
Loaded plugins: fastestmirror
Loading mirror speeds from cached hostfile
 * base: centos.s.uw.edu
 * extras: la.mirrors.clouvider.net
 * updates: repos.lax.quadranet.com
Package 2:docker-1.13.1-205.git7d71120.el7.centos.x86_64 already installed and latest version
Nothing to do
Using default tag: latest
Trying to pull repository docker.io/jamesdbloom/mockserver ...
toomanyrequests: You have reached your pull rate limit. You may increase the limit by authenticating and upgrading: https://www.docker.com/increase-rate-limit
[global]
username:root
password:couchbase
[membase]
rest_username:Administrator
rest_password:password
[servers]
1:_1
2:_2
3:_3
4:_4
5:_5
[elastic]
ip:elastic-fts
port:9200
es_username:Administrator
es_password:password
[_1]
ip:dynamic
port:8091
n1ql_port:18093
index_port:9102
services:kv
[_2]
ip:dynamic
port:8091
[_3]
ip:dynamic
port:8091
[_4]
ip:dynamic
port:8091
[_5]
ip:dynamic
port:8091
python3 scripts/populateIni.py -s "172.23.100.16","172.23.100.17","172.23.100.18","172.23.100.19","172.23.100.20" -d None -a None -i /tmp/testexec_reformat.69218.ini -p centos -o /tmp/testexec.69218.ini -k {}
INFO:root:SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5
INFO:root:SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5
INFO:root:SSH Connected to 172.23.100.16 as root
INFO:root:SSH Connected to 172.23.100.20 as root
INFO:root:SSH Connected to 172.23.100.17 as root
INFO:root:SSH Connected to 172.23.100.18 as root
INFO:root:SSH Connected to 172.23.100.19 as root
INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True
INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True
INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True
INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True
INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True
INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
INFO:root:running command.raw on 172.23.100.16: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi'
INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
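The MemTotal probe that populateIni.py runs on every server (the command.raw entries above and continuing below) can be reproduced standalone. A sketch follows; it uses a "+" quantifier so the whole number prints on one line, whereas the logged command uses a bare [0-9] and the Python side evidently joins the digits back together:

    # Sketch of the per-node memory probe (run locally or over ssh):
    sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then
               sysctl hw.memsize | grep -Eo "[0-9]+";            # macOS: bytes
           else
               grep MemTotal /proc/meminfo | grep -Eo "[0-9]+";  # Linux: kB
           fi'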
INFO:root:running command.raw on 172.23.100.20: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi' INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.18: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi' INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.19: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi' INFO:root:command executed successfully with root INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.17: sh -c 'if [[ "$OSTYPE" == "darwin"* ]]; then sysctl hw.memsize|grep -Eo [0-9]; else grep MemTotal /proc/meminfo|grep -Eo [0-9]; fi' INFO:root:command executed successfully with root INFO:root:command executed successfully with root INFO:root:command executed successfully with root INFO:root:command executed successfully with root in main the ini file is /tmp/testexec_reformat.69218.ini the given server info is "172.23.100.16","172.23.100.17","172.23.100.18","172.23.100.19","172.23.100.20" Collecting memory info from 172.23.100.16 Collecting memory info from 172.23.100.20 Collecting memory info from 172.23.100.18 Collecting memory info from 172.23.100.19 Collecting memory info from 172.23.100.17 the servers memory info is [('172.23.100.18', 4103208), ('172.23.100.19', 4103208), ('172.23.100.17', 4103208), ('172.23.100.16', 4103212), ('172.23.100.20', 4103212)] [global] username:root password:couchbase [membase] rest_username:Administrator rest_password:password [servers] 1:_1 2:_2 3:_3 4:_4 5:_5 [elastic] ip:elastic-fts port:9200 es_username:Administrator es_password:password [_1] ip:172.23.100.18 port:8091 n1ql_port:18093 index_port:9102 services:kv [_2] ip:172.23.100.19 port:8091 [_3] ip:172.23.100.17 port:8091 [_4] ip:172.23.100.16 port:8091 [_5] ip:172.23.100.20 port:8091 extra install is ,fts_query_limit=10000000 Local time: Thu 2021-05-13 11:22:41 PDT Universal time: Thu 2021-05-13 18:22:41 UTC RTC time: Thu 2021-05-13 18:22:41 Time zone: America/Los_Angeles (PDT, -0700) NTP enabled: no NTP synchronized: yes RTC in local TZ: no DST active: yes Last DST change: DST began at Sun 2021-03-14 01:59:59 PST Sun 2021-03-14 03:00:00 PDT Next DST change: DST ends (the clock jumps one hour backwards) at Sun 2021-11-07 01:59:59 PDT Sun 2021-11-07 01:00:00 PST python3 scripts/ssh.py -i /tmp/testexec_root.69218.ini iptables -F INFO:root:SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 INFO:root:SSH Connected to 172.23.100.17 as root INFO:root:SSH Connected to 172.23.100.20 as root INFO:root:SSH Connected to 172.23.100.16 as root INFO:root:SSH Connected to 172.23.100.19 as root INFO:root:SSH Connected to 172.23.100.18 as root INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: 
CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.16: iptables -F INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.19: iptables -F INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.18: iptables -F INFO:root:command executed successfully with root INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.17: iptables -F INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.20: iptables -F INFO:root:command executed successfully with root INFO:root:command executed successfully with root INFO:root:command executed successfully with root INFO:root:command executed successfully with root 172.23.100.19 172.23.100.16 172.23.100.18 172.23.100.20 172.23.100.17 python3 scripts/new_install.py -i /tmp/testexec.69218.ini -p timeout=1800,skip_local_download=False,get-cbcollect-info=True,version=7.0.0-5127,product=cb,debug_logs=True,ntp=True,url=,fts_query_limit=10000000 2021-05-13 11:22:43,344 - root - WARNING - URL: is not valid, will use version to locate build 2021-05-13 11:22:43,345 - root - INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:22:43,448 - root - INFO - SSH Connected to 172.23.100.18 as root 2021-05-13 11:22:43,717 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:44,009 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:44,010 - root - INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:22:44,109 - root - INFO - SSH Connected to 172.23.100.19 as root 2021-05-13 11:22:44,369 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:44,660 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:44,661 - root - INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:22:44,757 - root - INFO - SSH Connected to 172.23.100.17 as root 2021-05-13 11:22:45,010 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:45,294 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:45,296 - root - INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:22:45,393 - root - INFO - SSH Connected to 172.23.100.16 as root 2021-05-13 11:22:45,648 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:45,940 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:45,941 - root - INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:22:46,043 - root - INFO - SSH Connected to 172.23.100.20 as root 2021-05-13 11:22:46,294 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:46,586 - root - INFO - 
extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:46,588 - root - INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:22:46,686 - root - INFO - SSH Connected to 172.23.100.18 as root 2021-05-13 11:22:46,933 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:47,224 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:47,226 - root - INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:22:47,324 - root - INFO - SSH Connected to 172.23.100.19 as root 2021-05-13 11:22:47,582 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:47,873 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:47,874 - root - INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:22:47,973 - root - INFO - SSH Connected to 172.23.100.17 as root 2021-05-13 11:22:48,227 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:48,517 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:48,518 - root - INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:22:48,615 - root - INFO - SSH Connected to 172.23.100.16 as root 2021-05-13 11:22:48,863 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:49,151 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:49,153 - root - INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:22:49,248 - root - INFO - SSH Connected to 172.23.100.20 as root 2021-05-13 11:22:49,489 - root - INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:22:49,768 - root - INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:22:49,768 - root - INFO - Check if ntp is installed 2021-05-13 11:22:49,769 - root - INFO - This OS version centos 7 2021-05-13 11:22:49,769 - root - INFO - running command.raw on 172.23.100.18: systemctl status ntpd 2021-05-13 11:22:49,769 - root - INFO - Check if ntp is installed 2021-05-13 11:22:49,770 - root - INFO - Check if ntp is installed 2021-05-13 11:22:49,770 - root - INFO - This OS version centos 7 2021-05-13 11:22:49,770 - root - INFO - This OS version centos 7 2021-05-13 11:22:49,771 - root - INFO - running command.raw on 172.23.100.17: systemctl status ntpd 2021-05-13 11:22:49,771 - root - INFO - running command.raw on 172.23.100.19: systemctl status ntpd 2021-05-13 11:22:49,771 - root - INFO - Check if ntp is installed 2021-05-13 11:22:49,772 - root - INFO - Check if ntp is installed 2021-05-13 11:22:49,774 - root - INFO - This OS version centos 7 2021-05-13 11:22:49,774 - root - INFO - This OS version centos 7 2021-05-13 11:22:49,774 - root - INFO - running command.raw on 172.23.100.20: systemctl status ntpd 2021-05-13 11:22:49,774 - root - INFO - running command.raw on 172.23.100.16: systemctl status ntpd 2021-05-13 11:22:49,794 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,795 - root - INFO - running command.raw on 172.23.100.18: ntpstat 2021-05-13 11:22:49,797 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,797 - 
root - INFO - running command.raw on 172.23.100.17: ntpstat 2021-05-13 11:22:49,798 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,799 - root - INFO - running command.raw on 172.23.100.19: ntpstat 2021-05-13 11:22:49,800 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,800 - root - INFO - running command.raw on 172.23.100.16: ntpstat 2021-05-13 11:22:49,828 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,828 - root - INFO - running command.raw on 172.23.100.20: ntpstat 2021-05-13 11:22:49,894 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,895 - root - INFO - running command.raw on 172.23.100.18: timedatectl status 2021-05-13 11:22:49,897 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,897 - root - INFO - running command.raw on 172.23.100.16: timedatectl status 2021-05-13 11:22:49,898 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,898 - root - INFO - running command.raw on 172.23.100.19: timedatectl status 2021-05-13 11:22:49,900 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,900 - root - INFO - running command.raw on 172.23.100.17: timedatectl status 2021-05-13 11:22:49,925 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,925 - root - INFO - running command.raw on 172.23.100.20: timedatectl status 2021-05-13 11:22:49,940 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,940 - root - INFO - running command.raw on 172.23.100.16: date 2021-05-13 11:22:49,943 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,943 - root - INFO - running command.raw on 172.23.100.18: date 2021-05-13 11:22:49,988 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,988 - root - INFO - running command.raw on 172.23.100.17: date 2021-05-13 11:22:49,991 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,991 - root - INFO - running command.raw on 172.23.100.19: date 2021-05-13 11:22:49,994 - root - INFO - command executed successfully with root 2021-05-13 11:22:49,997 - root - INFO - command executed successfully with root 2021-05-13 11:22:50,009 - root - INFO - command executed successfully with root 2021-05-13 11:22:50,010 - root - INFO - running command.raw on 172.23.100.20: date 2021-05-13 11:22:50,041 - root - INFO - command executed successfully with root 2021-05-13 11:22:50,045 - root - INFO - command executed successfully with root 2021-05-13 11:22:50,062 - root - INFO - command executed successfully with root 2021-05-13 11:22:50,102 - root - INFO - ['Thu May 13 11:22:50 PDT 2021'] IP: 172.23.100.16 2021-05-13 11:22:50,104 - root - INFO - ['Thu May 13 11:22:50 PDT 2021'] IP: 172.23.100.18 2021-05-13 11:22:50,147 - root - INFO - ['Thu May 13 11:22:50 PDT 2021'] IP: 172.23.100.17 2021-05-13 11:22:50,155 - root - INFO - ['Thu May 13 11:22:50 PDT 2021'] IP: 172.23.100.19 2021-05-13 11:22:50,169 - root - INFO - ['Thu May 13 11:22:50 PDT 2021'] IP: 172.23.100.20 2021-05-13 11:22:50,169 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:22:50,172 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm is live 2021-05-13 11:22:50,173 - root - INFO - 
Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:22:50,174 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm is live 2021-05-13 11:22:50,174 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:22:50,175 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm is live 2021-05-13 11:22:50,175 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:22:50,176 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm is live 2021-05-13 11:22:50,177 - root - INFO - Trying to check is this url alive: http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:22:50,178 - root - INFO - This url http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm is live 2021-05-13 11:22:50,178 - root - INFO - Downloading build binary to /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm.. 2021-05-13 11:22:52,045 - root - INFO - Copying /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm to 172.23.100.18 2021-05-13 11:22:52,047 - root - INFO - Copying /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm to 172.23.100.19 2021-05-13 11:22:52,047 - root - INFO - Copying /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm to 172.23.100.17 2021-05-13 11:22:52,047 - root - INFO - Copying /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm to 172.23.100.16 2021-05-13 11:22:52,048 - root - INFO - Copying /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm to 172.23.100.20 2021-05-13 11:24:30,256 - root - INFO - Done copying build to 172.23.100.16. 2021-05-13 11:24:30,626 - root - INFO - Done copying build to 172.23.100.18. 2021-05-13 11:24:30,663 - root - INFO - Done copying build to 172.23.100.17. 2021-05-13 11:24:30,917 - root - INFO - Done copying build to 172.23.100.20. 2021-05-13 11:24:30,920 - root - INFO - Done copying build to 172.23.100.19. 
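Condensed into a standalone sketch, the staging sequence logged here (liveness probe, single download on the slave, copy to every node, then size verification against Content-Length as logged below) looks roughly like this; the loop is an illustration, not the actual new_install.py code:

    url="http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm"
    rpm="/tmp/$(basename "$url")"
    curl -sfI "$url" > /dev/null && echo "url is live"           # liveness check
    wget -q -O "$rpm" "$url"                                     # download once on the slave
    expected=$(curl -sI "$url" | awk 'tolower($1)=="content-length:" {print $2}' | tr -d '\r')
    for node in 172.23.100.16 172.23.100.17 172.23.100.18 172.23.100.19 172.23.100.20; do
        scp -q "$rpm" "root@$node:/tmp/"                         # stage the build on each server
        actual=$(ssh "root@$node" "wc -c < $rpm")
        [ "$expected" = "$actual" ] && echo "$node: copy verified ($actual bytes)"
    done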
2021-05-13 11:24:30,921 - root - INFO - running command.raw on 172.23.100.18: ls -lh /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:30,941 - root - INFO - command executed successfully with root 2021-05-13 11:24:30,941 - root - INFO - running command.raw on 172.23.100.18: curl -I http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2>&1 | grep Content-Length 2021-05-13 11:24:30,999 - root - INFO - command executed successfully with root 2021-05-13 11:24:30,999 - root - INFO - running command.raw on 172.23.100.18: cd /tmp/ && wc -c couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,051 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,051 - root - INFO - running command.raw on 172.23.100.19: ls -lh /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,070 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,070 - root - INFO - running command.raw on 172.23.100.19: curl -I http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2>&1 | grep Content-Length 2021-05-13 11:24:31,132 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,132 - root - INFO - running command.raw on 172.23.100.19: cd /tmp/ && wc -c couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,185 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,186 - root - INFO - running command.raw on 172.23.100.17: ls -lh /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,202 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,202 - root - INFO - running command.raw on 172.23.100.17: curl -I http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2>&1 | grep Content-Length 2021-05-13 11:24:31,261 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,261 - root - INFO - running command.raw on 172.23.100.17: cd /tmp/ && wc -c couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,313 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,313 - root - INFO - running command.raw on 172.23.100.16: ls -lh /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,330 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,330 - root - INFO - running command.raw on 172.23.100.16: curl -I http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2>&1 | grep Content-Length 2021-05-13 11:24:31,387 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,387 - root - INFO - running command.raw on 172.23.100.16: cd /tmp/ && wc -c couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,438 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,438 - root - INFO - running command.raw on 172.23.100.20: ls -lh /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,457 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,457 - root - INFO - running command.raw on 172.23.100.20: curl -I 
http://172.23.126.166/builds/latestbuilds/couchbase-server/cheshire-cat/5127/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2>&1 | grep Content-Length 2021-05-13 11:24:31,516 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,516 - root - INFO - running command.raw on 172.23.100.20: cd /tmp/ && wc -c couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm 2021-05-13 11:24:31,569 - root - INFO - command executed successfully with root 2021-05-13 11:24:31,569 - root - INFO - running command.raw on 172.23.100.18: umount -a -t nfs,nfs4 -f -l;systemctl stop couchbase-server; rpm -e couchbase-server; rm -rf /opt/couchbase; rm -rf /home/nonroot/opt/couchbase/ > /dev/null && echo 1 || echo 0 2021-05-13 11:24:31,569 - root - INFO - running command.raw on 172.23.100.19: umount -a -t nfs,nfs4 -f -l;systemctl stop couchbase-server; rpm -e couchbase-server; rm -rf /opt/couchbase; rm -rf /home/nonroot/opt/couchbase/ > /dev/null && echo 1 || echo 0 2021-05-13 11:24:31,570 - root - INFO - running command.raw on 172.23.100.17: umount -a -t nfs,nfs4 -f -l;systemctl stop couchbase-server; rpm -e couchbase-server; rm -rf /opt/couchbase; rm -rf /home/nonroot/opt/couchbase/ > /dev/null && echo 1 || echo 0 2021-05-13 11:24:31,570 - root - INFO - running command.raw on 172.23.100.16: umount -a -t nfs,nfs4 -f -l;systemctl stop couchbase-server; rpm -e couchbase-server; rm -rf /opt/couchbase; rm -rf /home/nonroot/opt/couchbase/ > /dev/null && echo 1 || echo 0 2021-05-13 11:24:31,571 - root - INFO - running command.raw on 172.23.100.20: umount -a -t nfs,nfs4 -f -l;systemctl stop couchbase-server; rpm -e couchbase-server; rm -rf /opt/couchbase; rm -rf /home/nonroot/opt/couchbase/ > /dev/null && echo 1 || echo 0 2021-05-13 11:24:34,665 - root - INFO - command executed with root but got an error ['warning: file /opt/couchbase/var/lib/couchbase/ip_start: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/ip saved as /opt/couchbase/var/lib/couchbase/ip.rpmsave', 'warning: file /opt/couchbase/var/lib/couchbase/config/dist_cfg: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/config/config.dat saved as /opt/couchbase/var/ ... 2021-05-13 11:24:34,897 - root - INFO - command executed with root but got an error ['warning: file /opt/couchbase/var/lib/couchbase/ip_start: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/ip saved as /opt/couchbase/var/lib/couchbase/ip.rpmsave', 'warning: file /opt/couchbase/var/lib/couchbase/config/dist_cfg: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/config/config.dat saved as /opt/couchbase/var/ ... 2021-05-13 11:24:34,909 - root - INFO - command executed with root but got an error ['warning: file /opt/couchbase/var/lib/couchbase/ip_start: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/ip saved as /opt/couchbase/var/lib/couchbase/ip.rpmsave', 'warning: file /opt/couchbase/var/lib/couchbase/config/dist_cfg: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/config/config.dat saved as /opt/couchbase/var/ ... 
2021-05-13 11:24:34,957 - root - INFO - command executed with root but got an error ['warning: file /opt/couchbase/var/lib/couchbase/ip_start: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/ip saved as /opt/couchbase/var/lib/couchbase/ip.rpmsave', 'warning: file /opt/couchbase/var/lib/couchbase/config/dist_cfg: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/config/config.dat saved as /opt/couchbase/var/ ... 2021-05-13 11:24:35,298 - root - INFO - command executed with root but got an error ['warning: file /opt/couchbase/var/lib/couchbase/ip_start: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/ip saved as /opt/couchbase/var/lib/couchbase/ip.rpmsave', 'warning: file /opt/couchbase/var/lib/couchbase/config/dist_cfg: remove failed: No such file or directory', 'warning: /opt/couchbase/var/lib/couchbase/config/config.dat saved as /opt/couchbase/var/ ... 2021-05-13 11:24:35,693 - root - INFO - Done with uninstall on 172.23.100.16. 2021-05-13 11:24:35,693 - root - INFO - running command.raw on 172.23.100.16: yes | yum remove `rpm -qa | grep couchbase` 2021-05-13 11:24:35,880 - root - INFO - Done with uninstall on 172.23.100.19. 2021-05-13 11:24:35,880 - root - INFO - running command.raw on 172.23.100.19: yes | yum remove `rpm -qa | grep couchbase` 2021-05-13 11:24:35,933 - root - INFO - Done with uninstall on 172.23.100.20. 2021-05-13 11:24:35,933 - root - INFO - running command.raw on 172.23.100.20: yes | yum remove `rpm -qa | grep couchbase` 2021-05-13 11:24:35,980 - root - INFO - Done with uninstall on 172.23.100.18. 2021-05-13 11:24:35,980 - root - INFO - running command.raw on 172.23.100.18: yes | yum remove `rpm -qa | grep couchbase` 2021-05-13 11:24:36,273 - root - INFO - Done with uninstall on 172.23.100.17. 2021-05-13 11:24:36,273 - root - INFO - running command.raw on 172.23.100.17: yes | yum remove `rpm -qa | grep couchbase` 2021-05-13 11:24:37,030 - root - INFO - command executed with root but got an error ['Error: Need to pass a list of pkgs to remove', ' Mini usage:', '', 'erase PACKAGE...', '', 'Remove a package or packages from your system', '', 'aliases: remove, autoremove, erase-n, erase-na, erase-nevra, autoremove-n, autoremove-na, autoremove-nevra, remove-n, remove-na, remove-nevra'] ... 2021-05-13 11:24:37,030 - root - INFO - Waiting 20s to remove previous yum repo on 172.23.100.16.. 2021-05-13 11:24:37,187 - root - INFO - command executed with root but got an error ['Error: Need to pass a list of pkgs to remove', ' Mini usage:', '', 'erase PACKAGE...', '', 'Remove a package or packages from your system', '', 'aliases: remove, autoremove, erase-n, erase-na, erase-nevra, autoremove-n, autoremove-na, autoremove-nevra, remove-n, remove-na, remove-nevra'] ... 2021-05-13 11:24:37,187 - root - INFO - Waiting 20s to remove previous yum repo on 172.23.100.19.. 2021-05-13 11:24:37,263 - root - INFO - command executed with root but got an error ['Error: Need to pass a list of pkgs to remove', ' Mini usage:', '', 'erase PACKAGE...', '', 'Remove a package or packages from your system', '', 'aliases: remove, autoremove, erase-n, erase-na, erase-nevra, autoremove-n, autoremove-na, autoremove-nevra, remove-n, remove-na, remove-nevra'] ... 2021-05-13 11:24:37,263 - root - INFO - Waiting 20s to remove previous yum repo on 172.23.100.20.. 
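The "Need to pass a list of pkgs to remove" errors here are expected noise: the preceding rpm -e already removed couchbase-server, so the backtick expansion inside yum remove is empty. A guarded variant (a sketch, not what new_install.py actually runs) keeps the same intent without the error:

    # Only invoke yum remove if any couchbase packages are still installed.
    pkgs=$(rpm -qa | grep couchbase || true)
    if [ -n "$pkgs" ]; then
        yum remove -y $pkgs
    else
        echo "no couchbase packages left to remove"
    fi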
2021-05-13 11:24:37,310 - root - INFO - command executed with root but got an error ['Error: Need to pass a list of pkgs to remove', ' Mini usage:', '', 'erase PACKAGE...', '', 'Remove a package or packages from your system', '', 'aliases: remove, autoremove, erase-n, erase-na, erase-nevra, autoremove-n, autoremove-na, autoremove-nevra, remove-n, remove-na, remove-nevra'] ... 2021-05-13 11:24:37,311 - root - INFO - Waiting 20s to remove previous yum repo on 172.23.100.18.. 2021-05-13 11:24:37,831 - root - INFO - command executed with root but got an error ['Error: Need to pass a list of pkgs to remove', ' Mini usage:', '', 'erase PACKAGE...', '', 'Remove a package or packages from your system', '', 'aliases: remove, autoremove, erase-n, erase-na, erase-nevra, autoremove-n, autoremove-na, autoremove-nevra, remove-n, remove-na, remove-nevra'] ... 2021-05-13 11:24:37,831 - root - INFO - Waiting 20s to remove previous yum repo on 172.23.100.17.. 2021-05-13 11:24:57,050 - root - INFO - running command.raw on 172.23.100.16: /sbin/sysctl vm.swappiness=0; echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag; 2021-05-13 11:24:57,067 - root - INFO - command executed successfully with root 2021-05-13 11:24:57,067 - root - INFO - running command.raw on 172.23.100.16: yes | yum localinstall -y /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm > /dev/null && echo 1 || echo 0 2021-05-13 11:24:57,190 - root - INFO - running command.raw on 172.23.100.19: /sbin/sysctl vm.swappiness=0; echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag; 2021-05-13 11:24:57,208 - root - INFO - command executed successfully with root 2021-05-13 11:24:57,209 - root - INFO - running command.raw on 172.23.100.19: yes | yum localinstall -y /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm > /dev/null && echo 1 || echo 0 2021-05-13 11:24:57,283 - root - INFO - running command.raw on 172.23.100.20: /sbin/sysctl vm.swappiness=0; echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag; 2021-05-13 11:24:57,299 - root - INFO - command executed successfully with root 2021-05-13 11:24:57,299 - root - INFO - running command.raw on 172.23.100.20: yes | yum localinstall -y /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm > /dev/null && echo 1 || echo 0 2021-05-13 11:24:57,320 - root - INFO - running command.raw on 172.23.100.18: /sbin/sysctl vm.swappiness=0; echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag; 2021-05-13 11:24:57,337 - root - INFO - command executed successfully with root 2021-05-13 11:24:57,337 - root - INFO - running command.raw on 172.23.100.18: yes | yum localinstall -y /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm > /dev/null && echo 1 || echo 0 2021-05-13 11:24:57,840 - root - INFO - running command.raw on 172.23.100.17: /sbin/sysctl vm.swappiness=0; echo never > /sys/kernel/mm/transparent_hugepage/enabled; echo never > /sys/kernel/mm/transparent_hugepage/defrag; 2021-05-13 11:24:57,857 - root - INFO - command executed successfully with root 2021-05-13 11:24:57,857 - root - INFO - running command.raw on 172.23.100.17: yes | yum localinstall -y /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm > /dev/null && echo 1 || echo 0 2021-05-13 11:25:56,754 - root - INFO - command executed with root but got an error 
['Warning: RPMDB altered outside of yum.'] ... 2021-05-13 11:25:56,755 - root - INFO - running command.raw on 172.23.100.16: systemctl -q is-active couchbase-server && echo 1 || echo 0 2021-05-13 11:25:56,816 - root - INFO - command executed successfully with root 2021-05-13 11:25:56,816 - root - INFO - Done with install on 172.23.100.16. 2021-05-13 11:25:56,816 - root - INFO - Waiting 60s for 172.23.100.16 to be initialized.. 2021-05-13 11:25:57,762 - root - INFO - command executed with root but got an error ['Warning: RPMDB altered outside of yum.'] ... 2021-05-13 11:25:57,762 - root - INFO - running command.raw on 172.23.100.19: systemctl -q is-active couchbase-server && echo 1 || echo 0 2021-05-13 11:25:57,787 - root - INFO - command executed successfully with root 2021-05-13 11:25:57,787 - root - INFO - Done with install on 172.23.100.19. 2021-05-13 11:25:57,787 - root - INFO - Waiting 60s for 172.23.100.19 to be initialized.. 2021-05-13 11:25:58,396 - root - INFO - command executed with root but got an error ['Warning: RPMDB altered outside of yum.'] ... 2021-05-13 11:25:58,397 - root - INFO - running command.raw on 172.23.100.20: systemctl -q is-active couchbase-server && echo 1 || echo 0 2021-05-13 11:25:58,459 - root - INFO - command executed successfully with root 2021-05-13 11:25:58,459 - root - INFO - Done with install on 172.23.100.20. 2021-05-13 11:25:58,459 - root - INFO - Waiting 60s for 172.23.100.20 to be initialized.. 2021-05-13 11:25:59,558 - root - INFO - command executed with root but got an error ['Warning: RPMDB altered outside of yum.'] ... 2021-05-13 11:25:59,558 - root - INFO - running command.raw on 172.23.100.17: systemctl -q is-active couchbase-server && echo 1 || echo 0 2021-05-13 11:25:59,618 - root - INFO - command executed successfully with root 2021-05-13 11:25:59,618 - root - INFO - Done with install on 172.23.100.17. 2021-05-13 11:25:59,618 - root - INFO - Waiting 60s for 172.23.100.17 to be initialized.. 2021-05-13 11:26:00,053 - root - INFO - command executed with root but got an error ['Warning: RPMDB altered outside of yum.'] ... 2021-05-13 11:26:00,054 - root - INFO - running command.raw on 172.23.100.18: systemctl -q is-active couchbase-server && echo 1 || echo 0 2021-05-13 11:26:00,135 - root - INFO - command executed successfully with root 2021-05-13 11:26:00,135 - root - INFO - Done with install on 172.23.100.18. 2021-05-13 11:26:00,135 - root - INFO - Waiting 60s for 172.23.100.18 to be initialized.. 
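Per node, the install entries above reduce to the following commands (a sketch assuming a root shell on the target server; the orchestration and the 60-second wait are handled by new_install.py over SSH):

    sysctl vm.swappiness=0                                       # keep the OS from swapping Couchbase
    echo never > /sys/kernel/mm/transparent_hugepage/enabled     # disable THP
    echo never > /sys/kernel/mm/transparent_hugepage/defrag
    yum localinstall -y /tmp/couchbase-server-enterprise-7.0.0-5127-centos7.x86_64.rpm
    systemctl -q is-active couchbase-server && echo "couchbase-server is active"
    sleep 60                                                     # give the node time to initialize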
2021-05-13 11:26:56,876 - root - INFO - running command.raw on 172.23.100.16: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.16 -u Administrator -p password > /dev/null && echo 1 || echo 0; 2021-05-13 11:26:57,145 - root - INFO - command executed successfully with root 2021-05-13 11:26:57,167 - root - INFO - running command.raw on 172.23.100.16: sed -i 's/export PATH/export PATH\nexport CBFT_ENV_OPTIONS=bleveMaxResultWindow=10000000/' /opt/couchbase/bin/couchbase-server; grep bleveMaxResultWindow=10000000 /opt/couchbase/bin/couchbase-server > /dev/null && echo 1 || echo 0 2021-05-13 11:26:57,185 - root - INFO - command executed successfully with root 2021-05-13 11:26:57,241 - root - INFO - 172.23.100.16 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:26:57,291 - root - INFO - Running systemd stop command on this server 2021-05-13 11:26:57,292 - root - INFO - running command.raw on 172.23.100.16: systemctl stop couchbase-server.service 2021-05-13 11:26:57,809 - root - INFO - running command.raw on 172.23.100.19: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.19 -u Administrator -p password > /dev/null && echo 1 || echo 0; 2021-05-13 11:26:58,084 - root - INFO - command executed successfully with root 2021-05-13 11:26:58,147 - root - INFO - running command.raw on 172.23.100.19: sed -i 's/export PATH/export PATH\nexport CBFT_ENV_OPTIONS=bleveMaxResultWindow=10000000/' /opt/couchbase/bin/couchbase-server; grep bleveMaxResultWindow=10000000 /opt/couchbase/bin/couchbase-server > /dev/null && echo 1 || echo 0 2021-05-13 11:26:58,164 - root - INFO - command executed successfully with root 2021-05-13 11:26:58,222 - root - INFO - 172.23.100.19 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:26:58,275 - root - INFO - Running systemd stop command on this server 2021-05-13 11:26:58,275 - root - INFO - running command.raw on 172.23.100.19: systemctl stop couchbase-server.service 2021-05-13 11:26:58,496 - root - INFO - running command.raw on 172.23.100.20: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.20 -u Administrator -p password > /dev/null && echo 1 || echo 0; 2021-05-13 11:26:58,777 - root - INFO - command executed successfully with root 2021-05-13 11:26:58,840 - root - INFO - running command.raw on 172.23.100.20: sed -i 's/export PATH/export PATH\nexport CBFT_ENV_OPTIONS=bleveMaxResultWindow=10000000/' /opt/couchbase/bin/couchbase-server; grep bleveMaxResultWindow=10000000 /opt/couchbase/bin/couchbase-server > /dev/null && echo 1 || echo 0 2021-05-13 11:26:58,851 - root - INFO - command executed successfully with root 2021-05-13 11:26:58,857 - root - INFO - command executed successfully with root 2021-05-13 11:26:58,870 - root - INFO - 172.23.100.16 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:26:58,914 - root - INFO - 172.23.100.20 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:26:58,922 - root - INFO - Running systemd start command on this server 2021-05-13 11:26:58,922 - root - INFO - running command.raw on 172.23.100.16: systemctl start couchbase-server.service 2021-05-13 11:26:58,938 - root - INFO - command executed successfully with root 2021-05-13 11:26:58,966 - root - INFO - Running systemd stop command on this server 2021-05-13 11:26:58,966 - root - INFO - running command.raw on 172.23.100.20: systemctl stop couchbase-server.service 2021-05-13 11:26:59,678 - root - INFO - running command.raw on 172.23.100.17: 
/opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.17 -u Administrator -p password > /dev/null && echo 1 || echo 0; 2021-05-13 11:26:59,853 - root - INFO - command executed successfully with root 2021-05-13 11:26:59,912 - root - INFO - 172.23.100.19 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:26:59,951 - root - INFO - command executed successfully with root 2021-05-13 11:26:59,965 - root - INFO - Running systemd start command on this server 2021-05-13 11:26:59,965 - root - INFO - running command.raw on 172.23.100.19: systemctl start couchbase-server.service 2021-05-13 11:26:59,982 - root - INFO - command executed successfully with root 2021-05-13 11:27:00,014 - root - INFO - running command.raw on 172.23.100.17: sed -i 's/export PATH/export PATH\nexport CBFT_ENV_OPTIONS=bleveMaxResultWindow=10000000/' /opt/couchbase/bin/couchbase-server; grep bleveMaxResultWindow=10000000 /opt/couchbase/bin/couchbase-server > /dev/null && echo 1 || echo 0 2021-05-13 11:27:00,032 - root - INFO - command executed successfully with root 2021-05-13 11:27:00,090 - root - INFO - 172.23.100.17 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:27:00,143 - root - INFO - Running systemd stop command on this server 2021-05-13 11:27:00,143 - root - INFO - running command.raw on 172.23.100.17: systemctl stop couchbase-server.service 2021-05-13 11:27:00,196 - root - INFO - running command.raw on 172.23.100.18: /opt/couchbase/bin/couchbase-cli node-init -c 172.23.100.18 -u Administrator -p password > /dev/null && echo 1 || echo 0; 2021-05-13 11:27:00,477 - root - INFO - command executed successfully with root 2021-05-13 11:27:00,541 - root - INFO - running command.raw on 172.23.100.18: sed -i 's/export PATH/export PATH\nexport CBFT_ENV_OPTIONS=bleveMaxResultWindow=10000000/' /opt/couchbase/bin/couchbase-server; grep bleveMaxResultWindow=10000000 /opt/couchbase/bin/couchbase-server > /dev/null && echo 1 || echo 0 2021-05-13 11:27:00,559 - root - INFO - command executed successfully with root 2021-05-13 11:27:00,618 - root - INFO - 172.23.100.18 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:27:00,648 - root - INFO - command executed successfully with root 2021-05-13 11:27:00,673 - root - INFO - Running systemd stop command on this server 2021-05-13 11:27:00,673 - root - INFO - running command.raw on 172.23.100.18: systemctl stop couchbase-server.service 2021-05-13 11:27:00,707 - root - INFO - 172.23.100.20 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:27:00,759 - root - INFO - Running systemd start command on this server 2021-05-13 11:27:00,759 - root - INFO - running command.raw on 172.23.100.20: systemctl start couchbase-server.service 2021-05-13 11:27:00,777 - root - INFO - command executed successfully with root 2021-05-13 11:27:01,662 - root - INFO - command executed successfully with root 2021-05-13 11:27:01,722 - root - INFO - 172.23.100.17 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 11:27:01,775 - root - INFO - Running systemd start command on this server 2021-05-13 11:27:01,775 - root - INFO - running command.raw on 172.23.100.17: systemctl start couchbase-server.service 2021-05-13 11:27:01,793 - root - INFO - command executed successfully with root 2021-05-13 11:27:02,282 - root - INFO - command executed successfully with root 2021-05-13 11:27:02,302 - root - INFO - 172.23.100.18 **** The linux version file /opt/couchbase/ VERSION.txt exists 2021-05-13 
11:27:02,355 - root - INFO - Running systemd start command on this server 2021-05-13 11:27:02,356 - root - INFO - running command.raw on 172.23.100.18: systemctl start couchbase-server.service 2021-05-13 11:27:02,373 - root - INFO - command executed successfully with root 2021-05-13 11:27:08,948 - root - INFO - fts_query_limit set to 10000000 on 172.23.100.16 2021-05-13 11:27:08,950 - root - ERROR - socket error while connecting to http://172.23.100.16:8091/nodes/self error [Errno 111] Connection refused 2021-05-13 11:27:09,993 - root - INFO - fts_query_limit set to 10000000 on 172.23.100.19 2021-05-13 11:27:10,787 - root - INFO - fts_query_limit set to 10000000 on 172.23.100.20 2021-05-13 11:27:10,788 - root - ERROR - socket error while connecting to http://172.23.100.20:8091/nodes/self error [Errno 111] Connection refused 2021-05-13 11:27:11,003 - root - INFO - Setting KV memory quota as 2147 MB on 172.23.100.19 2021-05-13 11:27:11,004 - root - INFO - pools/default params : memoryQuota=2147 2021-05-13 11:27:11,010 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv']) 2021-05-13 11:27:11,010 - root - INFO - /node/controller/setupServices params on 172.23.100.19: 8091:hostname=None&user=Administrator&password=password&services=kv 2021-05-13 11:27:11,038 - root - INFO - --> in init_cluster...Administrator,password,8091 2021-05-13 11:27:11,039 - root - INFO - settings/web params on 172.23.100.19:8091:port=8091&username=Administrator&password=password 2021-05-13 11:27:11,101 - root - INFO - --> status:True 2021-05-13 11:27:11,101 - root - INFO - Done with init on 172.23.100.19. 2021-05-13 11:27:11,101 - root - INFO - running command.raw on 172.23.100.19: ls -td /tmp/couchbase*.rpm | awk 'NR>2' | xargs rm -f 2021-05-13 11:27:11,124 - root - INFO - command executed successfully with root 2021-05-13 11:27:11,124 - root - INFO - Done with cleanup on 172.23.100.19. 2021-05-13 11:27:11,803 - root - INFO - fts_query_limit set to 10000000 on 172.23.100.17 2021-05-13 11:27:11,804 - root - ERROR - socket error while connecting to http://172.23.100.17:8091/nodes/self error [Errno 111] Connection refused 2021-05-13 11:27:12,383 - root - INFO - fts_query_limit set to 10000000 on 172.23.100.18 2021-05-13 11:27:12,963 - root - INFO - Setting KV memory quota as 2147 MB on 172.23.100.16 2021-05-13 11:27:12,963 - root - INFO - pools/default params : memoryQuota=2147 2021-05-13 11:27:12,970 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv']) 2021-05-13 11:27:12,970 - root - INFO - /node/controller/setupServices params on 172.23.100.16: 8091:hostname=None&user=Administrator&password=password&services=kv 2021-05-13 11:27:12,999 - root - INFO - --> in init_cluster...Administrator,password,8091 2021-05-13 11:27:12,999 - root - INFO - settings/web params on 172.23.100.16:8091:port=8091&username=Administrator&password=password 2021-05-13 11:27:13,060 - root - INFO - --> status:True 2021-05-13 11:27:13,060 - root - INFO - Done with init on 172.23.100.16. 2021-05-13 11:27:13,060 - root - INFO - running command.raw on 172.23.100.16: ls -td /tmp/couchbase*.rpm | awk 'NR>2' | xargs rm -f 2021-05-13 11:27:13,079 - root - INFO - command executed successfully with root 2021-05-13 11:27:13,079 - root - INFO - Done with cleanup on 172.23.100.16. 
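The fts_query_limit=10000000 extra-install parameter is applied per node by the commands interleaved above: node-init, a sed edit that exports CBFT_ENV_OPTIONS from the couchbase-server launch script, then a service restart. As a consolidated sketch (NODE_IP is a placeholder, repeated for every node in the ini):

    NODE_IP=172.23.100.18
    /opt/couchbase/bin/couchbase-cli node-init -c "$NODE_IP" -u Administrator -p password
    sed -i 's/export PATH/export PATH\nexport CBFT_ENV_OPTIONS=bleveMaxResultWindow=10000000/' \
        /opt/couchbase/bin/couchbase-server
    grep -q bleveMaxResultWindow=10000000 /opt/couchbase/bin/couchbase-server && echo "limit injected"
    systemctl stop couchbase-server.service
    systemctl start couchbase-server.service    # restart so cbft picks up the new env var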
2021-05-13 11:27:13,393 - root - INFO - Setting KV memory quota as 2147 MB on 172.23.100.18 2021-05-13 11:27:13,394 - root - INFO - pools/default params : memoryQuota=2147 2021-05-13 11:27:13,400 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv']) 2021-05-13 11:27:13,400 - root - INFO - /node/controller/setupServices params on 172.23.100.18: 8091:hostname=None&user=Administrator&password=password&services=kv 2021-05-13 11:27:13,429 - root - INFO - --> in init_cluster...Administrator,password,8091 2021-05-13 11:27:13,429 - root - INFO - settings/web params on 172.23.100.18:8091:port=8091&username=Administrator&password=password 2021-05-13 11:27:13,489 - root - INFO - --> status:True 2021-05-13 11:27:13,489 - root - INFO - Done with init on 172.23.100.18. 2021-05-13 11:27:13,489 - root - INFO - running command.raw on 172.23.100.18: ls -td /tmp/couchbase*.rpm | awk 'NR>2' | xargs rm -f 2021-05-13 11:27:13,513 - root - INFO - command executed successfully with root 2021-05-13 11:27:13,513 - root - INFO - Done with cleanup on 172.23.100.18. 2021-05-13 11:27:14,801 - root - INFO - Setting KV memory quota as 2147 MB on 172.23.100.20 2021-05-13 11:27:14,801 - root - INFO - pools/default params : memoryQuota=2147 2021-05-13 11:27:14,808 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv']) 2021-05-13 11:27:14,808 - root - INFO - /node/controller/setupServices params on 172.23.100.20: 8091:hostname=None&user=Administrator&password=password&services=kv 2021-05-13 11:27:14,836 - root - INFO - --> in init_cluster...Administrator,password,8091 2021-05-13 11:27:14,836 - root - INFO - settings/web params on 172.23.100.20:8091:port=8091&username=Administrator&password=password 2021-05-13 11:27:14,896 - root - INFO - --> status:True 2021-05-13 11:27:14,896 - root - INFO - Done with init on 172.23.100.20. 2021-05-13 11:27:14,896 - root - INFO - running command.raw on 172.23.100.20: ls -td /tmp/couchbase*.rpm | awk 'NR>2' | xargs rm -f 2021-05-13 11:27:14,920 - root - INFO - command executed successfully with root 2021-05-13 11:27:14,920 - root - INFO - Done with cleanup on 172.23.100.20. 2021-05-13 11:27:15,818 - root - INFO - Setting KV memory quota as 2147 MB on 172.23.100.17 2021-05-13 11:27:15,818 - root - INFO - pools/default params : memoryQuota=2147 2021-05-13 11:27:15,824 - root - INFO - --> init_node_services(Administrator,password,None,8091,['kv']) 2021-05-13 11:27:15,824 - root - INFO - /node/controller/setupServices params on 172.23.100.17: 8091:hostname=None&user=Administrator&password=password&services=kv 2021-05-13 11:27:15,853 - root - INFO - --> in init_cluster...Administrator,password,8091 2021-05-13 11:27:15,853 - root - INFO - settings/web params on 172.23.100.17:8091:port=8091&username=Administrator&password=password 2021-05-13 11:27:15,912 - root - INFO - --> status:True 2021-05-13 11:27:15,912 - root - INFO - Done with init on 172.23.100.17. 2021-05-13 11:27:15,912 - root - INFO - running command.raw on 172.23.100.17: ls -td /tmp/couchbase*.rpm | awk 'NR>2' | xargs rm -f 2021-05-13 11:27:15,932 - root - INFO - command executed successfully with root 2021-05-13 11:27:15,932 - root - INFO - Done with cleanup on 172.23.100.17. 
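The init entries above ("pools/default params", "setupServices params", "settings/web params") map onto three REST calls per node. The curl forms below are an approximation of what the installer's rest_client issues, not its literal code; on a freshly installed node none of these calls require authentication until settings/web sets the admin credentials:

    NODE=172.23.100.20      # placeholder node from the cluster above
    # 1. KV memory quota (MB)
    curl -s -X POST "http://$NODE:8091/pools/default" -d 'memoryQuota=2147'
    # 2. Services for this node (the installer also passes user/password fields)
    curl -s -X POST "http://$NODE:8091/node/controller/setupServices" -d 'services=kv'
    # 3. Admin credentials and web port; subsequent calls need -u Administrator:password
    curl -s -X POST "http://$NODE:8091/settings/web" \
         -d 'port=8091&username=Administrator&password=password'
    # Afterwards the installer prunes old packages in /tmp, keeping the two newest:
    ls -td /tmp/couchbase*.rpm | awk 'NR>2' | xargs rm -f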
2021-05-13 11:27:31,736 - root - INFO - ---------------------------------------------------------------------------------------------------- 2021-05-13 11:27:31,759 - root - INFO - cluster:C1 node:172.23.100.18:8091 version:7.0.0-5127-enterprise aFamily:inet services:['kv'] 2021-05-13 11:27:31,759 - root - INFO - cluster:C2 node:172.23.100.19:8091 version:7.0.0-5127-enterprise aFamily:inet services:['kv'] 2021-05-13 11:27:31,759 - root - INFO - cluster:C3 node:172.23.100.17:8091 version:7.0.0-5127-enterprise aFamily:inet services:['kv'] 2021-05-13 11:27:31,759 - root - INFO - cluster:C4 node:172.23.100.16:8091 version:7.0.0-5127-enterprise aFamily:inet services:['kv'] 2021-05-13 11:27:31,759 - root - INFO - cluster:C5 node:172.23.100.20:8091 version:7.0.0-5127-enterprise aFamily:inet services:['kv'] 2021-05-13 11:27:31,759 - root - INFO - ---------------------------------------------------------------------------------------------------- 2021-05-13 11:27:31,759 - root - INFO - ---------------------------------------------------------------------------------------------------- 2021-05-13 11:27:31,759 - root - INFO - ---------------------------------------------------------------------------------------------------- 2021-05-13 11:27:31,759 - root - INFO - INSTALL COMPLETED ON: 172.23.100.18 2021-05-13 11:27:31,759 - root - INFO - INSTALL COMPLETED ON: 172.23.100.19 2021-05-13 11:27:31,759 - root - INFO - INSTALL COMPLETED ON: 172.23.100.17 2021-05-13 11:27:31,759 - root - INFO - INSTALL COMPLETED ON: 172.23.100.16 2021-05-13 11:27:31,759 - root - INFO - INSTALL COMPLETED ON: 172.23.100.20 2021-05-13 11:27:31,759 - root - INFO - ---------------------------------------------------------------------------------------------------- 2021-05-13 11:27:31,759 - root - INFO - TOTAL INSTALL TIME = 288 seconds success INFO:root:SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 INFO:root:SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 INFO:root:SSH Connected to 172.23.100.19 as root INFO:root:SSH Connected to 172.23.100.18 as root INFO:root:SSH Connected to 172.23.100.16 as root INFO:root:SSH Connected to 172.23.100.17 as root INFO:root:SSH Connected to 172.23.100.20 as root INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:os_distro: CentOS, os_version: centos 7, is_linux_distro: True INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.17: iptables -F INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.20: iptables -F INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.19: iptables -F INFO:root:extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.16: iptables -F INFO:root:command executed successfully with root INFO:root:extract_remote_info-->distribution_type: CentOS, 
distribution_version: centos 7 INFO:root:running command.raw on 172.23.100.18: iptables -F INFO:root:command executed successfully with root INFO:root:command executed successfully with root INFO:root:command executed successfully with root INFO:root:command executed successfully with root 172.23.100.20 172.23.100.17 172.23.100.19 172.23.100.16 172.23.100.18 Need to set ALLOW_HTP back to True to do git pull branch Submodule 'java_sdk_client' (https://github.com/couchbaselabs/java_sdk_client) registered for path 'java_sdk_client' Cloning into 'java_sdk_client'... Submodule path 'java_sdk_client': checked out '8ae805783999ba5069f66511449402eda9d73931' Loaded plugins: fastestmirror Loading mirror speeds from cached hostfile * base: centos.s.uw.edu * extras: mirror.keystealth.org * updates: repos.lax.quadranet.com Package maven-3.0.5-17.el7.noarch already installed and latest version Nothing to do find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_64’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_65’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_66’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_67’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_68’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_69’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_70’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_71’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_72’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_73’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_74’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_75’: No such file or directory find: ‘/data/workspace/debian10-p0-os_certify-vset00-00-fts/logs/testrunner-21-Apr-12_07-05-52/test_76’: No such file or directory find: ‘/root/workspace/*/logs/*’: No such file or directory find: ‘/root/workspace/’: No such file or directory python3: no process found python3 testrunner.py -i /tmp/testexec.69218.ini -c fts/py-fts-movingtopology.conf -p get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,GROUP=P1,index_type=scorch,fts_quota=990,get-cbcollect-info=True -d failed=http://qa.sc.couchbase.com/job/test_suite_executor/344262/ INFO:root:__main__ INFO:__main__:TestRunner: parsing args... INFO:__main__:Checking arguments... 
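[editor's note] The testrunner.py invocation above passes one comma-separated -p string in which get-cbcollect-info appears twice (False, then True); the parsed test input params printed just below show the later value winning. A rough reconstruction of that parsing, assuming a simple last-key-wins split on ',' and '=':

    def parse_params(p):
        """Parse a testrunner -p string like 'a=1,b=2,a=3' into a dict (last key wins)."""
        out = {}
        for pair in p.split(","):
            if "=" in pair:
                key, value = pair.split("=", 1)
                out[key] = value
        return out

    params = parse_params("get-cbcollect-info=False,disable_HTP=True,get-logs=False,"
                          "stop-on-failure=False,GROUP=P1,index_type=scorch,"
                          "fts_quota=990,get-cbcollect-info=True")
    assert params["get-cbcollect-info"] == "True"   # duplicate key: the later value wins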
INFO:__main__:Conf filename: fts/py-fts-movingtopology.conf INFO:__main__:Test prefix: fts.moving_topology_fts.MovingTopFTS INFO:__main__:Downloading http://qa.sc.couchbase.com/job/test_suite_executor/344262//testReport/api/xml?pretty=true to logs/test_suite_executor_344262__testresult.xml INFO:__main__:Loading result data from logs/test_suite_executor_344262__testresult.xml INFO:__main__:-- logs/test_suite_executor_344262__testresult.xml -- INFO:__main__:TestRunner: start... INFO:__main__:Global Test input params: INFO:__main__: Number of tests initially selected before GROUP filters: 2 INFO:__main__:--> Running test: fts.moving_topology_fts.MovingTopFTS.update_index_during_failover,items=100000,cluster=D:D+F:D+F,GROUP=P1,get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,index_type=scorch,fts_quota=990 INFO:__main__:Logs folder: /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_1 *** TestRunner *** {'GROUP': 'P1', 'cluster_name': 'testexec.69218', 'conf_file': 'fts/py-fts-movingtopology.conf', 'disable_HTP': 'True', 'fts_quota': '990', 'get-cbcollect-info': 'True', 'get-logs': 'False', 'index_type': 'scorch', 'ini': '/tmp/testexec.69218.ini', 'num_nodes': 5, 'spec': 'py-fts-movingtopology', 'stop-on-failure': 'False'} Only cases in GROUPs 'P1' will be executed Logs will be stored at /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_1 ./testrunner -i /tmp/testexec.69218.ini -p get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,GROUP=P1,index_type=scorch,fts_quota=990,get-cbcollect-info=True -t fts.moving_topology_fts.MovingTopFTS.update_index_during_failover,items=100000,cluster=D:D+F:D+F,GROUP=P1,get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,index_type=scorch,fts_quota=990 Test Input params: {'items': '100000', 'cluster': 'D:D+F:D+F', 'GROUP': 'P1', 'get-cbcollect-info': 'True', 'disable_HTP': 'True', 'get-logs': 'False', 'stop-on-failure': 'False', 'index_type': 'scorch', 'fts_quota': '990', 'ini': '/tmp/testexec.69218.ini', 'cluster_name': 'testexec.69218', 'spec': 'py-fts-movingtopology', 'conf_file': 'fts/py-fts-movingtopology.conf', 'num_nodes': 5, 'case_number': 1, 'logs_folder': '/data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_1'} Run before suite setup for fts.moving_topology_fts.MovingTopFTS.update_index_during_failover suite_setUp (fts.moving_topology_fts.MovingTopFTS) ... -->before_suite_name:fts.moving_topology_fts.MovingTopFTS.suite_setUp,suite: ]> 2021-05-13 11:27:42 | INFO | MainProcess | MainThread | [rest_client.set_fts_ram_quota] SUCCESS: FTS RAM quota set to 990mb 2021-05-13 11:27:42 | INFO | MainProcess | MainThread | [fts_base.setUp] ==== FTSbasetests setup is started for test #1 suite_setUp ==== 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [rest_client.set_fts_ram_quota] SUCCESS: FTS RAM quota set to 990mb 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] removing nodes from cluster ... 
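[editor's note] The test is parameterized with cluster=D:D+F:D+F. Judging from the service assignments later in this log (172.23.100.18 stays kv-only while 172.23.100.19 and 172.23.100.17 are configured with kv,fts), ':' separates nodes and '+' combines services on one node, with D standing for the data (kv) service and F for fts. A small parser built on that reading; the letter-to-service mapping is inferred from this run's output, not taken from the framework code.

    SERVICE_MAP = {"D": "kv", "F": "fts"}   # inferred from this run's log output

    def parse_cluster_spec(spec):
        """'D:D+F:D+F' -> [['kv'], ['kv', 'fts'], ['kv', 'fts']] (one list per node)."""
        return [[SERVICE_MAP[s] for s in node.split("+")] for node in spec.split(":")]

    print(parse_cluster_spec("D:D+F:D+F"))
    # [['kv'], ['kv', 'fts'], ['kv', 'fts']]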
2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] cleanup [ip:172.23.100.18 port:8091 ssh_username:root] 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 172.23.100.18, nothing to delete 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.18:8091 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.18:8091 is running 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] Removing user 'cbadminbucket'... 2021-05-13 11:27:43 | ERROR | MainProcess | MainThread | [rest_client._http_request] DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] b'"User was not found."' 2021-05-13 11:27:43 | INFO | MainProcess | MainThread | [fts_base.init_cluster] Initializing Cluster ... 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.100.18 port:8091 ssh_username:root, nodes/self 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '39', 'memoryTotal': 4201684992, 'memoryFree': 3676450816, 'mcdMemoryReserved': 3205, 'mcdMemoryAllocated': 3205, 'status': 'healthy', 'hostname': '172.23.100.18:8091', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-5127-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 2147, 'moxi': 11211, 'memcached': 11210, 'id': 'ns_1@cb.local', 'ip': '172.23.100.18', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 4007, 'curr_items': 0} 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=2147 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,password,8091 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 172.23.100.18:8091:port=8091&username=Administrator&password=password 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: 
centos 7 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). 2021-05-13 11:27:44 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2021-05-13 11:27:44 | INFO | MainProcess | MainThread | [fts_base.init_cluster] 172.23.100.19 will be configured with services kv,fts 2021-05-13 11:27:44 | INFO | MainProcess | MainThread | [fts_base.init_cluster] 172.23.100.17 will be configured with services kv,fts 2021-05-13 11:27:45 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.100.19:8091 to cluster 2021-05-13 11:27:45 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @172.23.100.19:8091 to this cluster @172.23.100.18:8091 2021-05-13 11:27:55 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] rebalance progress took 10.01 seconds 2021-05-13 11:27:55 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 2021-05-13 11:28:14 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.100.17:8091 to cluster 2021-05-13 11:28:14 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @172.23.100.17:8091 to this cluster @172.23.100.18:8091 2021-05-13 11:28:24 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] rebalance progress took 10.01 seconds 2021-05-13 11:28:24 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 
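[editor's note] Two of the records above are worth calling out: the framework enables /diag/eval for non-local callers by running curl on the node itself (ns_config:set(allow_nonlocal_eval, true).), and then queries cluster_compat_mode:get_compat_version(), which returns [7,0] on this 7.0.0 build. Once that first, node-local call has been made, the same compat query can be issued remotely; a sketch with requests, reusing the credentials shown in the log:

    import requests

    # assumes allow_nonlocal_eval was already set on the node, as the curl record above does
    r = requests.post("http://172.23.100.18:8091/diag/eval",
                      data="cluster_compat_mode:get_compat_version().",
                      auth=("Administrator", "password"))
    print(r.status_code, r.text)   # e.g. 200 [7,0]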
2021-05-13 11:28:41 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} 2021-05-13 11:28:41 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance operation started 2021-05-13 11:28:51 | INFO | MainProcess | Cluster_Thread | [task.check] Rebalance - status: none, progress: 100.00% 2021-05-13 11:28:51 | INFO | MainProcess | Cluster_Thread | [task.check] rebalancing was completed with progress: 100% in 10.018304586410522 sec 2021-05-13 11:28:51 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:28:51 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [remote_util.execute_command_raw] command executed successfully with root 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [fts_base._enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 172.23.100.18 2021-05-13 11:28:52 | ERROR | MainProcess | MainThread | [rest_client._http_request] DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [internal_user.delete_user] Exception while deleting user. 
Exception is -b'"User was not found."' 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [fts_base._set_bleve_max_result_window] updating bleve_max_result_window of node : ip:172.23.100.17 port:8091 ssh_username:root 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [rest_client.set_bleve_max_result_window] {"bleveMaxResultWindow": "100000000"} 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [rest_client.set_bleve_max_result_window] Updated bleveMaxResultWindow 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [fts_base._set_bleve_max_result_window] updating bleve_max_result_window of node : ip:172.23.100.19 port:8091 ssh_username:root 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [rest_client.set_bleve_max_result_window] {"bleveMaxResultWindow": "100000000"} 2021-05-13 11:28:52 | INFO | MainProcess | MainThread | [rest_client.set_bleve_max_result_window] Updated bleveMaxResultWindow 2021-05-13 11:28:52 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.100.18:8091/pools/default/buckets with param: name=default&ramQuotaMB=897&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore 2021-05-13 11:28:52 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.03 seconds to create bucket default 2021-05-13 11:28:52 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:53 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 0 2021-05-13 11:28:53 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:54 | WARNING | MainProcess | 
Cluster_Thread | [task.check] vbucket map not ready after try 1 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:54 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:55 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 2 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:55 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 3 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:55 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 4 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 
to accept set ops 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:55 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:56 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:28:56 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:28:56 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:28:56 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 5 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [fts_base.setUp] ==== FTSbasetests setup is finished for test #1 suite_setUp ==== 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:28:56 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:28:57 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:28:57 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:28:57 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:28:57 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:28:57 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, 
distribution_version: centos 7 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:28:58 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:28:59 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:28:59 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:03 | INFO | MainProcess | MainThread | [moving_topology_fts.suite_setUp] *** MovingTopFTS: suite_setUp() *** 2021-05-13 11:29:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:29:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:29:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:03 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:03 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:29:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:29:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:29:04 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:29:04 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:29:05 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 
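[editor's note] Every node interaction above starts with remote_util.ssh_connect_with_retries, which logs "attempt#1 of 5" and the resulting connection as root. A rough equivalent with paramiko; the retry count is read off the log messages, while the delay between attempts and the helper shape are guesses rather than the framework's actual implementation.

    import time
    import paramiko

    def ssh_connect_with_retries(host, password, username="root", attempts=5, delay=10):
        """Open an SSH session to a node, retrying as the log messages above suggest."""
        client = paramiko.SSHClient()
        client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
        for attempt in range(1, attempts + 1):
            try:
                print(f"SSH Connecting to {host} with username:{username}, "
                      f"attempt#{attempt} of {attempts}")
                client.connect(host, username=username, password=password)
                print(f"SSH Connected to {host} as {username}")
                return client
            except Exception:
                if attempt == attempts:
                    raise
                time.sleep(delay)

    # usage (password placeholder is hypothetical):
    # client = ssh_connect_with_retries("172.23.100.18", password="<root password>")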
2021-05-13 11:29:06 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:06 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:10 | INFO | MainProcess | MainThread | [fts_base.tearDown] ==== FTSbasetests cleanup is started for test #1 suite_setUp ==== 2021-05-13 11:29:10 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] removing nodes from cluster ... 2021-05-13 11:29:10 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] cleanup [ip:172.23.100.18 port:8091 ssh_username:root, ip:172.23.100.19 port:8091 ssh_username:root, ip:172.23.100.17 port:8091 ssh_username:root] 2021-05-13 11:29:10 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 172.23.100.18 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete.... 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [rest_client.bucket_exists] node 172.23.100.18 existing buckets : [] 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 172.23.100.18 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [cluster_helper.cleanup_cluster] rebalancing all nodes in order to remove nodes 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [rest_client.rebalance] rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.19', 'user': 'Administrator', 'password': 'password'} 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [rest_client.rebalance] rebalance operation started 2021-05-13 11:29:11 | INFO | MainProcess | MainThread | [rest_client._rebalance_status_and_progress] rebalance percentage : 0.00 % 2021-05-13 11:29:21 | INFO | MainProcess | MainThread | [rest_client._rebalance_status_and_progress] rebalance percentage : 66.00 % 2021-05-13 11:29:41 | INFO | MainProcess | MainThread | [rest_client.monitorRebalance] rebalance progress took 30.04 seconds 2021-05-13 11:29:41 | INFO | MainProcess | MainThread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.cleanup_cluster] removed all the nodes from cluster associated with ip:172.23.100.18 port:8091 ssh_username:root ? [('ns_1@172.23.100.17', 8091), ('ns_1@172.23.100.19', 8091)] 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.18:8091 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.18:8091 is running 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 172.23.100.19, nothing to delete 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 
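[editor's note] The cleanup above removes the two extra nodes by rebalancing them out: the rebalance request lists all knownNodes and names the two to eject, then progress is polled until completion (0% to 66% to done, roughly 30 seconds here, plus a 10-second settle). A sketch of that request/poll loop; the /controller/rebalance and /pools/default/rebalanceProgress paths are the usual ns_server endpoints behind the rest_client helpers and are an assumption here, since the log prints only the parameters.

    import time
    import requests

    auth = ("Administrator", "password")
    orch = "http://172.23.100.18:8091"

    # eject .17 and .19, keeping .18 (values copied from the rebalance params above)
    requests.post(f"{orch}/controller/rebalance", auth=auth, data={
        "knownNodes": "ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19",
        "ejectedNodes": "ns_1@172.23.100.17,ns_1@172.23.100.19",
    }).raise_for_status()

    # poll until ns_server reports the rebalance is no longer running
    while requests.get(f"{orch}/pools/default/rebalanceProgress",
                       auth=auth).json().get("status") == "running":
        time.sleep(10)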
2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.19:8091 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.19:8091 is running 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 172.23.100.17, nothing to delete 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.17:8091 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.17:8091 is running Cluster instance shutdown with force 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [fts_base.cleanup_cluster] Removing user 'cbadminbucket'... 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [ntonencryptionBase.disable_nton_cluster] Disable up node to node encryption - status = disable and clusterEncryptionLevel = control 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [ntonencryptionBase.change_cluster_encryption_cli] Changing encryption Level - clusterEncryptionLevel = control 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:29:51 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:52 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:52 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli 
setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:29:52 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:29:52 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:29:52 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:53 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:53 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:29:53 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:29:53 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:29:53 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:53 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:54 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:29:54 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:29:54 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:29:54 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:54 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:54 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:29:55 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:29:55 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:29:55 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:55 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:55 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set 
--cluster-encryption-level control 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [ntonencryptionBase.change_cluster_encryption_cli] Output of setting-security command is ["ERROR: clusterEncryptionLevel - Can't set cluster encryption level when cluster encryption is disabled."] 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [ntonencryptionBase.change_cluster_encryption_cli] Error of setting-security command is [] 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [ntonencryptionBase.ntonencryption_cli] Changing node-to-node-encryption to disable 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:56 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:29:57 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:29:57 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:29:57 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:57 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:57 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:29:57 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:29:58 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:29:58 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:29:58 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:58 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:29:58 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:29:59 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:29:59 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 
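[editor's note] The teardown first runs couchbase-cli setting-security --cluster-encryption-level control on every node and gets back "Can't set cluster encryption level when cluster encryption is disabled", then switches node-to-node encryption off anyway; the error is expected here because encryption was never enabled on this cluster. The same pair of commands run locally with subprocess, with the binary path and flags copied from the log records (this is a replay sketch, not the framework's own code path, which goes over SSH):

    import subprocess

    CLI = "/opt/couchbase/bin/couchbase-cli"
    COMMON = ["-c", "http://localhost", "-u", "Administrator", "-p", "password"]

    # fails with "Can't set cluster encryption level when cluster encryption is disabled"
    subprocess.run([CLI, "setting-security", *COMMON,
                    "--set", "--cluster-encryption-level", "control"])

    # turns node-to-node encryption off (logged as SUCCESS below)
    subprocess.run([CLI, "node-to-node-encryption", *COMMON, "--disable"])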
2021-05-13 11:29:59 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:29:59 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:29:59 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [ntonencryptionBase.ntonencryption_cli] Output of node-to-node-encryption command is ['Turned off encryption for node: http://[::1]:8091', 'SUCCESS: Switched node-to-node encryption off'] 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [ntonencryptionBase.ntonencryption_cli] Error of node-to-node-encryption command is [] 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [fts_base.tearDown] ==== FTSbasetests cleanup is finished for test #1 suite_setUp === 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [fts_base.tearDown] closing all ssh connections 2021-05-13 11:30:00 | INFO | MainProcess | MainThread | [fts_base.tearDown] closing all memcached connections ok ---------------------------------------------------------------------- Ran 1 test in 137.851s OK update_index_during_failover (fts.moving_topology_fts.MovingTopFTS) ... Cluster instance shutdown with force -->result: 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [rest_client.set_fts_ram_quota] SUCCESS: FTS RAM quota set to 990mb 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [fts_base.setUp] ==== FTSbasetests setup is started for test #1 update_index_during_failover ==== 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [rest_client.set_fts_ram_quota] SUCCESS: FTS RAM quota set to 990mb 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] removing nodes from cluster ... 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] cleanup [ip:172.23.100.18 port:8091 ssh_username:root] 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 172.23.100.18, nothing to delete 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.18:8091 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 
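[editor's note] With suite_setUp passed (Ran 1 test in 137.851s, OK), the per-test setup for update_index_during_failover again pins the FTS RAM quota to 990 MB, matching the fts_quota=990 test parameter. rest_client.set_fts_ram_quota plausibly maps to the same pools/default settings call used for the KV quota, just with the ftsMemoryQuota key; that key name is the standard ns_server one and is an assumption here, since the log only prints the SUCCESS message.

    import requests

    def set_fts_ram_quota(host, quota_mb, auth=("Administrator", "password")):
        """Set the FTS service memory quota, as fts_base.setUp does with fts_quota=990."""
        r = requests.post(f"http://{host}:8091/pools/default",
                          data={"ftsMemoryQuota": quota_mb}, auth=auth)
        r.raise_for_status()

    set_fts_ram_quota("172.23.100.18", 990)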
2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.18:8091 is running 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] Removing user 'cbadminbucket'... 2021-05-13 11:30:00 | ERROR | MainProcess | test_thread | [rest_client._http_request] DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] b'"User was not found."' 2021-05-13 11:30:00 | INFO | MainProcess | test_thread | [fts_base.init_cluster] Initializing Cluster ... 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.100.18 port:8091 ssh_username:root, nodes/self 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [task.execute] {'uptime': '177', 'memoryTotal': 4201684992, 'memoryFree': 3610521600, 'mcdMemoryReserved': 3205, 'mcdMemoryAllocated': 3205, 'status': 'healthy', 'hostname': '172.23.100.18:8091', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-5127-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 2147, 'moxi': 11211, 'memcached': 11210, 'id': 'ns_1@172.23.100.18', 'ip': '172.23.100.18', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 4007, 'curr_items': 0} 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=2147 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> in init_cluster...Administrator,password,8091 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 172.23.100.18:8091:port=8091&username=Administrator&password=password 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] --> status:True 2021-05-13 11:30:01 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
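[editor's note] The 404 logged above on DELETE /settings/rbac/users/local/cbadminbucket is benign: cleanup always tries to remove the cbadminbucket internal user, whether or not an earlier test created it, and treats "User was not found." as already-clean. A sketch of that idempotent delete, using the URL and credentials shown in the error record:

    import requests

    def delete_internal_user(host, user_id="cbadminbucket",
                             auth=("Administrator", "password")):
        """Delete a local RBAC user, ignoring 404 (user never existed), as cleanup does."""
        r = requests.delete(
            f"http://{host}:8091/settings/rbac/users/local/{user_id}", auth=auth)
        if r.status_code == 404:
            print(r.text)      # e.g. "User was not found."
            return False       # nothing to remove
        r.raise_for_status()
        return True

    delete_internal_user("172.23.100.18")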
2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully with root 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). 2021-05-13 11:30:02 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2021-05-13 11:30:02 | INFO | MainProcess | test_thread | [fts_base.init_cluster] 172.23.100.19 will be configured with services kv,fts 2021-05-13 11:30:02 | INFO | MainProcess | test_thread | [fts_base.init_cluster] 172.23.100.17 will be configured with services kv,fts 2021-05-13 11:30:03 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.100.19:8091 to cluster 2021-05-13 11:30:03 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @172.23.100.19:8091 to this cluster @172.23.100.18:8091 2021-05-13 11:30:13 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] rebalance progress took 10.01 seconds 2021-05-13 11:30:13 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 2021-05-13 11:30:31 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 172.23.100.17:8091 to cluster 2021-05-13 11:30:31 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @172.23.100.17:8091 to this cluster @172.23.100.18:8091 2021-05-13 11:30:41 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] rebalance progress took 10.01 seconds 2021-05-13 11:30:41 | INFO | MainProcess | Cluster_Thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 
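[editor's note] Nodes are joined one at a time: rest_client.add_node registers 172.23.100.19 and then 172.23.100.17 against the orchestrator at 172.23.100.18, each followed by a rebalance-progress wait and a 10-second settle, before the final rebalance activates them. Behind add_node sits the /controller/addNode endpoint; that path is an assumption here, since the log shows only the helper name. A sketch that adds a kv+fts node the way fts_base.init_cluster configures .19 and .17:

    import requests

    def add_node(orchestrator, new_node, services="kv,fts",
                 auth=("Administrator", "password")):
        """Join new_node to the cluster at orchestrator (rebalance is a separate call)."""
        r = requests.post(f"http://{orchestrator}:8091/controller/addNode",
                          data={"hostname": new_node,
                                "user": auth[0],
                                "password": auth[1],
                                "services": services},
                          auth=auth)
        r.raise_for_status()

    add_node("172.23.100.18", "172.23.100.19")
    add_node("172.23.100.18", "172.23.100.17")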
2021-05-13 11:30:58 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} 2021-05-13 11:30:58 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance operation started 2021-05-13 11:31:08 | INFO | MainProcess | Cluster_Thread | [task.check] Rebalance - status: none, progress: 100.00% 2021-05-13 11:31:08 | INFO | MainProcess | Cluster_Thread | [task.check] rebalancing was completed with progress: 100% in 10.017857313156128 sec 2021-05-13 11:31:08 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:31:08 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [remote_util.enable_diag_eval_on_non_local_hosts] ['ok'] 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [fts_base._enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 172.23.100.18 2021-05-13 11:31:09 | ERROR | MainProcess | test_thread | [rest_client._http_request] DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [internal_user.delete_user] Exception while deleting user. 
Exception is -b'"User was not found."' 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [fts_base._set_bleve_max_result_window] updating bleve_max_result_window of node : ip:172.23.100.17 port:8091 ssh_username:root 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [rest_client.set_bleve_max_result_window] {"bleveMaxResultWindow": "100000000"} 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [rest_client.set_bleve_max_result_window] Updated bleveMaxResultWindow 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [fts_base._set_bleve_max_result_window] updating bleve_max_result_window of node : ip:172.23.100.19 port:8091 ssh_username:root 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [rest_client.set_bleve_max_result_window] {"bleveMaxResultWindow": "100000000"} 2021-05-13 11:31:09 | INFO | MainProcess | test_thread | [rest_client.set_bleve_max_result_window] Updated bleveMaxResultWindow 2021-05-13 11:31:09 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.100.18:8091/pools/default/buckets with param: name=default&ramQuotaMB=897&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore 2021-05-13 11:31:09 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.03 seconds to create bucket default 2021-05-13 11:31:09 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:31:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:10 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:11 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 0 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:11 | WARNING | 
MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 1 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:11 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:12 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 2 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:12 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 3 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 172.23.100.18 to accept set ops 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:12 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 4 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : default in 
172.23.100.18 to accept set ops 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:12 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:13 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:13 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:13 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:13 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:13 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 5 2021-05-13 11:31:13 | INFO | MainProcess | test_thread | [fts_base.setUp] ==== FTSbasetests setup is finished for test #1 update_index_during_failover ==== 2021-05-13 11:31:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:31:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:31:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:31:13 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:31:13 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:31:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:31:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:31:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:31:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:31:14 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:31:14 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] 
extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:31:15 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:31:16 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:31:16 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:31:28 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.100.17:11210 default 2021-05-13 11:31:28 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.100.18:11210 default 2021-05-13 11:31:28 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 172.23.100.19:11210 default 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [fts_base.load_data] Loading phase complete! 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [fts_base.create] Checking if index already exists ... 2021-05-13 11:31:49 | ERROR | MainProcess | test_thread | [rest_client._http_request] GET http://172.23.100.19:8094/api/index/default_index_1 body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: rest_auth: preparePerms, err: index not found b'{"error":"rest_auth: preparePerms, err: index not found","request":"","status":"fail"}\n' auth: Administrator:password 2021-05-13 11:31:49 | ERROR | MainProcess | test_thread | [rest_client._http_request] DELETE http://172.23.100.19:8094/api/index/default_index_1 body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: rest_auth: preparePerms, err: index not found b'{"error":"rest_auth: preparePerms, err: index not found","request":"","status":"fail"}\n' auth: Administrator:password 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [fts_base.create] Creating fulltext-index default_index_1 on 172.23.100.19 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] {"type": "fulltext-index", "name": "default_index_1", "uuid": "", "params": {"store": {"kvStoreName": "mossStore", "mossStoreOptions": {}, "indexType": "scorch"}}, "sourceType": "couchbase", "sourceName": "default", "sourceUUID": "", "planParams": {"numReplicas": 0, "maxPartitionsPerPIndex": 171, "indexPartitions": 1}, "sourceParams": {}} 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [rest_client.create_fts_index] Index default_index_1 created 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validating index distribution for default_index_1 ... 2021-05-13 11:31:49 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 5 secs. No pindexes found, waiting for index to get created ... 
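create_fts_index pushes the JSON definition shown above to the FTS service. A minimal sketch of the equivalent call, assuming the cbft endpoint PUT /api/index/{name} on port 8094 (the 400 "index not found" errors just before it are the expected result of the pre-create existence check):

import json
import requests

INDEX_DEF = {
    "type": "fulltext-index",
    "name": "default_index_1",
    "sourceType": "couchbase",
    "sourceName": "default",
    "planParams": {"numReplicas": 0, "maxPartitionsPerPIndex": 171, "indexPartitions": 1},
    "params": {"store": {"indexType": "scorch"}},
}

def create_fts_index(fts_node="172.23.100.19", auth=("Administrator", "password")):
    r = requests.put(f"http://{fts_node}:8094/api/index/{INDEX_DEF['name']}",
                     headers={"Content-Type": "application/json"},
                     data=json.dumps(INDEX_DEF),
                     auth=auth)
    r.raise_for_status()   # body is {"status": "ok"} on success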
2021-05-13 11:31:54 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Number of PIndexes = 1 2021-05-13 11:31:54 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Every pIndex serves 1024 partitions or lesser 2021-05-13 11:31:54 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Expecting num of partitions in each node in range -512-1024 2021-05-13 11:31:54 | INFO | MainProcess | test_thread | [fts_base.is_index_partitioned_balanced] Validated: Node cb0bd236fc11a1cb49c54d1faebf9fac houses 1 pindexes which serve 1024 partitions 2021-05-13 11:31:54 | INFO | MainProcess | test_thread | [fts_base.sleep] sleep for 10 secs. ... 2021-05-13 11:32:04 | INFO | MainProcess | test_thread | [moving_topology_fts.update_index_during_failover] Index building has begun... 2021-05-13 11:32:04 | INFO | MainProcess | test_thread | [moving_topology_fts.update_index_during_failover] Index count for default_index_1: 39468 2021-05-13 11:32:04 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:05 | INFO | MainProcess | test_thread | [fts_base.wait_for_indexing_complete] Docs in bucket = 100000, docs in FTS index 'default_index_1': 39468 2021-05-13 11:32:11 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:11 | INFO | MainProcess | test_thread | [fts_base.wait_for_indexing_complete] Docs in bucket = 100000, docs in FTS index 'default_index_1': 56036 2021-05-13 11:32:11 | INFO | MainProcess | failover | [fts_base.__async_failover] Starting failover for nodes:[ip:172.23.100.17 port:8091 ssh_username:root] at C1 cluster 172.23.100.18 2021-05-13 11:32:11 | INFO | MainProcess | test_thread | [moving_topology_fts.update_index_during_failover] {'type': 'fulltext-index', 'name': 'default_index_1', 'uuid': '878b6d673471de86', 'sourceType': 'gocbcore', 'sourceName': 'default', 'sourceUUID': '8e67b2e483f75aeb39cbe641fdd47e14', 'planParams': {'maxPartitionsPerPIndex': 1024, 'indexPartitions': 1}, 'params': {'doc_config': {'docid_prefix_delim': '', 'docid_regexp': '', 'mode': 'type_field', 'type_field': 'type'}, 'mapping': {'analysis': {}, 'default_analyzer': 'standard', 'default_datetime_parser': 'dateTimeOptional', 'default_field': '_all', 'default_mapping': {'dynamic': True, 'enabled': True}, 'default_type': '_default', 'docvalues_dynamic': True, 'index_dynamic': True, 'store_dynamic': False, 'type_field': '_type'}, 'store': {'indexType': 'scorch', 'mossStoreOptions': {}, 'segmentVersion': 15}}, 'sourceParams': {}} 2021-05-13 11:32:11 | INFO | MainProcess | update_index | [fts_base.update] Updating fulltext-index default_index_1 on 172.23.100.17 2021-05-13 11:32:11 | INFO | MainProcess | update_index | [rest_client.update_fts_index] { "type": "fulltext-index", "name": "default_index_1", "uuid": "878b6d673471de86", "params": { "store": { "kvStoreName": "mossStore", "mossStoreOptions": {}, "indexType": "scorch" } }, "sourceType": "couchbase", "sourceName": "default", "sourceUUID": "", "planParams": { "numReplicas": 0, "maxPartitionsPerPIndex": 64, "indexPartitions": 1 }, "sourceParams": {} } 2021-05-13 11:32:11 | INFO | MainProcess | update_index | [rest_client.update_fts_index] Index/alias default_index_1 updated 2021-05-13 11:32:12 | INFO | MainProcess | Cluster_Thread | [task._failover_nodes] Failing 
over 172.23.100.17:8091 with graceful=False 2021-05-13 11:32:12 | INFO | MainProcess | Cluster_Thread | [rest_client.fail_over] fail_over node ns_1@172.23.100.17 successful 2021-05-13 11:32:12 | INFO | MainProcess | Cluster_Thread | [task.execute] 0 seconds sleep after failover, for nodes to go pending.... 2021-05-13 11:32:12 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:12 | INFO | MainProcess | test_thread | [fts_base.run_fts_query] Running query {"indexName": "default_index_1", "size": 10000000, "from": 0, "explain": false, "query": {"match": "emp", "field": "type"}, "fields": [], "ctl": {"consistency": {"level": "", "vectors": {}}, "timeout": 60000}} on node: 172.23.100.19: 2021-05-13 11:32:12 | ERROR | MainProcess | test_thread | [rest_client._http_request] POST http://172.23.100.19:8094/api/index/default_index_1/query body: b'{"indexName": "default_index_1", "size": 10000000, "from": 0, "explain": false, "query": {"match": "emp", "field": "type"}, "fields": [], "ctl": {"consistency": {"level": "", "vectors": {}}, "timeout": 60000}}' headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: rest_index: Query, indexName: default_index_1, err: pindex not available b'{"error":"rest_index: Query, indexName: default_index_1, err: pindex not available","request":{"ctl":{"consistency":{"level":"","vectors":{}},"timeout":60000},"explain":false,"fields":[],"from":0,"indexName":"default_index_1","query":{"field":"type","match":"emp"},"size":10000000},"status":"fail"}\n' auth: Administrator:password 2021-05-13 11:32:12 | INFO | MainProcess | test_thread | [moving_topology_fts.update_index_during_failover] Hits: -1 2021-05-13 11:32:12 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:12 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:13 | INFO | MainProcess | test_thread | [fts_base.wait_for_indexing_complete] Docs in bucket = 66587, docs in FTS index 'default_index_1': 0 2021-05-13 11:32:19 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:19 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:19 | INFO | MainProcess | test_thread | [fts_base.wait_for_indexing_complete] Docs in bucket = 66587, docs in FTS index 'default_index_1': 84900 2021-05-13 11:32:25 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:25 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:25 | INFO | MainProcess | test_thread | [fts_base.wait_for_indexing_complete] Docs in bucket = 100000, docs in FTS index 'default_index_1': 100000 2021-05-13 11:32:25 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:25 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:26 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] 
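The graceful=False failover in the entry above maps to ns_server's hard-failover endpoint. A minimal sketch, assuming POST /controller/failOver with the otpNode shown in the log (a graceful failover would go through /controller/startGracefulFailover instead):

import requests

def hard_failover(orchestrator="172.23.100.18", otp_node="ns_1@172.23.100.17",
                  auth=("Administrator", "password")):
    # Hard failover of a single node, as in "Failing over 172.23.100.17:8091 with graceful=False".
    r = requests.post(f"http://{orchestrator}:8091/controller/failOver",
                      data={"otpNode": otp_node}, auth=auth)
    r.raise_for_status()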
http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:26 | INFO | MainProcess | test_thread | [fts_base.validate_index_count] Docs in index default_index_1=100000, bucket docs=100000 2021-05-13 11:32:26 | INFO | MainProcess | test_thread | [rest_client.fetch_bucket_stats] http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 2021-05-13 11:32:26 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:26 | INFO | MainProcess | test_thread | [fts_base.run_fts_query] Running query {"indexName": "default_index_1", "size": 10000000, "from": 0, "explain": false, "query": {"match": "emp", "field": "type"}, "fields": [], "ctl": {"consistency": {"level": "", "vectors": {}}, "timeout": 60000}} on node: 172.23.100.19: 2021-05-13 11:32:27 | INFO | MainProcess | test_thread | [fts_base.execute_query] SUCCESS! Expected hits: 100000, fts returned: 100000 2021-05-13 11:32:27 | INFO | MainProcess | test_thread | [moving_topology_fts.update_index_during_failover] Hits: 100000 2021-05-13 11:32:27 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:32:27 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:32:27 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:32:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] 
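run_fts_query posts the JSON request shown above to the index's query endpoint on port 8094 and reads the hit count from the response. A sketch with the same request body; total_hits is the field the SUCCESS line compares against, and the test records -1 when the request itself fails, as with the earlier "pindex not available" 400:

import requests

QUERY = {
    "indexName": "default_index_1",
    "size": 10000000,
    "from": 0,
    "explain": False,
    "query": {"match": "emp", "field": "type"},
    "fields": [],
    "ctl": {"consistency": {"level": "", "vectors": {}}, "timeout": 60000},
}

def run_fts_query(fts_node="172.23.100.19", auth=("Administrator", "password")):
    r = requests.post(f"http://{fts_node}:8094/api/index/default_index_1/query",
                      json=QUERY, auth=auth)
    r.raise_for_status()
    return r.json()["total_hits"]   # 100000 expected once indexing has caught up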
extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:32:29 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:32:30 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:32:30 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:32:30 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [fts_base.tearDown] ==== FTSbasetests cleanup is started for test #1 update_index_during_failover ==== 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [fts_base.delete] Deleting fulltext-index default_index_1 on 172.23.100.19 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [rest_client.get_nodes] Node 172.23.100.17 not part of cluster inactiveFailed 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 172.23.100.19: ls /opt/couchbase/var/lib/couchbase/data/@fts |grep ^default_index_1 | wc -l 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully with root 2021-05-13 11:32:35 | INFO | MainProcess | test_thread | [fts_base.are_index_files_deleted_from_disk] 0 2021-05-13 11:32:37 | INFO | MainProcess | test_thread | [fts_base.delete] Validated: all index files for default_index_1 deleted from disk 2021-05-13 11:32:37 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] removing nodes from cluster ... 2021-05-13 11:32:37 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] cleanup [ip:172.23.100.18 port:8091 ssh_username:root, ip:172.23.100.19 port:8091 ssh_username:root, ip:172.23.100.17 port:8091 ssh_username:root] 2021-05-13 11:32:38 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets ['default'] on 172.23.100.18 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete.... 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.100.18 existing buckets : [] 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 172.23.100.18 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 
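Teardown deletes the index over the same /api/index endpoint and then confirms no default_index_1 pindex directories are left under the @fts data directory, using the ls | grep | wc -l pipeline shown above. A sketch of both steps, with the shell check run locally for brevity (the testrunner runs it over SSH on the FTS node):

import subprocess
import requests

def delete_index_and_verify(fts_node="172.23.100.19", index="default_index_1",
                            auth=("Administrator", "password")):
    requests.delete(f"http://{fts_node}:8094/api/index/{index}",
                    auth=auth).raise_for_status()
    # Same on-disk check as the log entry above.
    cmd = f"ls /opt/couchbase/var/lib/couchbase/data/@fts | grep ^{index} | wc -l"
    leftover = subprocess.run(cmd, shell=True, capture_output=True, text=True).stdout.strip()
    assert leftover == "0", f"{leftover} index files still on disk"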
2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [cluster_helper.cleanup_cluster] rebalancing all nodes in order to remove nodes 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [rest_client.rebalance] rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.19', 'user': 'Administrator', 'password': 'password'} 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [rest_client.rebalance] rebalance operation started 2021-05-13 11:32:39 | INFO | MainProcess | test_thread | [rest_client._rebalance_status_and_progress] rebalance percentage : 0.00 % 2021-05-13 11:32:49 | INFO | MainProcess | test_thread | [rest_client._rebalance_status_and_progress] rebalance percentage : 75.00 % 2021-05-13 11:33:09 | INFO | MainProcess | test_thread | [rest_client.monitorRebalance] rebalance progress took 30.04 seconds 2021-05-13 11:33:09 | INFO | MainProcess | test_thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.cleanup_cluster] removed all the nodes from cluster associated with ip:172.23.100.18 port:8091 ssh_username:root ? [('ns_1@172.23.100.17', 8091), ('ns_1@172.23.100.19', 8091)] 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.18:8091 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.18:8091 is running 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 172.23.100.19, nothing to delete 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.19:8091 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.19:8091 is running 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] Could not find any buckets for node 172.23.100.17, nothing to delete 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.100.17:8091 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.is_ns_server_running] -->is_ns_server_running? 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.100.17:8091 is running Cluster instance shutdown with force 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [fts_base.cleanup_cluster] Removing user 'cbadminbucket'... 
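cleanup_cluster reuses the same /controller/rebalance call as in the earlier sketch, this time with ejectedNodes populated (ns_1@172.23.100.17 and ns_1@172.23.100.19 above), and then verifies ns_server still answers on every node. The liveness probe can be as small as a GET /pools (illustrative helper, not the testrunner's rest_client.is_ns_server_running):

import requests

def is_ns_server_running(host, auth=("Administrator", "password")):
    # /pools responds with implementationVersion etc. once ns_server is up.
    try:
        return requests.get(f"http://{host}:8091/pools", auth=auth, timeout=5).ok
    except requests.RequestException:
        return False

for node in ("172.23.100.17", "172.23.100.18", "172.23.100.19"):
    print(node, is_ns_server_running(node))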
2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [ntonencryptionBase.disable_nton_cluster] Disable up node to node encryption - status = disable and clusterEncryptionLevel = control 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [rest_client.update_autofailover_settings] settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [ntonencryptionBase.change_cluster_encryption_cli] Changing encryption Level - clusterEncryptionLevel = control 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:33:19 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:33:20 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:20 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:20 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:33:20 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:33:20 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:33:20 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:21 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:21 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:33:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:33:21 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] 
SSH Connected to 172.23.100.17 as root 2021-05-13 11:33:21 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:22 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:33:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:33:22 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:33:22 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:23 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:33:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:33:23 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:33:23 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [ntonencryptionBase.change_cluster_encryption_cli] Output of setting-security command is ["ERROR: clusterEncryptionLevel - Can't set cluster encryption level when cluster encryption is disabled."] 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [ntonencryptionBase.change_cluster_encryption_cli] Error of setting-security command is [] 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [ntonencryptionBase.ntonencryption_cli] Changing node-to-node-encryption to disable 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.18 as root 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:24 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:25 | INFO | 
MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:33:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 2021-05-13 11:33:25 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.19 as root 2021-05-13 11:33:25 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:25 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:25 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:33:26 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 2021-05-13 11:33:26 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.17 as root 2021-05-13 11:33:26 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:26 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:26 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:33:27 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 2021-05-13 11:33:27 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.16 as root 2021-05-13 11:33:27 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:27 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:27 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable 2021-05-13 11:33:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 2021-05-13 11:33:28 | INFO | MainProcess | test_thread | [remote_util.ssh_connect_with_retries] SSH Connected to 172.23.100.20 as root 2021-05-13 11:33:28 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] os_distro: CentOS, os_version: centos 7, is_linux_distro: True 2021-05-13 11:33:28 | INFO | MainProcess | test_thread | [remote_util.extract_remote_info] extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 2021-05-13 11:33:28 | INFO | MainProcess | test_thread | [remote_util.execute_couchbase_cli] command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u 
Administrator -p password --disable 2021-05-13 11:33:29 | INFO | MainProcess | test_thread | [ntonencryptionBase.ntonencryption_cli] Output of node-to-node-encryption command is ['Turned off encryption for node: http://[::1]:8091', 'SUCCESS: Switched node-to-node encryption off'] 2021-05-13 11:33:29 | INFO | MainProcess | test_thread | [ntonencryptionBase.ntonencryption_cli] Error of node-to-node-encryption command is [] 2021-05-13 11:33:29 | INFO | MainProcess | test_thread | [fts_base.tearDown] ==== FTSbasetests cleanup is finished for test #1 update_index_during_failover === 2021-05-13 11:33:29 | INFO | MainProcess | test_thread | [fts_base.tearDown] closing all ssh connections 2021-05-13 11:33:29 | INFO | MainProcess | test_thread | [fts_base.tearDown] closing all memcached connections Cluster instance shutdown with force summary so far suite fts.moving_topology_fts.MovingTopFTS , pass 1 , fail 0 testrunner logs, diags and results are available under /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_1 ok ---------------------------------------------------------------------- Ran 1 test in 208.181s OK update_index_during_failover (fts.moving_topology_fts.MovingTopFTS) ... Logs will be stored at /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_2 ./testrunner -i /tmp/testexec.69218.ini -p get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,GROUP=P1,index_type=scorch,fts_quota=990,get-cbcollect-info=True -t fts.moving_topology_fts.MovingTopFTS.update_index_during_failover,items=100000,cluster=D:D+F:D+F,GROUP=P1,index_partitions=20,get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,index_type=scorch,fts_quota=990 Test Input params: {'items': '100000', 'cluster': 'D:D+F:D+F', 'GROUP': 'P1', 'index_partitions': '20', 'get-cbcollect-info': 'True', 'disable_HTP': 'True', 'get-logs': 'False', 'stop-on-failure': 'False', 'index_type': 'scorch', 'fts_quota': '990', 'ini': '/tmp/testexec.69218.ini', 'cluster_name': 'testexec.69218', 'spec': 'py-fts-movingtopology', 'conf_file': 'fts/py-fts-movingtopology.conf', 'num_nodes': 5, 'case_number': 2, 'logs_folder': '/data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_2'} [2021-05-13 11:33:29,143] - [rest_client:3265] INFO - SUCCESS: FTS RAM quota set to 990mb [2021-05-13 11:33:29,143] - [fts_base:3675] INFO - ==== FTSbasetests setup is started for test #2 update_index_during_failover ==== [2021-05-13 11:33:29,157] - [rest_client:3265] INFO - SUCCESS: FTS RAM quota set to 990mb [2021-05-13 11:33:29,157] - [fts_base:2265] INFO - removing nodes from cluster ... [2021-05-13 11:33:29,161] - [fts_base:2267] INFO - cleanup [ip:172.23.100.18 port:8091 ssh_username:root] [2021-05-13 11:33:29,166] - [bucket_helper:158] INFO - Could not find any buckets for node 172.23.100.18, nothing to delete [2021-05-13 11:33:29,170] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:33:29,182] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.18:8091 [2021-05-13 11:33:29,183] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:33:29,186] - [cluster_helper:86] INFO - ns_server @ 172.23.100.18:8091 is running [2021-05-13 11:33:29,186] - [fts_base:2290] INFO - Removing user 'cbadminbucket'... 
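The cleanup above also turns auto-failover off and then walks every node with couchbase-cli to drop the cluster encryption level and disable node-to-node encryption; the exact REST parameters and CLI commands appear in the preceding entries. A rough local sketch of the two calls, using requests for the REST half and subprocess for the CLI half (the testrunner itself runs the CLI over SSH on each node):

import subprocess
import requests

def disable_autofailover(host="172.23.100.18", auth=("Administrator", "password")):
    # Same parameters as the settings/autoFailover entries above.
    params = {
        "timeout": 120,
        "enabled": "false",
        "failoverOnDataDiskIssues[enabled]": "false",
        "maxCount": 1,
        "failoverServerGroup": "false",
    }
    requests.post(f"http://{host}:8091/settings/autoFailover",
                  data=params, auth=auth).raise_for_status()

def disable_n2n_encryption():
    # Same CLI call as in the log, executed locally on a node.
    cmd = ["/opt/couchbase/bin/couchbase-cli", "node-to-node-encryption",
           "-c", "http://localhost", "-u", "Administrator", "-p", "password",
           "--disable"]
    out = subprocess.run(cmd, capture_output=True, text=True)
    print(out.stdout.strip())  # e.g. "SUCCESS: Switched node-to-node encryption off"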
[2021-05-13 11:33:29,194] - [rest_client:1022] ERROR - DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2021-05-13 11:33:29,194] - [fts_base:2294] INFO - b'"User was not found."' [2021-05-13 11:33:29,194] - [fts_base:2208] INFO - Initializing Cluster ... [2021-05-13 11:33:30,148] - [task:152] INFO - server: ip:172.23.100.18 port:8091 ssh_username:root, nodes/self [2021-05-13 11:33:30,151] - [task:157] INFO - {'uptime': '385', 'memoryTotal': 4201684992, 'memoryFree': 3587207168, 'mcdMemoryReserved': 3205, 'mcdMemoryAllocated': 3205, 'status': 'healthy', 'hostname': '172.23.100.18:8091', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-5127-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 2147, 'moxi': 11211, 'memcached': 11210, 'id': 'ns_1@172.23.100.18', 'ip': '172.23.100.18', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 4007, 'curr_items': 0} [2021-05-13 11:33:30,154] - [rest_client:1147] INFO - pools/default params : memoryQuota=2147 [2021-05-13 11:33:30,159] - [rest_client:1045] INFO - --> in init_cluster...Administrator,password,8091 [2021-05-13 11:33:30,159] - [rest_client:1050] INFO - settings/web params on 172.23.100.18:8091:port=8091&username=Administrator&password=password [2021-05-13 11:33:30,208] - [rest_client:1052] INFO - --> status:True [2021-05-13 11:33:30,209] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:33:30,307] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:33:30,559] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:33:30,855] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:33:30,863] - [remote_util:3436] INFO - running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2021-05-13 11:33:30,920] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:33:30,920] - [remote_util:5231] INFO - ['ok'] [2021-05-13 11:33:30,922] - [rest_client:1750] INFO - /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). [2021-05-13 11:33:30,924] - [rest_client:1750] INFO - /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). 
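Setup for test #2 re-initializes the master node with the two REST calls logged above: the cluster memory quota on /pools/default and the admin credentials/port on /settings/web. A sketch of that pair of requests, assuming the node already accepts the Administrator credentials as it does on this re-used test VM:

import requests

def init_node(host="172.23.100.18", user="Administrator", password="password"):
    base, auth = f"http://{host}:8091", (user, password)
    # "pools/default params : memoryQuota=2147" -- cluster RAM quota in MB.
    requests.post(f"{base}/pools/default",
                  data={"memoryQuota": 2147}, auth=auth).raise_for_status()
    # "settings/web params ... port=8091&username=Administrator&password=password"
    requests.post(f"{base}/settings/web",
                  data={"port": 8091, "username": user, "password": password},
                  auth=auth).raise_for_status()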
[2021-05-13 11:33:30,929] - [rest_client:1182] INFO - settings/indexes params : storageMode=plasma [2021-05-13 11:33:30,942] - [fts_base:2225] INFO - 172.23.100.19 will be configured with services kv,fts [2021-05-13 11:33:30,942] - [fts_base:2225] INFO - 172.23.100.17 will be configured with services kv,fts [2021-05-13 11:33:31,943] - [task:769] INFO - adding node 172.23.100.19:8091 to cluster [2021-05-13 11:33:31,943] - [rest_client:1500] INFO - adding remote node @172.23.100.19:8091 to this cluster @172.23.100.18:8091 [2021-05-13 11:33:41,958] - [rest_client:1833] INFO - rebalance progress took 10.01 seconds [2021-05-13 11:33:41,958] - [rest_client:1834] INFO - sleep for 10 seconds after rebalance... [2021-05-13 11:33:59,518] - [task:769] INFO - adding node 172.23.100.17:8091 to cluster [2021-05-13 11:33:59,518] - [rest_client:1500] INFO - adding remote node @172.23.100.17:8091 to this cluster @172.23.100.18:8091 [2021-05-13 11:34:09,533] - [rest_client:1833] INFO - rebalance progress took 10.01 seconds [2021-05-13 11:34:09,534] - [rest_client:1834] INFO - sleep for 10 seconds after rebalance... [2021-05-13 11:34:27,091] - [rest_client:1727] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2021-05-13 11:34:27,117] - [rest_client:1732] INFO - rebalance operation started [2021-05-13 11:34:37,130] - [task:839] INFO - Rebalance - status: none, progress: 100.00% [2021-05-13 11:34:37,135] - [task:898] INFO - rebalancing was completed with progress: 100% in 10.018049001693726 sec [2021-05-13 11:34:37,142] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:34:37,241] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:34:37,501] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:34:37,794] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:34:37,801] - [remote_util:3436] INFO - running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2021-05-13 11:34:37,856] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:34:37,856] - [remote_util:5231] INFO - ['ok'] [2021-05-13 11:34:37,857] - [fts_base:3947] INFO - Enabled diag/eval for non-local hosts from 172.23.100.18 [2021-05-13 11:34:37,872] - [rest_client:1022] ERROR - DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2021-05-13 11:34:37,873] - [internal_user:36] INFO - Exception while deleting user. 
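The two FTS nodes are then joined with their service list before the rebalance ("172.23.100.19 will be configured with services kv,fts"). A sketch of a single add-node call, assuming ns_server's /controller/addNode accepts the hostname, credentials and services parameters as used here:

import requests

def add_node(master="172.23.100.18", new_node="172.23.100.19",
             services="kv,fts", auth=("Administrator", "password")):
    r = requests.post(f"http://{master}:8091/controller/addNode",
                      data={"hostname": new_node,
                            "user": auth[0], "password": auth[1],
                            "services": services},
                      auth=auth)
    r.raise_for_status()   # response carries the otpNode name of the new node
    return r.json()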
Exception is -b'"User was not found."' [2021-05-13 11:34:37,954] - [fts_base:3885] INFO - updating bleve_max_result_window of node : ip:172.23.100.17 port:8091 ssh_username:root [2021-05-13 11:34:37,958] - [rest_client:3407] INFO - {"bleveMaxResultWindow": "100000000"} [2021-05-13 11:34:37,964] - [rest_client:3414] INFO - Updated bleveMaxResultWindow [2021-05-13 11:34:37,964] - [fts_base:3885] INFO - updating bleve_max_result_window of node : ip:172.23.100.19 port:8091 ssh_username:root [2021-05-13 11:34:37,968] - [rest_client:3407] INFO - {"bleveMaxResultWindow": "100000000"} [2021-05-13 11:34:37,974] - [rest_client:3414] INFO - Updated bleveMaxResultWindow [2021-05-13 11:34:38,150] - [rest_client:2818] INFO - http://172.23.100.18:8091/pools/default/buckets with param: name=default&ramQuotaMB=897&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore [2021-05-13 11:34:38,179] - [rest_client:2843] INFO - 0.03 seconds to create bucket default [2021-05-13 11:34:38,179] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:34:38,805] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:38,882] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:38,955] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:39,099] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:39,174] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:39,247] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:39,325] - [task:380] WARNING - vbucket map not ready after try 0 [2021-05-13 11:34:39,325] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:34:39,445] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:39,573] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:39,667] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:39,809] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:39,889] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:39,978] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:40,063] - [task:380] WARNING - vbucket map not ready after try 1 [2021-05-13 11:34:40,064] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:34:40,177] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:40,254] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:40,343] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:40,456] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:40,542] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:40,623] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:40,690] - [task:380] WARNING - vbucket map not ready 
after try 2 [2021-05-13 11:34:40,690] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:34:40,820] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:40,898] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:40,997] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:41,130] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:41,230] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:41,310] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:41,381] - [task:380] WARNING - vbucket map not ready after try 3 [2021-05-13 11:34:41,382] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:34:41,511] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:41,587] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:41,665] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:41,797] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:41,881] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:41,970] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:42,039] - [task:380] WARNING - vbucket map not ready after try 4 [2021-05-13 11:34:42,039] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:34:42,174] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:42,251] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:42,326] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:42,458] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:42,538] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:42,630] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:34:42,697] - [task:380] WARNING - vbucket map not ready after try 5 [2021-05-13 11:34:42,699] - [fts_base:3689] INFO - ==== FTSbasetests setup is finished for test #2 update_index_during_failover ==== [2021-05-13 11:34:42,700] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:34:42,800] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:34:43,055] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:34:43,353] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:34:43,354] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:34:43,457] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:34:43,721] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:34:44,014] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, 
distribution_version: centos 7 [2021-05-13 11:34:44,016] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:34:44,117] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:34:44,386] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:34:44,679] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:34:44,681] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:34:44,780] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:34:45,041] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:34:45,336] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:34:45,338] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:34:45,437] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root [2021-05-13 11:34:45,694] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:34:45,985] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:34:57,505] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:34:57,558] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:34:57,608] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:35:19,252] - [fts_base:5164] INFO - Loading phase complete! [2021-05-13 11:35:19,269] - [fts_base:1217] INFO - Checking if index already exists ... [2021-05-13 11:35:19,284] - [rest_client:1022] ERROR - GET http://172.23.100.19:8094/api/index/default_index_1 body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: rest_auth: preparePerms, err: index not found b'{"error":"rest_auth: preparePerms, err: index not found","request":"","status":"fail"}\n' auth: Administrator:password [2021-05-13 11:35:19,286] - [rest_client:1022] ERROR - DELETE http://172.23.100.19:8094/api/index/default_index_1 body: headers: {'Content-Type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 400 reason: rest_auth: preparePerms, err: index not found b'{"error":"rest_auth: preparePerms, err: index not found","request":"","status":"fail"}\n' auth: Administrator:password [2021-05-13 11:35:19,286] - [fts_base:1226] INFO - Creating fulltext-index default_index_1 on 172.23.100.19 [2021-05-13 11:35:19,286] - [rest_client:3290] INFO - {"type": "fulltext-index", "name": "default_index_1", "uuid": "", "params": {"store": {"kvStoreName": "mossStore", "mossStoreOptions": {}, "indexType": "scorch"}}, "sourceType": "couchbase", "sourceName": "default", "sourceUUID": "", "planParams": {"numReplicas": 0, "maxPartitionsPerPIndex": 64, "indexPartitions": 20}, "sourceParams": {}} [2021-05-13 11:35:19,345] - [rest_client:3297] INFO - Index default_index_1 created [2021-05-13 11:35:19,345] - [fts_base:4609] INFO - Validating index distribution for default_index_1 ... [2021-05-13 11:35:19,363] - [fts_base:4438] INFO - sleep for 5 secs. 
No pindexes found, waiting for index to get created ... [2021-05-13 11:35:24,418] - [fts_base:4627] INFO - Validated: Number of PIndexes = 20 [2021-05-13 11:35:24,425] - [fts_base:4639] INFO - Validated: Every pIndex serves 52 partitions or lesser [2021-05-13 11:35:24,425] - [fts_base:4658] INFO - Validated: pIndexes are distributed across ['d864070edb501b2d5fc857c6efddffcb', '8e2cf4c17326dd6dca6ac0403a773fd9'] [2021-05-13 11:35:24,425] - [fts_base:4664] INFO - Expecting num of partitions in each node in range 460-564 [2021-05-13 11:35:24,425] - [fts_base:4679] INFO - Validated: Node d864070edb501b2d5fc857c6efddffcb houses 10 pindexes which serve 520 partitions [2021-05-13 11:35:24,425] - [fts_base:4679] INFO - Validated: Node 8e2cf4c17326dd6dca6ac0403a773fd9 houses 10 pindexes which serve 504 partitions [2021-05-13 11:35:24,425] - [fts_base:4438] INFO - sleep for 10 secs. ... [2021-05-13 11:35:34,435] - [moving_topology_fts:1493] INFO - Index building has begun... [2021-05-13 11:35:34,696] - [moving_topology_fts:1496] INFO - Index count for default_index_1: 66008 [2021-05-13 11:35:34,765] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:35:35,125] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 66008 [2021-05-13 11:35:35,126] - [fts_base:3469] INFO - Starting failover for nodes:[ip:172.23.100.17 port:8091 ssh_username:root] at C1 cluster 172.23.100.18 [2021-05-13 11:35:35,202] - [fts_base:1246] INFO - Updating fulltext-index default_index_1 on 172.23.100.19 [2021-05-13 11:35:35,202] - [rest_client:3304] INFO - { "type": "fulltext-index", "name": "default_index_1", "uuid": "64f53eb0745539c2", "params": { "store": { "kvStoreName": "mossStore", "mossStoreOptions": {}, "indexType": "scorch" } }, "sourceType": "couchbase", "sourceName": "default", "sourceUUID": "", "planParams": { "numReplicas": 0, "maxPartitionsPerPIndex": 64, "indexPartitions": 20 }, "sourceParams": {} } [2021-05-13 11:35:35,244] - [moving_topology_fts:1514] INFO - {'type': 'fulltext-index', 'name': 'default_index_1', 'uuid': '64f53eb0745539c2', 'sourceType': 'gocbcore', 'sourceName': 'default', 'sourceUUID': '1dcbee9f098930e1928efb4883e9bbad', 'planParams': {'maxPartitionsPerPIndex': 52, 'indexPartitions': 20}, 'params': {'doc_config': {'docid_prefix_delim': '', 'docid_regexp': '', 'mode': 'type_field', 'type_field': 'type'}, 'mapping': {'analysis': {}, 'default_analyzer': 'standard', 'default_datetime_parser': 'dateTimeOptional', 'default_field': '_all', 'default_mapping': {'dynamic': True, 'enabled': True}, 'default_type': '_default', 'docvalues_dynamic': True, 'index_dynamic': True, 'store_dynamic': False, 'type_field': '_type'}, 'store': {'indexType': 'scorch', 'mossStoreOptions': {}, 'segmentVersion': 15}}, 'sourceParams': {}} [2021-05-13 11:35:35,246] - [rest_client:3311] INFO - Index/alias default_index_1 updated [2021-05-13 11:35:35,464] - [task:4057] INFO - Failing over 172.23.100.17:8091 with graceful=False [2021-05-13 11:35:36,407] - [rest_client:1672] INFO - fail_over node ns_1@172.23.100.17 successful [2021-05-13 11:35:36,407] - [task:4037] INFO - 0 seconds sleep after failover, for nodes to go pending.... 
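Annotation: the failover logged above (task:4057, graceful=False) corresponds to a single ns_server REST call. A minimal sketch, assuming the standard /controller/failOver endpoint and the Administrator credentials from the test ini; the function name is mine:

import requests

def hard_failover(orchestrator, otp_node,
                  user="Administrator", password="password"):
    # POST /controller/failOver performs a hard (non-graceful) failover;
    # a graceful failover would go to /controller/startGracefulFailover instead.
    r = requests.post(f"http://{orchestrator}:8091/controller/failOver",
                      data={"otpNode": otp_node},
                      auth=(user, password))
    r.raise_for_status()

# e.g. hard_failover("172.23.100.18", "ns_1@172.23.100.17")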
[2021-05-13 11:35:36,417] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:35:36,417] - [fts_base:2634] INFO - Running query {"indexName": "default_index_1", "size": 10000000, "from": 0, "explain": false, "query": {"match": "emp", "field": "type"}, "fields": [], "ctl": {"consistency": {"level": "", "vectors": {}}, "timeout": 60000}} on node: 172.23.100.19: [2021-05-13 11:35:36,726] - [moving_topology_fts:1518] INFO - Hits: 9917 [2021-05-13 11:35:36,733] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:35:36,741] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:35:37,104] - [fts_base:4496] INFO - Docs in bucket = 66587, docs in FTS index 'default_index_1': 9917 [2021-05-13 11:35:43,119] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:35:43,128] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:35:43,477] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:35:49,493] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:35:49,502] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:35:49,847] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:35:55,862] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:35:55,871] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:35:56,212] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:02,227] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:02,235] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:02,574] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:08,589] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:08,598] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:08,944] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:14,961] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:14,970] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:15,319] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:21,334] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:21,343] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:21,689] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:27,706] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:27,715] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 
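Annotation: the query logged above (Hits: 9917) is sent to the FTS query endpoint on port 8094. A sketch of that request; the /api/index/&lt;name&gt;/query endpoint is standard FTS REST API, but the helper name and the use of "total_hits" from the response are assumptions on my part:

import requests

def run_fts_query(fts_node, index_name, query_body,
                  auth=("Administrator", "password")):
    # POST the query body shown in the log to the FTS node on port 8094
    url = f"http://{fts_node}:8094/api/index/{index_name}/query"
    r = requests.post(url, json=query_body, auth=auth)
    r.raise_for_status()
    return r.json().get("total_hits", 0)

query = {"size": 10000000, "from": 0, "explain": False,
         "query": {"match": "emp", "field": "type"},
         "ctl": {"consistency": {"level": "", "vectors": {}}, "timeout": 60000}}
# hits = run_fts_query("172.23.100.19", "default_index_1", query)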
[2021-05-13 11:36:28,068] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:34,083] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:34,092] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:34,433] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:40,449] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:40,457] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:40,811] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:46,829] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:46,839] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:47,184] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:53,200] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:53,210] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:53,559] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:36:59,575] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:36:59,584] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:36:59,935] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:05,952] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:05,961] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:06,317] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:12,332] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:12,342] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:12,685] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:18,701] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:18,711] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:19,062] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:25,079] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:25,088] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:25,428] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:31,444] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:31,453] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute 
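Annotation: the repeating "Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205" lines come from a poll loop that compares the bucket item count against the FTS index count every ~6 seconds, up to 20 tries. A simplified sketch of such a loop, assuming the bucket basicStats and /api/index/&lt;name&gt;/count endpoints; the real logic lives in fts_base.wait_for_indexing_complete:

import time
import requests

def wait_for_indexing_complete(kv_node, fts_node, index_name, bucket="default",
                               retries=20, delay=6,
                               auth=("Administrator", "password")):
    for attempt in range(retries):
        # item count from the bucket's basic stats
        bucket_stats = requests.get(
            f"http://{kv_node}:8091/pools/default/buckets/{bucket}",
            auth=auth).json()
        docs_in_bucket = bucket_stats["basicStats"]["itemCount"]
        # document count reported by the FTS index
        docs_in_index = requests.get(
            f"http://{fts_node}:8094/api/index/{index_name}/count",
            auth=auth).json().get("count", 0)
        print(f"Docs in bucket = {docs_in_bucket}, "
              f"docs in FTS index '{index_name}': {docs_in_index}")
        if docs_in_index >= docs_in_bucket:
            return
        time.sleep(delay)
    raise AssertionError(f"FTS index count not matching bucket count "
                         f"even after {retries} tries")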
[2021-05-13 11:37:31,788] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:37,805] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:37,814] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:38,160] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:44,177] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:44,187] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:44,532] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:50,549] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:50,559] - [rest_client:2363] INFO - http://172.23.100.18:8091/pools/default/buckets/default/stats?zoom=minute [2021-05-13 11:37:50,906] - [fts_base:4496] INFO - Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:50,913] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:37:50,917] - [fts_base:4546] INFO - FTS index count not matching bucket count even after 20 tries: Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205 [2021-05-13 11:37:50,918] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:37:51,016] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:37:51,268] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:51,556] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:37:51,558] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:37:51,693] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:37:51,959] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:52,251] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:37:52,253] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:37:52,353] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:37:52,611] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:52,893] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:37:52,894] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:37:52,996] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:37:53,255] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:53,551] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:37:53,553] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:37:53,651] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root [2021-05-13 
11:37:53,907] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:54,197] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:37:58,943] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:37:58,945] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:37:58,946] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:37:58,948] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:37:58,950] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:37:59,043] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root [2021-05-13 11:37:59,045] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:37:59,048] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:37:59,048] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:37:59,049] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:37:59,293] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:59,327] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:59,328] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:59,338] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:59,339] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:37:59,630] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 Collecting logs from 172.23.100.16 [2021-05-13 11:37:59,631] - [remote_util:3436] INFO - running command.raw on 172.23.100.16: /opt/couchbase/bin/cbcollect_info 172.23.100.16-20210513-1137-diag.zip [2021-05-13 11:37:59,634] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 Collecting logs from 172.23.100.18 [2021-05-13 11:37:59,634] - [remote_util:3436] INFO - running command.raw on 172.23.100.18: /opt/couchbase/bin/cbcollect_info 172.23.100.18-20210513-1137-diag.zip [2021-05-13 11:37:59,634] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 Collecting logs from 172.23.100.20 [2021-05-13 11:37:59,635] - [remote_util:3436] INFO - running command.raw on 172.23.100.20: /opt/couchbase/bin/cbcollect_info 172.23.100.20-20210513-1137-diag.zip [2021-05-13 11:37:59,639] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 Collecting logs from 172.23.100.17 [2021-05-13 11:37:59,639] - [remote_util:3436] INFO - running command.raw on 172.23.100.17: /opt/couchbase/bin/cbcollect_info 172.23.100.17-20210513-1137-diag.zip [2021-05-13 11:37:59,677] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 Collecting logs from 172.23.100.19 [2021-05-13 11:37:59,678] - [remote_util:3436] INFO - running command.raw on 172.23.100.19: /opt/couchbase/bin/cbcollect_info 172.23.100.19-20210513-1137-diag.zip 
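Annotation: after the mismatch is declared, cbcollect_info is run on every node and the resulting diag zips are pulled back to the workspace. A rough sketch of that collection step using paramiko; the real code is in remote_util, the credentials are those from the test ini, and the helper/file names here are illustrative:

import paramiko

def collect_diag(host, zip_name, user="root", password="couchbase"):
    ssh = paramiko.SSHClient()
    ssh.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    ssh.connect(host, username=user, password=password)
    # cbcollect_info writes a full diagnostics archive to the given path
    _, stdout, _ = ssh.exec_command(
        f"/opt/couchbase/bin/cbcollect_info /root/{zip_name}")
    stdout.channel.recv_exit_status()              # block until collection finishes
    sftp = ssh.open_sftp()
    sftp.get(f"/root/{zip_name}", zip_name)        # download the archive
    ssh.exec_command(f"rm -f /root/{zip_name}")    # clean up, as in the log
    sftp.close()
    ssh.close()

# e.g. collect_diag("172.23.100.16", "172.23.100.16-20210513-1137-diag.zip")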
[2021-05-13 11:40:08,678] - [remote_util:3485] INFO - command executed successfully with root Downloading zipped logs from 172.23.100.17 [2021-05-13 11:40:08,916] - [remote_util:3436] INFO - running command.raw on 172.23.100.17: rm -f /root/172.23.100.17-20210513-1137-diag.zip [2021-05-13 11:40:08,931] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:40:20,153] - [remote_util:3485] INFO - command executed successfully with root Downloading zipped logs from 172.23.100.20 [2021-05-13 11:40:20,330] - [remote_util:3436] INFO - running command.raw on 172.23.100.20: rm -f /root/172.23.100.20-20210513-1137-diag.zip [2021-05-13 11:40:20,344] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:40:30,076] - [remote_util:3485] INFO - command executed successfully with root Downloading zipped logs from 172.23.100.16 [2021-05-13 11:40:30,331] - [remote_util:3436] INFO - running command.raw on 172.23.100.16: rm -f /root/172.23.100.16-20210513-1137-diag.zip [2021-05-13 11:40:30,347] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:40:43,297] - [remote_util:3485] INFO - command executed successfully with root Downloading zipped logs from 172.23.100.18 [2021-05-13 11:40:43,587] - [remote_util:3436] INFO - running command.raw on 172.23.100.18: rm -f /root/172.23.100.18-20210513-1137-diag.zip [2021-05-13 11:40:43,603] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:40:50,201] - [remote_util:3485] INFO - command executed successfully with root Downloading zipped logs from 172.23.100.19 [2021-05-13 11:40:50,510] - [remote_util:3436] INFO - running command.raw on 172.23.100.19: rm -f /root/172.23.100.19-20210513-1137-diag.zip [2021-05-13 11:40:50,527] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:40:50,528] - [fts_base:3841] INFO - ==== FTSbasetests cleanup is started for test #2 update_index_during_failover ==== [2021-05-13 11:40:50,538] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:40:50,543] - [fts_base:1312] INFO - Deleting fulltext-index default_index_1 on 172.23.100.19 [2021-05-13 11:40:50,567] - [rest_client:2449] INFO - Node 172.23.100.17 not part of cluster inactiveFailed [2021-05-13 11:40:50,579] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:40:50,728] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:40:51,012] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:40:51,304] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:40:51,304] - [remote_util:3436] INFO - running command.raw on 172.23.100.19: ls /opt/couchbase/var/lib/couchbase/data/@fts |grep ^default_index_1 | wc -l [2021-05-13 11:40:51,319] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:40:51,319] - [fts_base:2170] INFO - 0 [2021-05-13 11:40:53,321] - [fts_base:1323] INFO - Validated: all index files for default_index_1 deleted from disk [2021-05-13 11:40:53,321] - [fts_base:2265] INFO - removing nodes from cluster ... 
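Annotation: index cleanup in the teardown below is a REST DELETE against the FTS node, followed by a check that no index files remain under @fts on disk (the ls | grep | wc -l pipeline in the log). A minimal sketch, assuming the DELETE /api/index/&lt;name&gt; endpoint; running the disk check via plain ssh/subprocess is my simplification of the test's remote_util call:

import requests
import subprocess

def delete_fts_index(fts_node, name, auth=("Administrator", "password")):
    # DELETE /api/index/<name> removes the index definition and its pindexes
    r = requests.delete(f"http://{fts_node}:8094/api/index/{name}", auth=auth)
    r.raise_for_status()

def index_files_on_disk(host, name):
    # same shell pipeline the test runs over SSH
    cmd = (f"ls /opt/couchbase/var/lib/couchbase/data/@fts "
           f"| grep ^{name} | wc -l")
    out = subprocess.run(["ssh", f"root@{host}", cmd],
                         capture_output=True, text=True)
    return int(out.stdout.strip() or 0)

# delete_fts_index("172.23.100.19", "default_index_1")
# assert index_files_on_disk("172.23.100.19", "default_index_1") == 0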
[2021-05-13 11:40:53,327] - [fts_base:2267] INFO - cleanup [ip:172.23.100.18 port:8091 ssh_username:root, ip:172.23.100.19 port:8091 ssh_username:root, ip:172.23.100.17 port:8091 ssh_username:root] [2021-05-13 11:40:53,341] - [bucket_helper:133] INFO - deleting existing buckets ['default'] on 172.23.100.18 [2021-05-13 11:40:54,030] - [bucket_helper:224] INFO - waiting for bucket deletion to complete.... [2021-05-13 11:40:54,033] - [rest_client:137] INFO - node 172.23.100.18 existing buckets : [] [2021-05-13 11:40:54,033] - [bucket_helper:156] INFO - deleted bucket : default from 172.23.100.18 [2021-05-13 11:40:54,037] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:40:54,047] - [cluster_helper:262] INFO - rebalancing all nodes in order to remove nodes [2021-05-13 11:40:54,049] - [rest_client:1727] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.19', 'user': 'Administrator', 'password': 'password'} [2021-05-13 11:40:54,077] - [rest_client:1732] INFO - rebalance operation started [2021-05-13 11:40:54,079] - [rest_client:1894] INFO - rebalance percentage : 0.00 % [2021-05-13 11:41:04,092] - [rest_client:1894] INFO - rebalance percentage : 75.00 % [2021-05-13 11:41:24,117] - [rest_client:1833] INFO - rebalance progress took 30.04 seconds [2021-05-13 11:41:24,117] - [rest_client:1834] INFO - sleep for 10 seconds after rebalance... [2021-05-13 11:41:34,139] - [cluster_helper:325] INFO - removed all the nodes from cluster associated with ip:172.23.100.18 port:8091 ssh_username:root ? [('ns_1@172.23.100.17', 8091), ('ns_1@172.23.100.19', 8091)] [2021-05-13 11:41:34,143] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.18:8091 [2021-05-13 11:41:34,143] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:34,146] - [cluster_helper:86] INFO - ns_server @ 172.23.100.18:8091 is running [2021-05-13 11:41:34,150] - [bucket_helper:158] INFO - Could not find any buckets for node 172.23.100.19, nothing to delete [2021-05-13 11:41:34,153] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:34,163] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.19:8091 [2021-05-13 11:41:34,163] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:34,166] - [cluster_helper:86] INFO - ns_server @ 172.23.100.19:8091 is running [2021-05-13 11:41:34,171] - [bucket_helper:158] INFO - Could not find any buckets for node 172.23.100.17, nothing to delete [2021-05-13 11:41:34,174] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:34,183] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.17:8091 [2021-05-13 11:41:34,183] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:34,185] - [cluster_helper:86] INFO - ns_server @ 172.23.100.17:8091 is running Cluster instance shutdown with force [2021-05-13 11:41:34,185] - [fts_base:2290] INFO - Removing user 'cbadminbucket'... 
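Annotation: the "rebalancing all nodes in order to remove nodes" step posts the knownNodes/ejectedNodes pair shown above to /controller/rebalance and then polls the progress endpoint. A sketch under those assumptions (helper name and 10-second poll interval are mine):

import time
import requests

def rebalance_out(orchestrator, known, ejected,
                  auth=("Administrator", "password")):
    base = f"http://{orchestrator}:8091"
    r = requests.post(f"{base}/controller/rebalance",
                      data={"knownNodes": ",".join(known),
                            "ejectedNodes": ",".join(ejected)},
                      auth=auth)
    r.raise_for_status()
    # poll until ns_server reports the rebalance as finished
    while True:
        status = requests.get(f"{base}/pools/default/rebalanceProgress",
                              auth=auth).json()
        if status.get("status") == "none":
            return
        time.sleep(10)

# rebalance_out("172.23.100.18",
#               known=["ns_1@172.23.100.17", "ns_1@172.23.100.18", "ns_1@172.23.100.19"],
#               ejected=["ns_1@172.23.100.17", "ns_1@172.23.100.19"])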
[2021-05-13 11:41:34,197] - [ntonencryptionBase:112] INFO - Disable up node to node encryption - status = disable and clusterEncryptionLevel = control [2021-05-13 11:41:34,202] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:41:34,207] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:41:34,213] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:41:34,219] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:41:34,224] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:41:34,226] - [ntonencryptionBase:73] INFO - Changing encryption Level - clusterEncryptionLevel = control [2021-05-13 11:41:34,227] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:41:34,328] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:41:34,586] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:34,877] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:34,930] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:41:35,136] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:41:35,235] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:41:35,494] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:35,782] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:35,835] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:41:36,046] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:41:36,147] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:41:36,410] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:36,700] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:36,754] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:41:36,960] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:41:37,061] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:41:37,321] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 
7, is_linux_distro: True [2021-05-13 11:41:37,612] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:37,665] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:41:37,872] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:41:37,973] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root [2021-05-13 11:41:38,230] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:38,523] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:38,576] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:41:38,778] - [ntonencryptionBase:95] INFO - Output of setting-security command is ["ERROR: clusterEncryptionLevel - Can't set cluster encryption level when cluster encryption is disabled."] [2021-05-13 11:41:38,778] - [ntonencryptionBase:96] INFO - Error of setting-security command is [] [2021-05-13 11:41:38,778] - [ntonencryptionBase:37] INFO - Changing node-to-node-encryption to disable [2021-05-13 11:41:38,779] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:41:38,877] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:41:39,130] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:39,425] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:39,520] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable [2021-05-13 11:41:39,755] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:41:39,853] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:41:40,112] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:40,404] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:40,498] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable [2021-05-13 11:41:40,728] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:41:40,826] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:41:41,078] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:41,365] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:41,417] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable [2021-05-13 11:41:41,647] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:41:41,746] - 
[remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:41:41,998] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:42,289] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:42,341] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable [2021-05-13 11:41:42,567] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:41:42,665] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root [2021-05-13 11:41:42,914] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:43,200] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:43,250] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable [2021-05-13 11:41:43,485] - [ntonencryptionBase:58] INFO - Output of node-to-node-encryption command is ['Turned off encryption for node: http://[::1]:8091', 'SUCCESS: Switched node-to-node encryption off'] [2021-05-13 11:41:43,485] - [ntonencryptionBase:59] INFO - Error of node-to-node-encryption command is [] [2021-05-13 11:41:43,485] - [fts_base:3849] INFO - ==== FTSbasetests cleanup is finished for test #2 update_index_during_failover === [2021-05-13 11:41:43,485] - [fts_base:3851] INFO - closing all ssh connections [2021-05-13 11:41:43,486] - [fts_base:3855] INFO - closing all memcached connections
downloading 172.23.100.18 ....................................................downloading 172.23.100.19 .......................................downloading 172.23.100.17 .......................................downloading 172.23.100.16 ........................................downloading 172.23.100.20 ........................................FAIL
======================================================================
FAIL: update_index_during_failover (fts.moving_topology_fts.MovingTopFTS)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/fts/fts_base.py", line 4502, in wait_for_indexing_complete
    self.fail(f"FTS index count not matching bucket count even after {retry} tries: "
  File "/usr/local/lib/python3.7/unittest/case.py", line 693, in fail
    raise self.failureException(msg)
AssertionError: FTS index count not matching bucket count even after 20 tries: Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205

During handling of the above exception, another exception occurred:

Traceback (most recent call last):
  File "pytests/fts/moving_topology_fts.py", line 1521, in update_index_during_failover
    self.wait_for_indexing_complete()
  File "pytests/fts/fts_base.py", line 4548, in wait_for_indexing_complete
    self.fail(e)
AssertionError: FTS index count not matching bucket count even after 20 tries: Docs in bucket = 100000, docs in FTS index 'default_index_1': 49205

----------------------------------------------------------------------
Ran 1 test in 494.358s

FAILED (failures=1)
suite_tearDown (fts.moving_topology_fts.MovingTopFTS) ... Cluster instance shutdown with force
summary so far suite fts.moving_topology_fts.MovingTopFTS , pass 1 , fail 1
failures so far...
fts.moving_topology_fts.MovingTopFTS.update_index_during_failover testrunner logs, diags and results are available under /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-27-42/test_2 *** Tests executed count: 2 Run after suite setup for fts.moving_topology_fts.MovingTopFTS.update_index_during_failover [2021-05-13 11:41:47,871] - [rest_client:3265] INFO - SUCCESS: FTS RAM quota set to 990mb [2021-05-13 11:41:47,871] - [fts_base:3675] INFO - ==== FTSbasetests setup is started for test #2 suite_tearDown ==== [2021-05-13 11:41:47,884] - [rest_client:3265] INFO - SUCCESS: FTS RAM quota set to 990mb [2021-05-13 11:41:47,884] - [fts_base:2265] INFO - removing nodes from cluster ... [2021-05-13 11:41:47,888] - [fts_base:2267] INFO - cleanup [ip:172.23.100.18 port:8091 ssh_username:root] [2021-05-13 11:41:47,893] - [bucket_helper:158] INFO - Could not find any buckets for node 172.23.100.18, nothing to delete [2021-05-13 11:41:47,896] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:47,906] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.18:8091 [2021-05-13 11:41:47,906] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:41:47,909] - [cluster_helper:86] INFO - ns_server @ 172.23.100.18:8091 is running [2021-05-13 11:41:47,909] - [fts_base:2290] INFO - Removing user 'cbadminbucket'... [2021-05-13 11:41:47,917] - [rest_client:1022] ERROR - DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2021-05-13 11:41:47,917] - [fts_base:2294] INFO - b'"User was not found."' [2021-05-13 11:41:47,918] - [fts_base:2208] INFO - Initializing Cluster ... 
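Annotation: re-initializing the cluster for suite_tearDown starts with setting the RAM quotas and admin credentials on the first node. A sketch of those two REST calls; memoryQuota=2147 and the settings/web parameters appear verbatim in the log that follows, while the ftsMemoryQuota parameter for the 990 MB FTS quota and the helper name are my assumptions:

import requests

def init_node(host, user="Administrator", password="password"):
    base = f"http://{host}:8091"
    # overall KV memory quota and the FTS service quota, both in MB
    requests.post(f"{base}/pools/default",
                  data={"memoryQuota": 2147, "ftsMemoryQuota": 990},
                  auth=(user, password)).raise_for_status()
    # set the REST port and administrator credentials
    requests.post(f"{base}/settings/web",
                  data={"port": 8091, "username": user, "password": password},
                  auth=(user, password)).raise_for_status()

# init_node("172.23.100.18")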
[2021-05-13 11:41:48,875] - [task:152] INFO - server: ip:172.23.100.18 port:8091 ssh_username:root, nodes/self [2021-05-13 11:41:48,878] - [task:157] INFO - {'uptime': '880', 'memoryTotal': 4201684992, 'memoryFree': 3560169472, 'mcdMemoryReserved': 3205, 'mcdMemoryAllocated': 3205, 'status': 'healthy', 'hostname': '172.23.100.18:8091', 'clusterCompatibility': 458752, 'clusterMembership': 'active', 'recoveryType': 'none', 'version': '7.0.0-5127-enterprise', 'os': 'x86_64-unknown-linux-gnu', 'ports': [], 'availableStorage': [], 'storage': [], 'memoryQuota': 2147, 'moxi': 11211, 'memcached': 11210, 'id': 'ns_1@172.23.100.18', 'ip': '172.23.100.18', 'rest_username': '', 'rest_password': '', 'port': '8091', 'services': ['kv'], 'storageTotalRam': 4007, 'curr_items': 0} [2021-05-13 11:41:48,881] - [rest_client:1147] INFO - pools/default params : memoryQuota=2147 [2021-05-13 11:41:48,884] - [rest_client:1045] INFO - --> in init_cluster...Administrator,password,8091 [2021-05-13 11:41:48,885] - [rest_client:1050] INFO - settings/web params on 172.23.100.18:8091:port=8091&username=Administrator&password=password [2021-05-13 11:41:48,933] - [rest_client:1052] INFO - --> status:True [2021-05-13 11:41:48,934] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:41:49,033] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:41:49,285] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:41:49,578] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:41:49,586] - [remote_util:3436] INFO - running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2021-05-13 11:41:49,642] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:41:49,642] - [remote_util:5231] INFO - ['ok'] [2021-05-13 11:41:49,644] - [rest_client:1750] INFO - /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). [2021-05-13 11:41:49,646] - [rest_client:1750] INFO - /diag/eval status on 172.23.100.18:8091: True content: [7,0] command: cluster_compat_mode:get_compat_version(). [2021-05-13 11:41:49,652] - [rest_client:1182] INFO - settings/indexes params : storageMode=plasma [2021-05-13 11:41:49,663] - [fts_base:2225] INFO - 172.23.100.19 will be configured with services kv,fts [2021-05-13 11:41:49,663] - [fts_base:2225] INFO - 172.23.100.17 will be configured with services kv,fts [2021-05-13 11:41:50,664] - [task:769] INFO - adding node 172.23.100.19:8091 to cluster [2021-05-13 11:41:50,665] - [rest_client:1500] INFO - adding remote node @172.23.100.19:8091 to this cluster @172.23.100.18:8091 [2021-05-13 11:42:00,679] - [rest_client:1833] INFO - rebalance progress took 10.01 seconds [2021-05-13 11:42:00,680] - [rest_client:1834] INFO - sleep for 10 seconds after rebalance... [2021-05-13 11:42:18,104] - [task:769] INFO - adding node 172.23.100.17:8091 to cluster [2021-05-13 11:42:18,104] - [rest_client:1500] INFO - adding remote node @172.23.100.17:8091 to this cluster @172.23.100.18:8091 [2021-05-13 11:42:28,119] - [rest_client:1833] INFO - rebalance progress took 10.01 seconds [2021-05-13 11:42:28,119] - [rest_client:1834] INFO - sleep for 10 seconds after rebalance... 
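Annotation: nodes .19 and .17 are then added with kv,fts services before the final rebalance. A sketch of the add-node call, assuming the standard /controller/addNode endpoint and the services string logged above (fts_base:2225); the helper name is mine:

import requests

def add_node(orchestrator, new_node, services="kv,fts",
             user="Administrator", password="password"):
    # ns_server joins the new node and assigns the requested services
    r = requests.post(f"http://{orchestrator}:8091/controller/addNode",
                      data={"hostname": new_node, "user": user,
                            "password": password, "services": services},
                      auth=(user, password))
    r.raise_for_status()
    return r.json()   # the new node's otpNode id

# add_node("172.23.100.18", "172.23.100.19:8091")
# add_node("172.23.100.18", "172.23.100.17:8091")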
[2021-05-13 11:42:45,597] - [rest_client:1727] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': '', 'user': 'Administrator', 'password': 'password'} [2021-05-13 11:42:45,623] - [rest_client:1732] INFO - rebalance operation started [2021-05-13 11:42:55,636] - [task:839] INFO - Rebalance - status: none, progress: 100.00% [2021-05-13 11:42:55,642] - [task:898] INFO - rebalancing was completed with progress: 100% in 10.01854681968689 sec [2021-05-13 11:42:55,649] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:42:55,747] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:42:56,005] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:42:56,299] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:42:56,306] - [remote_util:3436] INFO - running command.raw on 172.23.100.18: curl --silent --show-error http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' [2021-05-13 11:42:56,362] - [remote_util:3485] INFO - command executed successfully with root [2021-05-13 11:42:56,362] - [remote_util:5231] INFO - ['ok'] [2021-05-13 11:42:56,362] - [fts_base:3947] INFO - Enabled diag/eval for non-local hosts from 172.23.100.18 [2021-05-13 11:42:56,375] - [rest_client:1022] ERROR - DELETE http://172.23.100.18:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==', 'Accept': '*/*'} error: 404 reason: unknown b'"User was not found."' auth: Administrator:password [2021-05-13 11:42:56,375] - [internal_user:36] INFO - Exception while deleting user. 
Exception is -b'"User was not found."' [2021-05-13 11:42:56,454] - [fts_base:3885] INFO - updating bleve_max_result_window of node : ip:172.23.100.17 port:8091 ssh_username:root [2021-05-13 11:42:56,458] - [rest_client:3407] INFO - {"bleveMaxResultWindow": "100000000"} [2021-05-13 11:42:56,463] - [rest_client:3414] INFO - Updated bleveMaxResultWindow [2021-05-13 11:42:56,463] - [fts_base:3885] INFO - updating bleve_max_result_window of node : ip:172.23.100.19 port:8091 ssh_username:root [2021-05-13 11:42:56,468] - [rest_client:3407] INFO - {"bleveMaxResultWindow": "100000000"} [2021-05-13 11:42:56,474] - [rest_client:3414] INFO - Updated bleveMaxResultWindow [2021-05-13 11:42:56,658] - [rest_client:2818] INFO - http://172.23.100.18:8091/pools/default/buckets with param: name=default&ramQuotaMB=897&replicaNumber=1&bucketType=membase&replicaIndex=1&threadsNumber=3&flushEnabled=1&evictionPolicy=valueOnly&compressionMode=passive&storageBackend=couchstore [2021-05-13 11:42:56,686] - [rest_client:2843] INFO - 0.03 seconds to create bucket default [2021-05-13 11:42:56,686] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:42:57,318] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:57,394] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:42:57,468] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:42:57,736] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:57,818] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:42:57,892] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:42:57,971] - [task:380] WARNING - vbucket map not ready after try 0 [2021-05-13 11:42:57,971] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:42:58,113] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:58,354] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:42:58,439] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:42:58,570] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:58,646] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:42:58,719] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:42:58,927] - [task:380] WARNING - vbucket map not ready after try 1 [2021-05-13 11:42:58,927] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:42:59,038] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:59,112] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:42:59,186] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:42:59,440] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:59,518] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:42:59,598] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:42:59,669] - [task:380] WARNING - vbucket map not ready 
after try 2 [2021-05-13 11:42:59,669] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:42:59,780] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:42:59,853] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:43:00,063] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:43:00,177] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:43:00,249] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:43:00,324] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:43:00,396] - [task:380] WARNING - vbucket map not ready after try 3 [2021-05-13 11:43:00,396] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:43:00,660] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:43:00,739] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:43:00,817] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:43:00,940] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:43:01,162] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:43:01,238] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:43:01,309] - [task:380] WARNING - vbucket map not ready after try 4 [2021-05-13 11:43:01,309] - [bucket_helper:335] INFO - waiting for memcached bucket : default in 172.23.100.18 to accept set ops [2021-05-13 11:43:01,426] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:43:01,499] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:43:01,732] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:43:01,850] - [data_helper:314] INFO - creating direct client 172.23.100.17:11210 default [2021-05-13 11:43:01,929] - [data_helper:314] INFO - creating direct client 172.23.100.18:11210 default [2021-05-13 11:43:02,004] - [data_helper:314] INFO - creating direct client 172.23.100.19:11210 default [2021-05-13 11:43:02,077] - [task:380] WARNING - vbucket map not ready after try 5 [2021-05-13 11:43:02,079] - [fts_base:3689] INFO - ==== FTSbasetests setup is finished for test #2 suite_tearDown ==== [2021-05-13 11:43:02,081] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:43:02,181] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:43:02,438] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:02,730] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:02,732] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:43:02,834] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:43:03,096] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:03,390] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, 
distribution_version: centos 7 [2021-05-13 11:43:03,391] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:43:03,490] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:43:03,750] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:04,040] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:04,042] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:43:04,140] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:43:04,402] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:04,698] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:04,700] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:43:04,804] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root [2021-05-13 11:43:05,056] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:05,345] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:09,737] - [moving_topology_fts:30] INFO - *** MovingTopFTS: suite_tearDown() *** [2021-05-13 11:43:09,738] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:43:09,838] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:43:10,091] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:10,388] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:10,389] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:43:10,486] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:43:10,742] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:11,031] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:11,032] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:43:11,133] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:43:11,393] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:11,682] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:11,684] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:43:11,782] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:43:12,036] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:12,329] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:12,331] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5 [2021-05-13 11:43:12,431] - [remote_util:335] INFO - SSH 
Connected to 172.23.100.20 as root [2021-05-13 11:43:12,688] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:12,981] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:17,381] - [fts_base:3841] INFO - ==== FTSbasetests cleanup is started for test #2 suite_tearDown ==== [2021-05-13 11:43:17,381] - [fts_base:2265] INFO - removing nodes from cluster ... [2021-05-13 11:43:17,387] - [fts_base:2267] INFO - cleanup [ip:172.23.100.18 port:8091 ssh_username:root, ip:172.23.100.19 port:8091 ssh_username:root, ip:172.23.100.17 port:8091 ssh_username:root] [2021-05-13 11:43:17,403] - [bucket_helper:133] INFO - deleting existing buckets ['default'] on 172.23.100.18 [2021-05-13 11:43:18,089] - [bucket_helper:224] INFO - waiting for bucket deletion to complete.... [2021-05-13 11:43:18,092] - [rest_client:137] INFO - node 172.23.100.18 existing buckets : [] [2021-05-13 11:43:18,092] - [bucket_helper:156] INFO - deleted bucket : default from 172.23.100.18 [2021-05-13 11:43:18,096] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:43:18,105] - [cluster_helper:262] INFO - rebalancing all nodes in order to remove nodes [2021-05-13 11:43:18,107] - [rest_client:1727] INFO - rebalance params : {'knownNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.18,ns_1@172.23.100.19', 'ejectedNodes': 'ns_1@172.23.100.17,ns_1@172.23.100.19', 'user': 'Administrator', 'password': 'password'} [2021-05-13 11:43:18,135] - [rest_client:1732] INFO - rebalance operation started [2021-05-13 11:43:18,136] - [rest_client:1894] INFO - rebalance percentage : 0.00 % [2021-05-13 11:43:28,145] - [rest_client:1894] INFO - rebalance percentage : 66.00 % [2021-05-13 11:43:48,170] - [rest_client:1833] INFO - rebalance progress took 30.04 seconds [2021-05-13 11:43:48,170] - [rest_client:1834] INFO - sleep for 10 seconds after rebalance... [2021-05-13 11:43:58,192] - [cluster_helper:325] INFO - removed all the nodes from cluster associated with ip:172.23.100.18 port:8091 ssh_username:root ? [('ns_1@172.23.100.17', 8091), ('ns_1@172.23.100.19', 8091)] [2021-05-13 11:43:58,196] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.18:8091 [2021-05-13 11:43:58,196] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:43:58,199] - [cluster_helper:86] INFO - ns_server @ 172.23.100.18:8091 is running [2021-05-13 11:43:58,204] - [bucket_helper:158] INFO - Could not find any buckets for node 172.23.100.19, nothing to delete [2021-05-13 11:43:58,207] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:43:58,216] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.19:8091 [2021-05-13 11:43:58,216] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:43:58,219] - [cluster_helper:86] INFO - ns_server @ 172.23.100.19:8091 is running [2021-05-13 11:43:58,223] - [bucket_helper:158] INFO - Could not find any buckets for node 172.23.100.17, nothing to delete [2021-05-13 11:43:58,226] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:43:58,236] - [cluster_helper:82] INFO - waiting for ns_server @ 172.23.100.17:8091 [2021-05-13 11:43:58,236] - [rest_client:41] INFO - -->is_ns_server_running? [2021-05-13 11:43:58,239] - [cluster_helper:86] INFO - ns_server @ 172.23.100.17:8091 is running Cluster instance shutdown with force [2021-05-13 11:43:58,239] - [fts_base:2290] INFO - Removing user 'cbadminbucket'... 
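Annotation: the teardown lines that follow reset auto-failover on every node with the settings/autoFailover parameters shown, then turn node-to-node encryption off via couchbase-cli. A sketch of the REST half of that, assuming the standard /settings/autoFailover endpoint; the parameter values mirror the logged request:

import requests

def disable_auto_failover(host, auth=("Administrator", "password")):
    # mirrors the logged params: enabled=false, timeout=120, maxCount=1,
    # failoverOnDataDiskIssues and failoverServerGroup disabled
    params = {"enabled": "false", "timeout": 120, "maxCount": 1,
              "failoverOnDataDiskIssues[enabled]": "false",
              "failoverServerGroup": "false"}
    r = requests.post(f"http://{host}:8091/settings/autoFailover",
                      data=params, auth=auth)
    r.raise_for_status()

# for ip in ("172.23.100.16", "172.23.100.17", "172.23.100.18",
#            "172.23.100.19", "172.23.100.20"):
#     disable_auto_failover(ip)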
[2021-05-13 11:43:58,249] - [ntonencryptionBase:112] INFO - Disable up node to node encryption - status = disable and clusterEncryptionLevel = control [2021-05-13 11:43:58,254] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:43:58,259] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:43:58,264] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:43:58,270] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:43:58,276] - [rest_client:2933] INFO - settings/autoFailover params : timeout=120&enabled=false&failoverOnDataDiskIssues%5Benabled%5D=false&maxCount=1&failoverServerGroup=false [2021-05-13 11:43:58,278] - [ntonencryptionBase:73] INFO - Changing encryption Level - clusterEncryptionLevel = control [2021-05-13 11:43:58,279] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5 [2021-05-13 11:43:58,379] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root [2021-05-13 11:43:58,778] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:43:59,074] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:43:59,170] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:43:59,379] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5 [2021-05-13 11:43:59,478] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root [2021-05-13 11:43:59,740] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:44:00,029] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:44:00,083] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:44:00,294] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5 [2021-05-13 11:44:00,393] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root [2021-05-13 11:44:00,654] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True [2021-05-13 11:44:00,940] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7 [2021-05-13 11:44:00,991] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control [2021-05-13 11:44:01,192] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5 [2021-05-13 11:44:01,290] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root [2021-05-13 11:44:01,546] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 
[2021-05-13 11:43:58,278] - [ntonencryptionBase:73] INFO - Changing encryption Level - clusterEncryptionLevel = control
[2021-05-13 11:43:58,279] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5
[2021-05-13 11:43:58,379] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root
[2021-05-13 11:43:58,778] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:43:59,074] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:43:59,170] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control
[2021-05-13 11:43:59,379] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5
[2021-05-13 11:43:59,478] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root
[2021-05-13 11:43:59,740] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:00,029] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:00,083] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control
[2021-05-13 11:44:00,294] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5
[2021-05-13 11:44:00,393] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root
[2021-05-13 11:44:00,654] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:00,940] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:00,991] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control
[2021-05-13 11:44:01,192] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5
[2021-05-13 11:44:01,290] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root
[2021-05-13 11:44:01,546] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:01,836] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:01,928] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control
[2021-05-13 11:44:02,136] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5
[2021-05-13 11:44:02,233] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root
[2021-05-13 11:44:02,481] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:02,766] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:02,856] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli setting-security -c http://localhost -u Administrator -p password --set --cluster-encryption-level control
[2021-05-13 11:44:03,062] - [ntonencryptionBase:95] INFO - Output of setting-security command is ["ERROR: clusterEncryptionLevel - Can't set cluster encryption level when cluster encryption is disabled."]
[2021-05-13 11:44:03,062] - [ntonencryptionBase:96] INFO - Error of setting-security command is []
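(Each setting-security entry above is the same couchbase-cli command executed locally on a node after an SSH hop; the ERROR shows the encryption level cannot be changed while cluster encryption is already disabled, and the harness then moves on to the node-to-node-encryption --disable step. A rough sketch of the per-node loop; paramiko is an assumption here, the harness's own remote_util wrapper is not reproduced, and the SSH password is illustrative:)

    # Illustrative sketch of running the logged couchbase-cli command on each node over SSH.
    import paramiko

    NODES = ["172.23.100.18", "172.23.100.19", "172.23.100.17",
             "172.23.100.16", "172.23.100.20"]

    CMD = ("/opt/couchbase/bin/couchbase-cli setting-security -c http://localhost "
           "-u Administrator -p password --set --cluster-encryption-level control")

    def run_on_all_nodes(cmd, username="root", password="couchbase"):  # password is illustrative
        for ip in NODES:
            client = paramiko.SSHClient()
            client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
            client.connect(ip, username=username, password=password)
            _, stdout, stderr = client.exec_command(cmd)
            print(ip, stdout.read().decode().strip(), stderr.read().decode().strip())
            client.close()

    # run_on_all_nodes(CMD)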
[2021-05-13 11:44:03,062] - [ntonencryptionBase:37] INFO - Changing node-to-node-encryption to disable
[2021-05-13 11:44:03,063] - [remote_util:299] INFO - SSH Connecting to 172.23.100.18 with username:root, attempt#1 of 5
[2021-05-13 11:44:03,161] - [remote_util:335] INFO - SSH Connected to 172.23.100.18 as root
[2021-05-13 11:44:03,425] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:03,720] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:03,773] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable
[2021-05-13 11:44:04,009] - [remote_util:299] INFO - SSH Connecting to 172.23.100.19 with username:root, attempt#1 of 5
[2021-05-13 11:44:04,108] - [remote_util:335] INFO - SSH Connected to 172.23.100.19 as root
[2021-05-13 11:44:04,366] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:04,660] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:04,717] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable
[2021-05-13 11:44:04,955] - [remote_util:299] INFO - SSH Connecting to 172.23.100.17 with username:root, attempt#1 of 5
[2021-05-13 11:44:05,054] - [remote_util:335] INFO - SSH Connected to 172.23.100.17 as root
[2021-05-13 11:44:05,311] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:05,601] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:05,652] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable
[2021-05-13 11:44:05,886] - [remote_util:299] INFO - SSH Connecting to 172.23.100.16 with username:root, attempt#1 of 5
[2021-05-13 11:44:05,984] - [remote_util:335] INFO - SSH Connected to 172.23.100.16 as root
[2021-05-13 11:44:06,236] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:06,526] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:06,578] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable
[2021-05-13 11:44:06,808] - [remote_util:299] INFO - SSH Connecting to 172.23.100.20 with username:root, attempt#1 of 5
[2021-05-13 11:44:06,905] - [remote_util:335] INFO - SSH Connected to 172.23.100.20 as root
[2021-05-13 11:44:07,151] - [remote_util:3598] INFO - os_distro: CentOS, os_version: centos 7, is_linux_distro: True
[2021-05-13 11:44:07,436] - [remote_util:3754] INFO - extract_remote_info-->distribution_type: CentOS, distribution_version: centos 7
[2021-05-13 11:44:07,485] - [remote_util:4706] INFO - command to run: /opt/couchbase/bin/couchbase-cli node-to-node-encryption -c http://localhost -u Administrator -p password --disable
[2021-05-13 11:44:07,709] - [ntonencryptionBase:58] INFO - Output of node-to-node-encryption command is ['Turned off encryption for node: http://[::1]:8091', 'SUCCESS: Switched node-to-node encryption off']
[2021-05-13 11:44:07,709] - [ntonencryptionBase:59] INFO - Error of node-to-node-encryption command is []
[2021-05-13 11:44:07,709] - [fts_base:3849] INFO - ==== FTSbasetests cleanup is finished for test #2 suite_tearDown ===
[2021-05-13 11:44:07,709] - [fts_base:3851] INFO - closing all ssh connections
[2021-05-13 11:44:07,710] - [fts_base:3855] INFO - closing all memcached connections
*** TestRunner ***
workspace is /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1
fails is 1 2
Desc1: 7.0.0-5127 - fts moving - centos (1/2)
python3 scripts/rerun_jobs.py 7.0.0-5127 --executor_jenkins_job --run_params=get-cbcollect-info=False,disable_HTP=True,get-logs=False,stop-on-failure=False,GROUP=P1,index_type=scorch,fts_quota=990,get-cbcollect-info=True
INFO:merge_reports:Merging of report files from ['Old_Report_mergedreport-21-May-11_22-22-50-fts.moving_topology_fts.MovingTopFTS.xml', 'logs/**/*.xml']
INFO:merge_reports:-- Old_Report_mergedreport-21-May-11_22-22-50-fts.moving_topology_fts.MovingTopFTS.xml --
INFO:merge_reports:-- logs/testrunner-21-May-13_11-27-42/report-21-May-13_11-27-42-fts.moving_topology_fts.MovingTopFTS.xml --
INFO:merge_reports: Number of TestSuites=1
INFO:merge_reports: TestSuite#1) fts.moving_topology_fts.MovingTopFTS, Number of Tests=41
INFO:merge_reports:Summary file is at /data/workspace/centos-p0-fts-vset00-00-moving-topology-scorch_5.5_P1/logs/testrunner-21-May-13_11-44-08/merged_summary/mergedreport-21-May-13_11-44-08-fts.moving_topology_fts.MovingTopFTS.xml
Merging xmls
summary so far suite fts.moving_topology_fts.MovingTopFTS , pass 40 , fail 1
failures so far...
fts.moving_topology_fts.MovingTopFTS.update_index_during_failover
merged xmls
No more failed tests. Stopping reruns
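(The pass/fail totals above come from merging the previous and current JUnit XML reports. A simplified, standard-library sketch of that tally; file patterns are illustrative and the real merge_reports script also writes the merged summary XML shown above:)

    # Simplified sketch of counting pass/fail across JUnit XML reports.
    import glob
    import xml.etree.ElementTree as ET

    def summarize(patterns):
        total, failed, failures = 0, 0, []
        for pattern in patterns:
            for path in glob.glob(pattern, recursive=True):
                root = ET.parse(path).getroot()
                # Works for both a <testsuites> wrapper and a bare <testsuite> root.
                for case in root.iter("testcase"):
                    total += 1
                    if case.find("failure") is not None or case.find("error") is not None:
                        failed += 1
                        failures.append(f"{case.get('classname')}.{case.get('name')}")
        return total, failed, failures

    # total, failed, failures = summarize(["logs/**/*.xml"])
    # print(f"pass {total - failed} , fail {failed}")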
[description-setter] Description set: 7.0.0-5127 - fts moving - centos (1/2)
[EnvInject] - Injecting environment variables from a build step.
[EnvInject] - Injecting as environment variables the properties file path 'propfile'
[EnvInject] - Variables injected successfully.
Archiving artifacts
Recording test results
Build step 'Publish JUnit test result report' changed build result to UNSTABLE
[BFA] Scanning build for known causes...
[BFA] No failure causes found
[BFA] Done. 0s
Notifying upstream projects of job completion
Email was triggered for: Unstable (Test Failures)
Sending email for trigger: Unstable (Test Failures)
#344337 is still in progress; ignoring for purposes of comparison
#344337 is still in progress; ignoring for purposes of comparison
Sending email to: girish.benakappa@couchbase.com
Triggering a new build of savejoblogs
Triggering a new build of test-executor-cleanup
Finished: UNSTABLE