Started by user Balakumaran G
[EnvInject] - Loading node environment variables.
Building remotely on component-systest-client-2 (component_system_test_2) in workspace /data/workspace/centos-systest-launcher-2
[WS-CLEANUP] Deleting project workspace...
[WS-CLEANUP] Done
Cloning the remote Git repository
Cloning repository https://github.com/couchbaselabs/sequoia.git
 > /usr/bin/git init /data/workspace/centos-systest-launcher-2 # timeout=10
Fetching upstream changes from https://github.com/couchbaselabs/sequoia.git
 > /usr/bin/git --version # timeout=10
 > /usr/bin/git fetch --tags --progress https://github.com/couchbaselabs/sequoia.git +refs/heads/*:refs/remotes/origin/*
 > /usr/bin/git config remote.origin.url https://github.com/couchbaselabs/sequoia.git # timeout=10
 > /usr/bin/git config --add remote.origin.fetch +refs/heads/*:refs/remotes/origin/* # timeout=10
 > /usr/bin/git config remote.origin.url https://github.com/couchbaselabs/sequoia.git # timeout=10
Fetching upstream changes from https://github.com/couchbaselabs/sequoia.git
 > /usr/bin/git fetch --tags --progress https://github.com/couchbaselabs/sequoia.git +refs/heads/*:refs/remotes/origin/*
 > /usr/bin/git rev-parse refs/remotes/origin/master^{commit} # timeout=10
 > /usr/bin/git rev-parse refs/remotes/origin/origin/master^{commit} # timeout=10
Checking out Revision 43f82c03599b26a99427d2f26a3e92e9bd71e8e8 (refs/remotes/origin/master)
 > /usr/bin/git config core.sparsecheckout # timeout=10
 > /usr/bin/git checkout -f 43f82c03599b26a99427d2f26a3e92e9bd71e8e8
 > /usr/bin/git rev-list 43f82c03599b26a99427d2f26a3e92e9bd71e8e8 # timeout=10
[centos-systest-launcher-2] $ /bin/sh -xe /tmp/jenkins3957256820237459615.sh
+ cd /root/sequoia-provision/
+ git stash
No local changes to save
+ git pull
Already up-to-date.
+ git log -3
commit 325fc28ea9ee4058dc43fb47d3bd1a740e590a9b
Merge: 99e1eb6 a700dd2
Author: admin
Date: Mon Oct 30 23:19:43 2023 -0700

    Merge branch 'master' of https://github.com/couchbaselabs/sequoia-provision

commit a700dd2c5a93540cd2ecdd8cab971bae759d7b8d
Author: Balakumaran G
Date: Mon Oct 30 20:06:19 2023 +0530

    Update hosts

commit 99e1eb6ef17c4ccd9b13edc2e49477bde34d7ab8
Merge: ff482bf a8d435d
Author: admin
Date: Fri Oct 27 07:23:29 2023 -0700

    Merge branch 'master' of https://github.com/couchbaselabs/sequoia-provision

+ cd /root/sequoia-provision/centos2
+ export ANSIBLE_HOST_KEY_CHECKING=False
+ ANSIBLE_HOST_KEY_CHECKING=False
++ echo 7.6.0-1793
++ sed 's/-.*//'
+ VER=7.6.0
++ echo 7.6.0-1793
++ sed 's/.*-//'
+ BUILD=1793
++ echo 2.8.0-374
++ sed 's/-.*//'
+ SGW_VER=2.8.0
++ echo 2.8.0-374
++ sed 's/.*-//'
+ SGW_BUILD_NO=374
+ FLAVOR=sherlock
+ echo 7.6.0-1793
+ grep '4\.5'
+ echo 7.6.0-1793
+ grep '4\.6'
+ echo 7.6.0-1793
+ grep '5\.0'
+ echo 7.6.0-1793
+ grep '5\.1'
+ echo 7.6.0-1793
+ grep '5\.5'
+ echo 7.6.0-1793
+ grep '6\.0'
7.6.0-1793
+ FLAVOR=alice
+ echo 7.6.0-1793
+ grep '6\.5'
+ echo 7.6.0-1793
+ grep '6\.6'
+ echo 7.6.0-1793
+ grep '7\.0'
+ echo 7.6.0-1793
+ grep '7\.1'
+ echo 7.6.0-1793
+ grep '7\.2'
+ echo 7.6.0-1793
+ grep '7\.6'
7.6.0-1793
+ FLAVOR=trinity
+ BUILD_OPTS='-e FLAVOR=trinity -e VER=7.6.0 -e BUILD_NO=1793 -e SGW_VER=2.8.0 -e SGW_BUILD_NO=374'
+ [[ ================= == http* ]]
+ INVENTORY='-i ../ansible/hosts'
+ '[' true = false ']'
[centos-systest-launcher-2] $ /bin/sh -xe /tmp/jenkins1850313449149940165.sh
+ rm 'logs/*'
rm: cannot remove ‘logs/*’: No such file or directory
+ true
+ rm results.tap4j
rm: cannot remove ‘results.tap4j’: No such file or directory
+ true
+ touch logs/__empty__.zip
+ touch results.tap4j
+ ulimit -n 90600
+ echo 'DESC: 7.6.0-1793, -test tests/integration/7.6/test_7.6.yml -scope tests/integration/7.6/scope_7.6_magma.yml @1.x (file:centos_second_cluster.yml provider)'
DESC: 7.6.0-1793, -test tests/integration/7.6/test_7.6.yml -scope tests/integration/7.6/scope_7.6_magma.yml @1.x (file:centos_second_cluster.yml provider)
+ export GOROOT=/usr/local/go/
+ GOROOT=/usr/local/go/
+ export PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/go/bin:/root/go/bin:/opt/godev/bin:/usr/local/go/bin
+ PATH=/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/usr/local/go/bin:/root/go/bin:/opt/godev/bin:/usr/local/go/bin
+ export PROJECT=couchbaselabs
+ PROJECT=couchbaselabs
+ export GOPATH=/opt/godev
+ GOPATH=/opt/godev
+ export GO111MODULE=off
+ GO111MODULE=off
+ go get -u github.com/couchbaselabs/sequoia
# cd /opt/godev/src/github.com/fatih/color; git pull --ff-only
Your configuration specifies to merge with the ref 'master'
from the remote, but no such ref was fetched.
package github.com/fatih/color: exit status 1
# cd /opt/godev/src/github.com/go-ini/ini; git pull --ff-only
Your configuration specifies to merge with the ref 'master'
from the remote, but no such ref was fetched.
package github.com/go-ini/ini: exit status 1
+ true
+ cd /opt/godev/src/github.com/couchbaselabs/sequoia
+ git stash
No local changes to save
+ git checkout master
Already on 'master'
Your branch is ahead of 'origin/master' by 2 commits.
  (use "git push" to publish your local commits)
+ git reset --hard origin/master
HEAD is now at 43f82c0 intorduce ff again
+ git pull
Already up-to-date.
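The version-splitting and flavor-selection steps traced above (echo piped through sed, then a chain of greps against the build string) can be sketched as a standalone snippet. Variable names follow the log; the flavor table is abridged here to the two codenames that actually matched, so the full chain in the real launcher script is longer:

```shell
#!/bin/sh
# Split a build string like "7.6.0-1793" into version and build number,
# mirroring the echo|sed steps in the log above.
CB_BUILD="7.6.0-1793"
VER=$(echo "$CB_BUILD" | sed 's/-.*//')    # strip "-<build>" suffix -> 7.6.0
BUILD=$(echo "$CB_BUILD" | sed 's/.*-//')  # strip "<ver>-" prefix  -> 1793

# Flavor selection greps the build string for a version substring and keeps
# overwriting FLAVOR with later codenames; note "7.6.0" also contains "6.0",
# which is why the log shows FLAVOR=alice before it settles on trinity.
FLAVOR=sherlock
echo "$CB_BUILD" | grep -q '6\.0' && FLAVOR=alice
echo "$CB_BUILD" | grep -q '7\.6' && FLAVOR=trinity

echo "VER=$VER BUILD=$BUILD FLAVOR=$FLAVOR"
# -> VER=7.6.0 BUILD=1793 FLAVOR=trinity
```

The overwrite-on-match chain means the last matching grep wins, which is what lets a newer version string fall through several older codenames before landing on the right one.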
+ git log -3
commit 43f82c03599b26a99427d2f26a3e92e9bd71e8e8
Author: Balakumaran G
Date: Mon Nov 6 08:39:51 2023 +0530

    intorduce ff again

    Change-Id: Ic1ddda47e847c5634bd62605cb7effb9184f2d6c
    Reviewed-on: https://review.couchbase.org/c/sequoia/+/200123
    Reviewed-by: Raghav S K
    Tested-by: Balakumaran G

commit ff906fede28a98356f1ad46681e0aae71a3900cd
Author: Balakumaran G
Date: Fri Nov 3 18:37:38 2023 +0530

    move flaky node the end of the provider

    Change-Id: Ieb6c2d768dad18467ff82e90d4a5d5a6932aa20e
    Reviewed-on: https://review.couchbase.org/c/sequoia/+/200057
    Tested-by: Balakumaran G
    Reviewed-by: Balakumaran G
    Reviewed-by:

commit 485432659b31fe1c8b227e5425e9f48d812bcbab
Author: Balakumaran G
Date: Thu Nov 2 12:23:19 2023 +0530

    Reduce AF timeout

    Change-Id: Ie4bd94642720567fac6ac3ee8ee0d7248d3b2696
    Reviewed-on: https://review.couchbase.org/c/sequoia/+/199959
    Reviewed-by: Balakumaran G
    Reviewed-by: Sujay Gad
    Tested-by: Balakumaran G

+ '[' None '!=' None ']'
+ cd /opt/godev/src/github.com/fsouza/go-dockerclient
+ git reset --hard bda2dedfde2e0bde058f61c1bacf68421c4dd331
HEAD is now at bda2ded build(deps): bump golang.org/x/term from 0.10.0 to 0.11.0 (#1003)
+ cd -
/opt/godev/src/github.com/couchbaselabs/sequoia
+ cd /opt/godev/src/github.com/docker/docker/vendor/github.com/klauspost/compress
+ git reset --hard 452ca90fe5f010f96fd50bb0fa9622e6e3d1e50d
HEAD is now at 452ca90 Merge pull request #46698 from thaJeztah/update_gowinres
+ cd -
/opt/godev/src/github.com/couchbaselabs/sequoia
+ go build -o sequoia
+ pwd
/opt/godev/src/github.com/couchbaselabs/sequoia
+ EXTOPTS='-skip_setup=false -skip_test=false -skip_teardown=true -skip_cleanup=false -continue=false -collect_on_error=false -stop_on_error=false -duration=1209600 -show_topology=true'
+ [[ -n '' ]]
+ git fetch https://review.couchbase.org/sequoia refs/changes/29/197029/3
From https://review.couchbase.org/sequoia
 * branch refs/changes/29/197029/3 -> FETCH_HEAD
+ git cherry-pick FETCH_HEAD
[master 75d0d4b] temp test
 Author: Balakumaran G
 5 files changed, 34 insertions(+), 8 deletions(-)
 create mode 100644 .idea/misc.xml
 create mode 100644 .idea/modules.xml
 create mode 100644 .idea/sequoia.iml
 create mode 100644 .idea/vcs.xml
+ git fetch https://review.couchbase.org/sequoia refs/changes/28/198628/5
From https://review.couchbase.org/sequoia
 * branch refs/changes/28/198628/5 -> FETCH_HEAD
+ git cherry-pick FETCH_HEAD
[master 76230e2] temp test
 Author: Balakumaran G
 1 file changed, 8 insertions(+), 8 deletions(-)
+ ./sequoia -client 172.23.104.168:2375 -provider file:centos_second_cluster.yml -test tests/integration/7.6/test_7.6.yml -scope tests/integration/7.6/scope_7.6_magma.yml -scale 1 -repeat 0 -log_level 0 -version 7.6.0-1793 -skip_setup=false -skip_test=false -skip_teardown=true -skip_cleanup=false -continue=false -collect_on_error=false -stop_on_error=false -duration=1209600 -show_topology=true
→ parsed tests/integration/7.6/scope_7.6_magma.yml
→ parsed tests/integration/7.6/test_7.6.yml
→ remove /keen_yonath
→ remove /elegant_torvalds
→ remove /compassionate_brattain
→ remove /sleepy_goldberg
→ remove /agitated_brattain
→ remove /boring_sammet
→ remove /eloquent_meninsky
→ remove /adoring_hawking
→ remove /eloquent_northcutt
→ remove /stupefied_swanson
→ remove /sleepy_torvalds
→ parsed providers/file/centos_second_cluster.yml
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[pull] martin/wait
[2023-11-14T08:19:01-08:00, martin/wait:ada7da] -c 172.23.120.73:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:c22bb7] -c 172.23.97.74:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:c7702b] -c 172.23.96.14:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:d7a0ae] -c 172.23.96.243:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:fb78fa] -c 172.23.97.148:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:ebe4e1] -c 172.23.120.74:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:4e5491] -c 172.23.96.48:8091 -t 120
[2023-11-14T08:19:01-08:00, martin/wait:b208d7] -c 172.23.106.137:8091 -t 120
[2023-11-14T08:19:02-08:00, martin/wait:a386fb] -c 172.23.97.150:8091 -t 120
[2023-11-14T08:19:02-08:00, martin/wait:9a3f11] -c 172.23.123.32:8091 -t 120
[2023-11-14T08:19:02-08:00, martin/wait:e8f854] -c 172.23.97.110:8091 -t 120
[2023-11-14T08:19:02-08:00, martin/wait:c3ade3] -c 172.23.121.77:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:c608f1] -c 172.23.120.77:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:405c01] -c 172.23.120.75:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:911b60] -c 172.23.97.105:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:b8abca] -c 172.23.106.136:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:399f95] -c 172.23.123.25:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:f7a449] -c 172.23.97.241:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:8d0d51] -c 172.23.96.122:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:c95c06] -c 172.23.120.58:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:9c228d] -c 172.23.123.33:8091 -t 120
[2023-11-14T08:19:03-08:00, martin/wait:655b30] -c 172.23.120.81:8091 -t 120
[2023-11-14T08:19:04-08:00, martin/wait:53a53f] -c 172.23.96.254:8091 -t 120
[2023-11-14T08:19:04-08:00, martin/wait:50b43c] -c 172.23.120.86:8091 -t 120
[2023-11-14T08:19:04-08:00, martin/wait:5d4a52] -c 172.23.123.26:8091 -t 120
[2023-11-14T08:19:04-08:00, martin/wait:c2dd54] -c 172.23.97.149:8091 -t 120
[2023-11-14T08:19:04-08:00, martin/wait:92d602] -c 172.23.106.134:8091 -t 120
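Each martin/wait entry above polls one node's host:port (here, the Couchbase admin port 8091) with a 120-second timeout before setup proceeds. The sketch below is a minimal stand-in for that pattern, not the real container's logic — its actual readiness check may differ; only the `-c <host:port>` / `-t <seconds>` shape is taken from the log:

```shell
#!/bin/bash
# Minimal sketch of a "wait for node" step: poll host:port until a TCP
# connection succeeds or the timeout expires. Uses bash's /dev/tcp
# redirection so no external tools are required.
wait_for_node() {
  local hostport=$1 timeout=$2 waited=0
  local host=${hostport%:*} port=${hostport#*:}
  until (exec 3<>"/dev/tcp/${host}/${port}") 2>/dev/null; do
    sleep 2
    waited=$((waited + 2))
    if [ "$waited" -ge "$timeout" ]; then
      return 1   # node did not come up within the timeout
    fi
  done
  return 0
}

# Example (host taken from the log entries above):
# wait_for_node 172.23.120.73:8091 120
```

Running one such wait per node in parallel, as the launcher does with one container per node, keeps total setup latency close to the slowest node rather than the sum of all of them.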
[2023-11-14T08:19:04-08:00, martin/wait:bd2083] -c 172.23.123.31:8091 -t 120
[2023-11-14T08:19:04-08:00, martin/wait:ccb7ee] -c 172.23.97.112:8091 -t 120
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:0f0d3d] node-init -c 172.23.96.254 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:0a32c9] node-init -c 172.23.96.48 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:4400c9] node-init -c 172.23.97.74 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:3e13f7] node-init -c 172.23.106.137 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:097d98] node-init -c 172.23.120.73 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:d15f7d] node-init -c 172.23.97.105 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:3b1755] node-init -c 172.23.97.241 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:1bd0f6] node-init -c 172.23.97.148 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:09-08:00, sequoiatools/couchbase-cli:7.6:ccd5d1] node-init -c 172.23.97.149 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:10-08:00, sequoiatools/couchbase-cli:7.6:8dcd7d] node-init -c 172.23.120.81 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:10-08:00, sequoiatools/couchbase-cli:7.6:46e71f] node-init -c 172.23.97.112 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:10-08:00, sequoiatools/couchbase-cli:7.6:cecb42] node-init -c 172.23.123.32 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:10-08:00, sequoiatools/couchbase-cli:7.6:1523b1] node-init -c 172.23.120.74 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:10-08:00, sequoiatools/couchbase-cli:7.6:b7994f] node-init -c 172.23.121.77 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:d6a8d9] node-init -c 172.23.97.150 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:ef9085] node-init -c 172.23.106.136 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:260c9a] node-init -c 172.23.123.25 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:948b94] node-init -c 172.23.96.243 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:e6b196] node-init -c 172.23.120.77 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:200626] node-init -c 172.23.120.86 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:2db096] node-init -c 172.23.97.110 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:68552c] node-init -c 172.23.106.134 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:63d110] node-init -c 172.23.96.122 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:8a3b0e] node-init -c 172.23.123.31 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:f878fd] node-init -c 172.23.123.26 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:11-08:00, sequoiatools/couchbase-cli:7.6:1af1ad] node-init -c 172.23.96.14 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:12-08:00, sequoiatools/couchbase-cli:7.6:593a8e] node-init -c 172.23.120.58 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:12-08:00, sequoiatools/couchbase-cli:7.6:efe053] node-init -c 172.23.123.33 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[2023-11-14T08:19:12-08:00, sequoiatools/couchbase-cli:7.6:065730] node-init -c 172.23.120.75 -u Administrator -p password --node-init-data-path /data/couchbase --node-init-index-path /data/couchbase --node-init-analytics-path /data/couchbase
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:17-08:00, sequoiatools/couchbase-cli:7.6:3bd56d] cluster-init -c 172.23.97.74 --cluster-username Administrator --cluster-password password --cluster-port 8091 --cluster-ramsize 11970 --services data --cluster-index-ramsize 19152 --cluster-fts-ramsize 19152 --index-storage-setting default --cluster-analytics-ramsize 21546 --cluster-eventing-ramsize 21546
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[cluster-init -c 172.23.97.74 --cluster-username Administrator --cluster-password password --cluster-port 8091 --cluster-ramsize 11970 --services data --cluster-index-ramsize 19152 --cluster-fts-ramsize 19152 --index-storage-setting default --cluster-analytics-ramsize 21546 --cluster-eventing-ramsize 21546]
docker logs 3bd56d
docker start 3bd56d
ERROR: Cluster is already initialized, use setting-cluster to change settings
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:20-08:00, sequoiatools/couchbase-cli:7.6:98547a] cluster-init -c 172.23.106.136 --cluster-username Administrator --cluster-password password --cluster-port 8091 --cluster-ramsize 14353 --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[cluster-init -c 172.23.106.136 --cluster-username Administrator --cluster-password password --cluster-port 8091 --cluster-ramsize 14353 --services data]
docker logs 98547a
docker start 98547a
ERROR: Cluster is already initialized, use setting-cluster to change settings
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:23-08:00, sequoiatools/couchbase-cli:7.6:5134a8] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username default --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:26-08:00, sequoiatools/couchbase-cli:7.6:bc2cbf] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username WAREHOUSE --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:29-08:00, sequoiatools/couchbase-cli:7.6:10e44e] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username NEW_ORDER --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:32-08:00, sequoiatools/couchbase-cli:7.6:85a2f1] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username ITEM --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:34-08:00, sequoiatools/couchbase-cli:7.6:340372] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username bucket4 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:37-08:00, sequoiatools/couchbase-cli:7.6:1dc76b] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username bucket5 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:40-08:00, sequoiatools/couchbase-cli:7.6:4b5d4b] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username bucket6 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:43-08:00, sequoiatools/couchbase-cli:7.6:2753c3] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username bucket7 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:45-08:00, sequoiatools/couchbase-cli:7.6:102877] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username bucket8 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:48-08:00, sequoiatools/couchbase-cli:7.6:45f0c2] user-manage -c 172.23.97.74 -u Administrator -p password --rbac-username bucket9 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:51-08:00, sequoiatools/couchbase-cli:7.6:78a509] user-manage -c 172.23.106.136 -u Administrator -p password --rbac-username remote --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:54-08:00, sequoiatools/couchbase-cli:7.6:288cf5] user-manage -c 172.23.106.136 -u Administrator -p password --rbac-username bucket4 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:19:57-08:00, sequoiatools/couchbase-cli:7.6:0c3cae] user-manage -c 172.23.106.136 -u Administrator -p password --rbac-username bucket8 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:00-08:00, sequoiatools/couchbase-cli:7.6:8d0984] user-manage -c 172.23.106.136 -u Administrator -p password --rbac-username bucket9 --rbac-password password --roles admin --auth-domain local --set
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:03-08:00, sequoiatools/couchbase-cli:7.6:f8f91c] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.14 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.14 --server-add-username Administrator --server-add-password password --services data]
docker logs f8f91c
docker start f8f91c
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:06-08:00, sequoiatools/couchbase-cli:7.6:f79e19] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.241 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.241 --server-add-username Administrator --server-add-password password --services data]
docker logs f79e19
docker start f79e19
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:09-08:00, sequoiatools/couchbase-cli:7.6:7b1d13] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.48 --server-add-username Administrator --server-add-password password --services data
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:20-08:00, sequoiatools/couchbase-cli:7.6:d865e4] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.122 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.122 --server-add-username Administrator --server-add-password password --services data]
docker logs d865e4
docker start d865e4
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:23-08:00, sequoiatools/couchbase-cli:7.6:f03b77] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.73 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.73 --server-add-username Administrator --server-add-password password --services data]
docker logs f03b77
docker start f03b77
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:26-08:00, sequoiatools/couchbase-cli:7.6:71b681] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.121.77 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.121.77 --server-add-username Administrator --server-add-password password --services data]
docker logs 71b681
docker start 71b681
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:29-08:00, sequoiatools/couchbase-cli:7.6:007400] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.25 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.25 --server-add-username Administrator --server-add-password password --services data]
docker logs 007400
docker start 007400
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:33-08:00, sequoiatools/couchbase-cli:7.6:19a327] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.26 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.26 --server-add-username Administrator --server-add-password password --services data]
docker logs 19a327
docker start 19a327
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:36-08:00, sequoiatools/couchbase-cli:7.6:852daa] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.77 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.77 --server-add-username Administrator --server-add-password password --services data]
docker logs 852daa
docker start 852daa
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:39-08:00, sequoiatools/couchbase-cli:7.6:128995] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.86 --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.86 --server-add-username Administrator --server-add-password password --services data]
docker logs 128995
docker start 128995
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:43-08:00, sequoiatools/couchbase-cli:7.6:bffac6] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.74 --server-add-username Administrator --server-add-password password --services analytics
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.74 --server-add-username Administrator --server-add-password password --services analytics]
docker logs bffac6
docker start bffac6
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:46-08:00, sequoiatools/couchbase-cli:7.6:a4957e] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.75 --server-add-username Administrator --server-add-password password --services analytics
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.75 --server-add-username Administrator --server-add-password password --services analytics]
docker logs a4957e
docker start a4957e
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:20:50-08:00, sequoiatools/couchbase-cli:7.6:cba710] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.81 --server-add-username Administrator --server-add-password password --services eventing
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.81 --server-add-username Administrator --server-add-password password --services eventing]
docker logs cba710
docker start cba710
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:20:53-08:00, sequoiatools/couchbase-cli:7.6:9fa9c6] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.58 --server-add-username Administrator --server-add-password password --services eventing → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.120.58 --server-add-username Administrator --server-add-password password --services eventing] docker logs 9fa9c6 docker start 9fa9c6 =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:20:56-08:00, sequoiatools/couchbase-cli:7.6:58a6e9] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.33 --server-add-username Administrator --server-add-password password --services backup → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.33 --server-add-username Administrator --server-add-password password --services backup] docker logs 58a6e9 docker start 58a6e9 =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:20:59-08:00, sequoiatools/couchbase-cli:7.6:f6aba1] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.31 --server-add-username Administrator --server-add-password password --services index → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.31 --server-add-username Administrator --server-add-password password --services index] docker logs f6aba1 docker start f6aba1 =ERROR: Prepare join failed. Node is already part of cluster. 
[pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:03-08:00, sequoiatools/couchbase-cli:7.6:e4b71a] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.32 --server-add-username Administrator --server-add-password password --services index → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.123.32 --server-add-username Administrator --server-add-password password --services index] docker logs e4b71a docker start e4b71a =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:07-08:00, sequoiatools/couchbase-cli:7.6:be9ac4] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.254 --server-add-username Administrator --server-add-password password --services index → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.254 --server-add-username Administrator --server-add-password password --services index] docker logs be9ac4 docker start be9ac4 =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:10-08:00, sequoiatools/couchbase-cli:7.6:be5c1b] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.112 --server-add-username Administrator --server-add-password password --services index → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.112 --server-add-username Administrator --server-add-password password --services index] docker logs be5c1b docker start be5c1b =ERROR: Prepare join failed. Node is already part of cluster. 
[pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:14-08:00, sequoiatools/couchbase-cli:7.6:2c0810] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.243 --server-add-username Administrator --server-add-password password --services query → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.96.243 --server-add-username Administrator --server-add-password password --services query] docker logs 2c0810 docker start 2c0810 =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:17-08:00, sequoiatools/couchbase-cli:7.6:e73e3c] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.105 --server-add-username Administrator --server-add-password password --services query → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.105 --server-add-username Administrator --server-add-password password --services query] docker logs e73e3c docker start e73e3c =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:21-08:00, sequoiatools/couchbase-cli:7.6:368765] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.110 --server-add-username Administrator --server-add-password password --services fts → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.110 --server-add-username Administrator --server-add-password password --services fts] docker logs 368765 docker start 368765 =ERROR: Prepare join failed. Node is already part of cluster. 
[pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:24-08:00, sequoiatools/couchbase-cli:7.6:492c59] server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.148 --server-add-username Administrator --server-add-password password --services fts → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74 -u Administrator -p password --server-add 172.23.97.148 --server-add-username Administrator --server-add-password password --services fts] docker logs 492c59 docker start 492c59 =ERROR: Prepare join failed. Node is already part of cluster. [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:21:27-08:00, sequoiatools/couchbase-cli:7.6:b83a79] server-add -c 172.23.106.136 -u Administrator -p password --server-add 172.23.106.137 --server-add-username Administrator --server-add-password password --services data → Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.106.136 -u Administrator -p password --server-add 172.23.106.137 --server-add-username Administrator --server-add-password password --services data] docker logs b83a79 docker start b83a79 =ERROR: Prepare join failed. Node is already part of cluster. 
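Every `server-add` above fails with "Node is already part of cluster" because the harness re-adds nodes that joined in a previous run. A minimal sketch of an idempotency guard (not part of the Sequoia harness; `current_members` is assumed to come from something like `couchbase-cli server-list`):

```python
# Hypothetical helper: filter out nodes that are already cluster members
# before issuing server-add, avoiding "Node is already part of cluster".
def nodes_to_add(candidates, current_members):
    """Return only the candidate nodes not already in the cluster."""
    members = set(current_members)
    return [node for node in candidates if node not in members]

candidates = ["172.23.123.26", "172.23.120.77", "172.23.99.99"]
current_members = ["172.23.97.74", "172.23.123.26", "172.23.120.77"]
print(nodes_to_add(candidates, current_members))  # → ['172.23.99.99']
```

With a guard like this, the rebalance below would still run, but the noisy failed `server-add` containers would be skipped.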
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:21:30-08:00, sequoiatools/couchbase-cli:7.6:f45fe2] rebalance -c 172.23.97.74 -u Administrator -p password
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:11-08:00, sequoiatools/couchbase-cli:7.6:b1442b] rebalance -c 172.23.106.136 -u Administrator -p password
[pull] appropriate/curl
[2023-11-14T08:26:17-08:00, appropriate/curl:e61e79] -s -X POST -u Administrator:password http://172.23.97.74:8091/internalSettings -d magmaMinMemoryQuota=256
[pull] appropriate/curl
[2023-11-14T08:26:19-08:00, appropriate/curl:4610dc] -s -X POST -u Administrator:password http://172.23.106.136:8091/internalSettings -d magmaMinMemoryQuota=256
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:21-08:00, sequoiatools/couchbase-cli:7.6:a0b19c] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket default --bucket-ramsize 4189 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --compression-mode active --storage-backend magma --history-retention-bytes 134217728000 --history-retention-seconds 86400 --enable-history-retention-by-default 1 --rank 3
→ Error occurred on container a0b19c (docker logs a0b19c / docker start a0b19c): ERROR: ramQuota - RAM quota specified is too large to be provisioned into this cluster.
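The `ramQuota - RAM quota specified is too large` error above means the requested `--bucket-ramsize` does not fit in the memory the cluster can still provision. A rough feasibility pre-check, simplifying the server's actual accounting (`data_quota_mb` and `allocated_mb` are hypothetical inputs, e.g. from the cluster's pools API):

```python
# Hedged sketch: would a new bucket's RAM quota fit in the unallocated
# portion of the data-service memory quota? This is a simplification of
# the real Couchbase provisioning check, for illustration only.
def bucket_fits(requested_mb, data_quota_mb, allocated_mb):
    """True if requested_mb fits in the remaining (unallocated) quota."""
    return requested_mb <= data_quota_mb - allocated_mb

print(bucket_fits(4189, 4096, 0))  # → False: 4189 MB exceeds a 4096 MB quota
print(bucket_fits(598, 4096, 0))   # → True
```

A check like this run before `bucket-create` would distinguish "quota too large" failures from the "name already exists" failures that follow.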
ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:24-08:00, sequoiatools/couchbase-cli:7.6:82511d] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket WAREHOUSE --bucket-ramsize 1795 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 268435456000 --history-retention-seconds 43200 --enable-history-retention-by-default 1 --rank 3
→ Error occurred on container 82511d (docker logs 82511d / docker start 82511d): ERROR: ramQuota - RAM quota specified is too large to be provisioned into this cluster.
ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:26-08:00, sequoiatools/couchbase-cli:7.6:5ee66f] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket NEW_ORDER --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --max-ttl 10800 --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 3600 --enable-history-retention-by-default 1 --rank 2
→ Error occurred on container 5ee66f (docker logs 5ee66f / docker start 5ee66f): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:29-08:00, sequoiatools/couchbase-cli:7.6:b8ef52] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket ITEM --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 53687091200 --history-retention-seconds 7200 --enable-history-retention-by-default 1 --rank 2
→ Error occurred on container b8ef52 (docker logs b8ef52 / docker start b8ef52): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:32-08:00, sequoiatools/couchbase-cli:7.6:c44ed8] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket bucket4 --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 14440 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container c44ed8 (docker logs c44ed8 / docker start c44ed8): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:35-08:00, sequoiatools/couchbase-cli:7.6:0daa2e] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket bucket5 --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container 0daa2e (docker logs 0daa2e / docker start 0daa2e): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:38-08:00, sequoiatools/couchbase-cli:7.6:e7f375] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket bucket6 --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container e7f375 (docker logs e7f375 / docker start e7f375): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:40-08:00, sequoiatools/couchbase-cli:7.6:b453b9] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket bucket7 --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container b453b9 (docker logs b453b9 / docker start b453b9): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:43-08:00, sequoiatools/couchbase-cli:7.6:a358e3] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket bucket8 --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container a358e3 (docker logs a358e3 / docker start a358e3): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:45-08:00, sequoiatools/couchbase-cli:7.6:e684a4] bucket-create -c 172.23.97.74 -u Administrator -p password --bucket bucket9 --bucket-ramsize 598 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1
→ Error occurred on container e684a4 (docker logs e684a4 / docker start e684a4): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:48-08:00, sequoiatools/couchbase-cli:7.6:38e662] bucket-create -c 172.23.106.136 -u Administrator -p password --bucket remote --bucket-ramsize 11482 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1
→ Error occurred on container 38e662 (docker logs 38e662 / docker start 38e662): ERROR: ramQuota - RAM quota specified is too large to be provisioned into this cluster.
ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:51-08:00, sequoiatools/couchbase-cli:7.6:16d869] bucket-create -c 172.23.106.136 -u Administrator -p password --bucket bucket4 --bucket-ramsize 717 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 14440 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container 16d869 (docker logs 16d869 / docker start 16d869): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:54-08:00, sequoiatools/couchbase-cli:7.6:f9f293] bucket-create -c 172.23.106.136 -u Administrator -p password --bucket bucket8 --bucket-ramsize 717 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1 --rank 1
→ Error occurred on container f9f293 (docker logs f9f293 / docker start f9f293): ERROR: name - Bucket with given name already exists
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:26:57-08:00, sequoiatools/couchbase-cli:7.6:a087c7] bucket-create -c 172.23.106.136 -u Administrator -p password --bucket bucket9 --bucket-ramsize 717 --bucket-type couchbase --bucket-replica 1 --enable-flush 1 --wait --bucket-eviction-policy fullEviction --storage-backend magma --history-retention-bytes 2147483648 --history-retention-seconds 86400 --enable-history-retention-by-default 1
→ Error occurred on container a087c7 (docker logs a087c7 / docker start a087c7): ERROR: name - Bucket with given name already exists
########## Cluster config ##################
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ###########
###### eventing : 2 ===== > [172.23.120.58:8091 172.23.120.81:8091] ###########
###### kv : 11 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091
172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.96.48:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ###########
###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
[pull] appropriate/curl
[pull] appropriate/curl
[2023-11-14T08:27:02-08:00, appropriate/curl:007786] -s -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.97.74:8092/default/_design/scale -d {"views":{"stats":{"map":"function(doc, meta){ if(doc.profile){ if((doc.rating > 500) && (doc.rating < 520)){ emit(meta.id, doc.ops_sec); }} }", "reduce": "_stats"},"padd":{"map":"function(doc, meta){ if(doc.profile){ if (doc.rating < 200){ emit(meta.id, doc.padding); }} }"},"array":{"map":"function(doc, meta){ if(doc.profile){ if((doc.rating > 200) && (doc.rating< 300)){ emit(doc.active_hosts, null); }} }"}}}
[pull] appropriate/curl
[2023-11-14T08:27:04-08:00, appropriate/curl:c58d26] -s -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.97.74:8092/ITEM/_design/all -d {"views":{"all_ids":{"map":"function(doc, meta){ emit(meta.id, null) }"}}}
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/sgw-config
[2023-11-14T08:27:06-08:00, sequoiatools/sgw-config:b4470f] MOBILE_TESTKIT_BRANCH=sequoia/sgw-component-testing SSH_USER=root SSH_PWD=couchbase CBS_HOSTS=172.23.97.74,172.23.96.14,172.23.97.241,172.23.96.48,172.23.96.122,172.23.120.73,172.23.121.77,172.23.123.25,172.23.123.26,172.23.120.77,172.23.120.86,172.23.120.74,172.23.120.75,172.23.120.81,172.23.120.58,172.23.123.33,172.23.123.31,172.23.123.32,172.23.96.254,172.23.97.112,172.23.96.243,172.23.97.105,172.23.97.110,172.23.97.148,172.23.97.149,172.23.97.150,172.23.106.134 SGW_HOSTS=172.23.104.254 BUCKET_NAME=bucket7 BUCKET_USER=bucket7 BUCKET_USER_PASSWORD=password
→ Error occurred on container b4470f (docker logs b4470f / docker start b4470f)
Switched to a new branch 'sequoia/sgw-component-testing'
Branch sequoia/sgw-component-testing set up to track remote branch sequoia/sgw-component-testing from origin.
(fatal: unable to connect to github.com: :github.com[0: 192.30.255.113]: errno=Connection timed out  (fatal: unable to connect to github.com: :github.com[0: 192.30.255.112]: errno=Connection timed out  =HEAD is now at b3e3828 set number of replica to 1 as default Using Python3 version: 3.6.8 Vvirtualenv 20.13.2 from /usr/local/lib/python3.6/site-packages/virtualenv/__init__.py =created virtual environment CPython3.6.8.final.0-64 in 509ms c creator CPython3Posix(dest=/mobile-testkit/venv, clear=False, no_vcs_ignore=False, global=False)  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv) H added seed packages: pip==21.3.1, setuptools==59.6.0, wheel==0.37.1 n activators BashActivator,CShellActivator,FishActivator,NushellActivator,PowerShellActivator,PythonActivator Collecting ansible==2.7 - Downloading ansible-2.7.0.tar.gz (11.8 MB) ERROR: Exception: #Traceback (most recent call last): ~ File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/cli/base_command.py", line 164, in exc_logging_wrapper  status = run_func(*args) q File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/cli/req_command.py", line 205, in wrapper % return func(self, options, args) n File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/commands/install.py", line 339, in run 8 reqs, check_supported_wheels=not options.target_dir  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/resolver.py", line 93, in resolve H collected.requirements, max_rounds=try_to_avoid_resolution_too_deep t File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 482, in resolve D state = resolution.resolve(requirements, max_rounds=max_rounds) t File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 349, in resolve ? 
self._add_to_criteria(self.state.criteria, r, parent=None) } File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/resolvers.py", line 173, in _add_to_criteria ! if not criterion.candidates: s File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_vendor/resolvelib/structs.py", line 151, in __bool__  return bool(self._sequence)  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 155, in __bool__  return any(self)  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 143, in H return (c for c in iterator if id(c) not in self._incompatible_ids)  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/found_candidates.py", line 47, in _iter_built  candidate = func()  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/factory.py", line 206, in _make_candidate_from_link  version=version,  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 287, in __init__  version=version,  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 156, in __init__  self.dist = self._prepare()  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 225, in _prepare ( dist = self._prepare_distribution()  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/resolution/resolvelib/candidates.py", line 292, in _prepare_distribution Q return preparer.prepare_linked_requirement(self._ireq, parallel_builds=True)  File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 482, in prepare_linked_requirement B return self._prepare_linked_requirement(req, parallel_builds)  File 
"/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 528, in _prepare_linked_requirement D link, req.source_dir, self._download, self.download_dir, hashes w File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/operations/prepare.py", line 223, in unpack_url 8 unpack_file(file.path, location, file.content_type) u File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/utils/unpacking.py", line 247, in unpack_file # untar_file(filename, location) t File "/mobile-testkit/venv/lib/python3.6/site-packages/pip/_internal/utils/unpacking.py", line 218, in untar_file % with open(path, "wb") as destfp: kUnicodeEncodeError: 'ascii' codec can't encode character '\xe9' in position 112: ordinal not in range(128)  + set -e + python utilities/sequoia_env_prep.py --ssh-user=root --cbs-hosts=172.23.97.74,172.23.96.14,172.23.97.241,172.23.96.48,172.23.96.122,172.23.120.73,172.23.121.77,172.23.123.25,172.23.123.26,172.23.120.77,172.23.120.86,172.23.120.74,172.23.120.75,172.23.120.81,172.23.120.58,172.23.123.33,172.23.123.31,172.23.123.32,172.23.96.254,172.23.97.112,172.23.96.243,172.23.97.105,172.23.97.110,172.23.97.148,172.23.97.149,172.23.97.150,172.23.106.134 --sgw-hosts=172.23.104.254 + cat ./resources/pool.json + cat ansible.cfg '{"ips": ["172.23.97.74", "172.23.96.14", "172.23.97.241", "172.23.96.48", "172.23.96.122", "172.23.120.73", "172.23.121.77", "172.23.123.25", "172.23.123.26", "172.23.120.77", "172.23.120.86", "172.23.120.74", "172.23.120.75", "172.23.120.81", "172.23.120.58", "172.23.123.33", "172.23.123.31", "172.23.123.32", "172.23.96.254", "172.23.97.112", "172.23.96.243", "172.23.97.105", "172.23.97.110", "172.23.97.148", "172.23.97.149", "172.23.97.150", "172.23.106.134", "172.23.104.254"], "ip_to_node_type": {"172.23.97.74": "couchbase_servers", "172.23.96.14": "couchbase_servers", "172.23.97.241": "couchbase_servers", "172.23.96.48": "couchbase_servers", "172.23.96.122": "couchbase_servers", 
"172.23.120.73": "couchbase_servers", "172.23.121.77": "couchbase_servers", "172.23.123.25": "couchbase_servers", "172.23.123.26": "couchbase_servers", "172.23.120.77": "couchbase_servers", "172.23.120.86": "couchbase_servers", "172.23.120.74": "couchbase_servers", "172.23.120.75": "couchbase_servers", "172.23.120.81": "couchbase_servers", "172.23.120.58": "couchbase_servers", "172.23.123.33": "couchbase_servers", "172.23.123.31": "couchbase_servers", "172.23.123.32": "couchbase_servers", "172.23.96.254": "couchbase_servers", "172.23.97.112": "couchbase_servers", "172.23.96.243": "couchbase_servers", "172.23.97.105": "couchbase_servers", "172.23.97.110": "couchbase_servers", "172.23.97.148": "couchbase_servers", "172.23.97.149": "couchbase_servers", "172.23.97.150": "couchbase_servers", "172.23.106.134": "couchbase_servers", "172.23.104.254": "sync_gateways"}}[defaults] remote_user = root host_key_checking = False <+ python libraries/utilities/generate_clusters_from_pool.py ^WARNING:root:WARNING: Skipping config base_di since 1 sg_accels required, but only 0 provided FWARNING:root:WARNING: Removing the partially generated config base_di \WARNING:root:WARNING: Skipping config ci_di since 2 sg_accels required, but only 0 provided DWARNING:root:WARNING: Removing the partially generated config ci_di eWARNING:root:WARNING: Skipping config base_lb_cc since 3 sync_gateways required, but only 1 provided IWARNING:root:WARNING: Removing the partially generated config base_lb_cc eWARNING:root:WARNING: Skipping config base_lb_di since 3 sync_gateways required, but only 1 provided IWARNING:root:WARNING: Removing the partially generated config base_lb_di cWARNING:root:WARNING: Skipping config ci_lb_cc since 3 sync_gateways required, but only 1 provided GWARNING:root:WARNING: Removing the partially generated config ci_lb_cc :Using the following machines to run functional tests ... 
['172.23.97.74', '172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
{'172.23.97.74': 'couchbase_servers', '172.23.96.14': 'couchbase_servers', '172.23.97.241': 'couchbase_servers', '172.23.96.48': 'couchbase_servers', '172.23.96.122': 'couchbase_servers', '172.23.120.73': 'couchbase_servers', '172.23.121.77': 'couchbase_servers', '172.23.123.25': 'couchbase_servers', '172.23.123.26': 'couchbase_servers', '172.23.120.77': 'couchbase_servers', '172.23.120.86': 'couchbase_servers', '172.23.120.74': 'couchbase_servers', '172.23.120.75': 'couchbase_servers', '172.23.120.81': 'couchbase_servers', '172.23.120.58': 'couchbase_servers', '172.23.123.33': 'couchbase_servers', '172.23.123.31': 'couchbase_servers', '172.23.123.32': 'couchbase_servers', '172.23.96.254': 'couchbase_servers', '172.23.97.112': 'couchbase_servers', '172.23.96.243': 'couchbase_servers', '172.23.97.105': 'couchbase_servers', '172.23.97.110': 'couchbase_servers', '172.23.97.148': 'couchbase_servers', '172.23.97.149': 'couchbase_servers', '172.23.97.150': 'couchbase_servers', '172.23.106.134': 'couchbase_servers', '172.23.104.254': 'sync_gateways'}
Generating 'resources/cluster_configs/'.
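The generate_clusters_from_pool step above only emits a cluster config when the pool supplies enough machines of every required node type; otherwise it logs the "Skipping config … required, but only … provided" warnings seen here. A minimal sketch of that filtering rule (the function name and data shapes are hypothetical illustrations, not the script's actual code):

```python
# Hypothetical sketch of the pool-filtering rule implied by the log:
# a config is generated only if every required node type is fully
# covered by the pool; otherwise it is skipped with a warning.

def can_generate(requirements, pool):
    """requirements: e.g. {"sync_gateways": 3, "sg_accels": 1}
    pool: node type -> list of available host IPs."""
    for node_type, required in requirements.items():
        provided = len(pool.get(node_type, []))
        if provided < required:
            print(f"WARNING: Skipping config since {required} {node_type} "
                  f"required, but only {provided} provided")
            return False
    return True

# One Sync Gateway in the pool satisfies base_cc, but not base_lb_cc
# (which the log says needs 3 sync_gateways):
print(can_generate({"sync_gateways": 1}, {"sync_gateways": ["172.23.104.254"]}))  # → True
print(can_generate({"sync_gateways": 3}, {"sync_gateways": ["172.23.104.254"]}))  # → False
```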
Using docker: False ips: ['172.23.97.74', '172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']  Generating config: base_cc REMOVING 172.23.104.254 and ['172.23.104.254'] from ['172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254'] webhook ip: 172.17.0.2 Generating base_cc.json ips: ['172.23.97.74', '172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']  Generating config: base_di REMOVING 172.23.104.254 and ['172.23.104.254'] from ['172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', 
'172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
WARNING: Skipping config base_di since 1 sg_accels required, but only 0 provided
WARNING: Removing the partially generated config base_di
ips: ['172.23.97.74', '172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
Generating config: ci_cc
REMOVING 172.23.104.254 and ['172.23.104.254'] from ['172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
webhook ip: 172.17.0.2
Generating ci_cc.json
ips: ['172.23.97.74', '172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
Generating config: ci_di
REMOVING 172.23.104.254 and ['172.23.104.254'] from ['172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86',
'172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
WARNING: Skipping config ci_di since 2 sg_accels required, but only 0 provided
WARNING: Removing the partially generated config ci_di
ips: ['172.23.97.74', '172.23.96.14', '172.23.97.241', '172.23.96.48', '172.23.96.122', '172.23.120.73', '172.23.121.77', '172.23.123.25', '172.23.123.26', '172.23.120.77', '172.23.120.86', '172.23.120.74', '172.23.120.75', '172.23.120.81', '172.23.120.58', '172.23.123.33', '172.23.123.31', '172.23.123.32', '172.23.96.254', '172.23.97.112', '172.23.96.243', '172.23.97.105', '172.23.97.110', '172.23.97.148', '172.23.97.149', '172.23.97.150', '172.23.106.134', '172.23.104.254']
Generating config: base_lb_cc
WARNING: Skipping config base_lb_cc since 3 sync_gateways required, but only 1 provided
WARNING: Removing the partially generated config 1sg_2ac_3cbs
+ python libraries/utilities/install_keys.py '--public-key-path=~/.ssh/id_rsa.pub' --ssh-user=root --ssh-password=couchbase
Traceback (most recent call last):
  File "libraries/utilities/install_keys.py", line 6, in <module>
    import paramiko
ModuleNotFoundError: No module named 'paramiko'
Test cycle started: 1
→ parsed tests/templates/rebalance.yml
→ parsed tests/templates/vegeta.yml
→ parsed tests/templates/kv.yml
→ parsed tests/templates/fts.yml
→ parsed tests/templates/n1ql.yml
→ parsed tests/templates/multinode_failure.yml
→ parsed tests/templates/collections.yml
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T08:31:30-08:00, sequoiatools/couchbase-cli:7.6:52c960] setting-compaction -c 172.23.97.74 -u Administrator -p password --metadata-purge-interval .04 --compaction-db-percentage 30 --compaction-view-percentage 30
[pull] appropriate/curl
[2023-11-14T08:31:51-08:00, appropriate/curl:99ea26] -X
POST -u Administrator:password -H Content-Type:application/json http://172.23.123.31:9102/settings -d {"indexer.settings.enable_shard_affinity":true} [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:31:56-08:00, sequoiatools/couchbase-cli:7.6:5c62be] setting-autofailover -c 172.23.97.74:8091 -u Administrator -p password --enable-auto-failover=0 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:32:03-08:00, sequoiatools/couchbase-cli:7.6:2df561] node-to-node-encryption -c 172.23.97.74:8091 -u Administrator -p password --enable [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:32:38-08:00, sequoiatools/couchbase-cli:7.6:6ccee0] setting-security -c 172.23.97.74:8091 -u Administrator -p password --set --cluster-encryption-level control [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:32:45-08:00, sequoiatools/couchbase-cli:7.6:0e25de] ip-family -c 172.23.97.74:8091 -u Administrator -p password --set --ipv4only [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:33:20-08:00, sequoiatools/couchbase-cli:7.6:0b61b2] setting-autofailover -c 172.23.97.74:8091 -u Administrator -p password --enable-auto-failover=1 --auto-failover-timeout=1 --max-failovers=3 [pull] appropriate/curl [2023-11-14T08:33:29-08:00, appropriate/curl:2e9b64] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.123.31:9102/settings -d {"indexer.plasma.backIndex.enablePageBloomFilter":true} [pull] appropriate/curl [2023-11-14T08:33:36-08:00, appropriate/curl:9fa1de] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.123.31:9102/settings -d {"indexer.build.enableOSO":true} [pull] appropriate/curl [2023-11-14T08:33:42-08:00, appropriate/curl:002960] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.123.31:9102/settings -d {"indexer.settings.rebalance.redistribute_indexes":true} [pull] appropriate/curl [2023-11-14T08:33:48-08:00, appropriate/curl:7a870c] -X PUT -u Administrator:password -H 
Content-Type:application/json http://172.23.97.110:8094/api/managerOptions -d {"bleveMaxResultWindow":"100000"} [pull] appropriate/curl [2023-11-14T08:33:54-08:00, appropriate/curl:6b3d39] -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.97.110:8094/api/managerOptions -d {"bleveMaxClauseCount":"2500"} [pull] appropriate/curl [2023-11-14T08:33:58-08:00, appropriate/curl:608cd2] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.97.74:8091/_p/backup/api/v1/plan/my_plan -d {"name":"my_plan","description":"This plan does backups every 2 days","services":["data","gsi","views","ft","eventing","cbas","query"],"default":false,"tasks":[{"name":"backup-1","task_type":"BACKUP","schedule":{"job_type":"BACKUP","frequency":24,"period":"HOURS","start_now":false},"full_backup":true},{"name":"merge","task_type":"MERGE","schedule":{"job_type":"MERGE","frequency":2,"period":"DAYS","time":"12:00","start_now":false},"merge_options":{"offset_start":0,"offset_end":2},"full_backup":true}]} [pull] appropriate/curl [2023-11-14T08:34:06-08:00, appropriate/curl:65f0b8] -u Administrator:password -X POST http://172.23.97.74:8091/_p/backup/api/v1/cluster/self/repository/active/my_repo -H Content-Type:application/json -d {"plan": "my_plan", "archive": "/data/archive", "bucket_name":"bucket5"} [pull] sequoiatools/collections:1.0 [pull] sequoiatools/collections:1.0 [2023-11-14T08:34:17-08:00, sequoiatools/collections:1.0:086ae0] -i 172.23.97.74:8091 -b bucket4 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:34:25-08:00, sequoiatools/collections:1.0:4e6689] -i 172.23.97.74:8091 -b bucket5 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:34:33-08:00, sequoiatools/collections:1.0:19230e] -i 
172.23.97.74:8091 -b bucket6 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:34:41-08:00, sequoiatools/collections:1.0:0770f4] -i 172.23.97.74:8091 -b bucket7 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:34:48-08:00, sequoiatools/collections:1.0:ea641c] -i 172.23.97.74:8091 -b bucket8 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=1 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:34:56-08:00, sequoiatools/collections:1.0:b3d643] -i 172.23.97.74:8091 -b bucket9 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=1 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:35:04-08:00, sequoiatools/collections:1.0:87ca3c] -i 172.23.106.136:8091 -b bucket4 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:35:12-08:00, sequoiatools/collections:1.0:55c804] -i 172.23.106.136:8091 -b bucket8 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T08:35:20-08:00, sequoiatools/collections:1.0:314c0d] -i 172.23.106.136:8091 -b bucket9 -o create_multi_scope_collection -s scope_ -c coll_ --scope_count=2 --collection_count=10 --collection_distribution=uniform [pull] sequoiatools/cmd [pull] sequoiatools/cmd [2023-11-14T08:35:30-08:00, sequoiatools/cmd:7dad56] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:40:37-08:00, sequoiatools/couchbase-cli:7.6:87165a] xdcr-setup -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-hostname 
172.23.106.136 --xdcr-username Administrator --xdcr-password password → Error occurred on container - sequoiatools/couchbase-cli:7.6:[xdcr-setup -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-hostname 172.23.106.136 --xdcr-username Administrator --xdcr-password password] docker logs 87165a docker start 87165a ERROR: {'_': 'Duplicate cluster names are not allowed'} [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:40:45-08:00, sequoiatools/couchbase-cli:7.6:899dd0] xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket default --xdcr-to-bucket remote --enable-compression 1 → Error occurred on container - sequoiatools/couchbase-cli:7.6:[xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket default --xdcr-to-bucket remote --enable-compression 1] docker logs 899dd0 docker start 899dd0 ERROR: Replication to the same remote cluster and bucket already exists [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:40:53-08:00, sequoiatools/couchbase-cli:7.6:ad2285] xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket bucket4 --xdcr-to-bucket bucket4 --enable-compression 1 → Error occurred on container - sequoiatools/couchbase-cli:7.6:[xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket bucket4 --xdcr-to-bucket bucket4 --enable-compression 1] docker logs ad2285 docker start ad2285 ERROR: Replication to the same remote cluster and bucket already exists [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:41:01-08:00, sequoiatools/couchbase-cli:7.6:032dab] xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket bucket8 --xdcr-to-bucket bucket8 --enable-compression 1 → Error occurred on container - sequoiatools/couchbase-cli:7.6:[xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket bucket8 --xdcr-to-bucket bucket8 --enable-compression 1]
docker logs 032dab docker start 032dab ERROR: Replication to the same remote cluster and bucket already exists [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:41:09-08:00, sequoiatools/couchbase-cli:7.6:253755] xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket bucket9 --xdcr-to-bucket bucket9 --enable-compression 1 → Error occurred on container - sequoiatools/couchbase-cli:7.6:[xdcr-replicate -c 172.23.97.74:8091 --create --xdcr-cluster-name remote --xdcr-from-bucket bucket9 --xdcr-to-bucket bucket9 --enable-compression 1] docker logs 253755 docker start 253755 ERROR: Replication to the same remote cluster and bucket already exists [pull] sequoiatools/catapult_dgm [pull] sequoiatools/catapult_dgm [2023-11-14T08:41:18-08:00, sequoiatools/catapult_dgm:fc0190] -i 172.23.97.74 -r 80 -u Administrator -p password -b bucket4 -n 2000 -pc 100 -dt Hotel -ds 1000 -ac True [pull] sequoiatools/catapult_dgm [2023-11-14T08:41:24-08:00, sequoiatools/catapult_dgm:ef9c18] -i 172.23.97.74 -r 80 -u Administrator -p password -b bucket5 -n 2000 -pc 100 -dt Hotel -ds 1000 -ac True [pull] sequoiatools/catapult_dgm [2023-11-14T08:41:28-08:00, sequoiatools/catapult_dgm:af6fb6] -i 172.23.97.74 -r 80 -u Administrator -p password -b bucket6 -n 2000 -pc 100 -dt Hotel -ds 1000 -ac True [pull] sequoiatools/catapult_dgm [2023-11-14T08:41:33-08:00, sequoiatools/catapult_dgm:2b744d] -i 172.23.97.74 -r 80 -u Administrator -p password -b bucket7 -n 2000 -pc 100 -dt Hotel -ds 1000 -ac True [pull] sequoiatools/catapult_dgm [2023-11-14T08:41:38-08:00, sequoiatools/catapult_dgm:4dd0de] -i 172.23.97.74 -r 80 -u Administrator -p password -b bucket8 -n 2000 -pc 100 -dt Hotel -ds 1000 -ac True [pull] sequoiatools/catapult_dgm [2023-11-14T08:41:43-08:00, sequoiatools/catapult_dgm:62140f] -i 172.23.97.74 -r 80 -u Administrator -p password -b bucket9 -n 2000 -pc 100 -dt Hotel -ds 1000 -ac True [pull] sequoiatools/transactions [pull] sequoiatools/transactions
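The repeated "Duplicate cluster names are not allowed" and "Replication to the same remote cluster and bucket already exists" errors above come from re-running xdcr-setup/xdcr-replicate against a cluster that still holds the remote reference and replications from a previous run. A hedged sketch of how the step could be made idempotent by checking existing replications before creating (the helper and the data shape are assumptions for illustration, not Sequoia's actual code):

```python
# Hypothetical idempotency guard: skip the xdcr-replicate create call
# when an equivalent replication already exists, instead of letting
# couchbase-cli fail with "already exists".

def needs_replication(existing, from_bucket, to_bucket, cluster_name="remote"):
    """existing: list of dicts describing current replications.
    Return True only if no replication already covers this
    cluster/bucket pair."""
    for repl in existing:
        if (repl.get("cluster") == cluster_name
                and repl.get("from_bucket") == from_bucket
                and repl.get("to_bucket") == to_bucket):
            return False
    return True

existing = [{"cluster": "remote", "from_bucket": "default", "to_bucket": "remote"}]
# A re-run would skip the duplicate create instead of erroring out:
print(needs_replication(existing, "default", "remote"))   # → False
print(needs_replication(existing, "bucket4", "bucket4"))  # → True
```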
[2023-11-14T08:41:50-08:00, sequoiatools/transactions:57fea7] 172.23.97.74 default 1000 [pull] sequoiatools/collections:1.0 [2023-11-14T08:41:55-08:00, sequoiatools/collections:1.0:2383df] -i 172.23.97.74:8091 -b bucket8 -o crud_mode --crud_interval=120 --max_scopes=10 --max_collections=100 [pull] sequoiatools/collections:1.0 [2023-11-14T08:42:00-08:00, sequoiatools/collections:1.0:1f0caf] -i 172.23.97.74:8091 -b bucket9 -o crud_mode --crud_interval=120 --max_scopes=10 --max_collections=100 [pull] sequoiatools/pillowfight:7.0 [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:07-08:00, sequoiatools/pillowfight:7.0:87d3d1] -U couchbase://172.23.97.74/default?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:12-08:00, sequoiatools/pillowfight:7.0:5626a3] -U couchbase://172.23.97.74/WAREHOUSE?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:17-08:00, sequoiatools/pillowfight:7.0:7f787a] -U couchbase://172.23.97.74/NEW_ORDER?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:22-08:00, sequoiatools/pillowfight:7.0:5df5a5] -U couchbase://172.23.97.74/ITEM?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:27-08:00, sequoiatools/pillowfight:7.0:ec0e12] -U couchbase://172.23.97.74/bucket4?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:32-08:00, sequoiatools/pillowfight:7.0:970fc5] -U couchbase://172.23.97.74/bucket5?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability 
majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:37-08:00, sequoiatools/pillowfight:7.0:46b539] -U couchbase://172.23.97.74/bucket6?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:42-08:00, sequoiatools/pillowfight:7.0:23ab69] -U couchbase://172.23.97.74/bucket7?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:47-08:00, sequoiatools/pillowfight:7.0:1012e0] -U couchbase://172.23.97.74/bucket8?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T08:42:52-08:00, sequoiatools/pillowfight:7.0:c2bb06] -U couchbase://172.23.97.74/bucket9?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/cmd [2023-11-14T08:42:57-08:00, sequoiatools/cmd:27912d] 600 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T08:53:28-08:00, sequoiatools/couchbase-cli:7.6:68aeab] rebalance -c 172.23.97.74:8091 --server-remove 172.23.96.14:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T08:59:29-08:00, sequoiatools/cmd:a200d5] 60 [pull] sequoiatools/cmd [2023-11-14T09:00:37-08:00, sequoiatools/cmd:092edc] 600 → parsed tests/eventing/CC/test_eventing_rebalance_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/collections:1.0 [2023-11-14T09:10:46-08:00, sequoiatools/collections:1.0:1b0628] -i 172.23.97.74:8091 -b default -o create_multi_scope_collection -s event_ -c coll --scope_count=1 
--collection_count=4 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T09:10:54-08:00, sequoiatools/collections:1.0:900a2a] -i 172.23.97.74:8091 -b WAREHOUSE -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T09:11:02-08:00, sequoiatools/collections:1.0:2ee2b2] -i 172.23.97.74:8091 -b NEW_ORDER -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T09:11:10-08:00, sequoiatools/collections:1.0:0e3945] -i 172.23.97.74:8091 -b ITEM -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/eventing:7.0 [pull] sequoiatools/eventing:7.0 [2023-11-14T09:11:33-08:00, sequoiatools/eventing:7.0:60d857] eventing_helper.py -i 172.23.120.81 -u Administrator -p password -s default.event_0.coll0 -m ITEM.event_0.coll0 -d dst_bucket.NEW_ORDER.event_0.coll0.rw -t timers -o create --name timers [pull] sequoiatools/eventing:7.0 [2023-11-14T09:11:41-08:00, sequoiatools/eventing:7.0:6e1e47] eventing_helper.py -i 172.23.120.81 -u Administrator -p password -s default.event_0.coll0 -m ITEM.event_0.coll1 -d dst_bucket.NEW_ORDER.event_0.coll1.rw -t n1ql -o create --name n1ql [pull] sequoiatools/eventing:7.0 [2023-11-14T09:11:49-08:00, sequoiatools/eventing:7.0:729b6c] eventing_helper.py -i 172.23.120.81 -u Administrator -p password -s WAREHOUSE.event_0.coll0 -m ITEM.event_0.coll2 -d dst_bucket.WAREHOUSE.event_0.coll0.rw -t sbm -o create --name sbm [pull] sequoiatools/eventing:7.0 [2023-11-14T09:11:56-08:00, sequoiatools/eventing:7.0:ea6cc8] eventing_helper.py -i 172.23.120.81 -u Administrator -p password -s WAREHOUSE.event_0.coll0 -m ITEM.event_0.coll3 -d dst_bucket.NEW_ORDER.event_0.coll2.rw -t curl -o create --name curl [pull] 
sequoiatools/eventing:7.0 [2023-11-14T09:12:04-08:00, sequoiatools/eventing:7.0:e70421] eventing_helper.py -i 172.23.120.81 -u Administrator -p password -o deploy → Error occurred on container - sequoiatools/eventing:7.0:[eventing_helper.py -i 172.23.120.81 -u Administrator -p password -o deploy] docker logs e70421 docker start e70421 {'host': '172.23.120.81', 'username': 'Administrator', 'password': 'password', 'source': '_default', 'metadata': None, 'bindings': None, 'type': None, 'name': None, 'number': 1, 'operation': 'deploy', 'wait': False, 'state': None, 'log_level': 'INFO', 'timeout': 1200, 'sleep': 60, 'sbm': False, 'capella': False, 'tls': False} b'{\n "functions": [\n "timers_0",\n "n1ql_0",\n "curl_0",\n "sbm_0"\n ]\n}' {'content-type': 'application/json', 'status': '200', 'date': 'Tue, 14 Nov 2023 17:12:04 GMT', 'content-length': '70', 'content-location': 'http://172.23.120.81:8096/api/v1/list/functions'} ['timers_0', 'n1ql_0', 'curl_0', 'sbm_0'] deploying : timers_0 b'{\n "name": "ERR_APP_ALREADY_DEPLOYED",\n "code": 20,\n "description": "Invalid operation. Function: timers_0 already in deployed state."\n}' {'content-type': 'application/json', 'status': '422', 'date': 'Tue, 14 Nov 2023 17:12:04 GMT', 'content-length': '136'}
Traceback (most recent call last):
  File "eventing_helper.py", line 413, in <module>
    EventingHelper().run()
  File "eventing_helper.py", line 84, in run
    self.deploy_handlers(options)
  File "eventing_helper.py", line 116, in deploy_handlers
    self.perform_lifecycle_operation(handler,"deploy")
  File "eventing_helper.py", line 225, in perform_lifecycle_operation
    raise e
  File "eventing_helper.py", line 223, in perform_lifecycle_operation
    raise Exception(content)
Exception: b'{\n "name": "ERR_APP_ALREADY_DEPLOYED",\n "code": 20,\n "description": "Invalid operation.
Function: timers_0 already in deployed state."\n}' [pull] sequoiatools/eventing:7.0 [2023-11-14T09:12:09-08:00, sequoiatools/eventing:7.0:6386df] eventing_helper.py -i 172.23.120.81 -u Administrator -p password -o wait_for_state --state deployed ########## Cluster config ################## ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### ###### eventing : 2 ===== > [172.23.120.58:8091 172.23.120.81:8091] ########### ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.48:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### Test cycle: 1 ended after 111 seconds → parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/analytics.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/analyticsmanager:1.0 [pull] sequoiatools/analyticsmanager:1.0 [2023-11-14T09:12:42-08:00, sequoiatools/analyticsmanager:1.0:737d67] -i 172.23.120.74 -b bucket4,bucket5,bucket6,bucket7 -o create_cbas_infra --dv_cnt 4 --ds_cnt 10 --idx_cnt 4 --data_src catapult --syn_cnt 10 -w false --ingestion_timeout 3600 --ds_without_where 2 --api_timeout 3600 [pull] sequoiatools/analyticsmanager:1.0 [2023-11-14T09:13:17-08:00, sequoiatools/analyticsmanager:1.0:d863f2] -i 172.23.120.74 -b default,WAREHOUSE -o create_cbas_infra --exc_coll _default --dv_cnt 4 --ds_cnt 10 
--idx_cnt 4 --data_src gideon --syn_cnt 10 -w false --ingestion_timeout 3600 --ds_without_where 2 --api_timeout 3600 [pull] sequoiatools/cmd [2023-11-14T09:14:01-08:00, sequoiatools/cmd:ec2090] 60 ########## Cluster config ################## ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.48:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### ###### eventing : 2 ===== > [172.23.120.58:8091 172.23.120.81:8091] ########### Test cycle: 1 ended after 150 seconds [pull] sequoiatools/indexmanager [pull] sequoiatools/indexmanager [2023-11-14T09:15:11-08:00, sequoiatools/indexmanager:20c1de] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket4 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T09:15:41-08:00, sequoiatools/indexmanager:7f3e2a] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T09:16:13-08:00, sequoiatools/indexmanager:b94f56] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T09:16:51-08:00, sequoiatools/indexmanager:6d497e] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T09:17:20-08:00, sequoiatools/indexmanager:27be7a] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -i 1 -a create_index [pull] 
sequoiatools/indexmanager [2023-11-14T09:18:49-08:00, sequoiatools/indexmanager:c29686] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -i 1 -a create_index
[pull] sequoiatools/indexmanager [2023-11-14T09:19:38-08:00, sequoiatools/indexmanager:81ffbf] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -i 1 -a create_index
[pull] sequoiatools/indexmanager [2023-11-14T09:20:55-08:00, sequoiatools/indexmanager:875825] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a build_deferred_index -m 2
[pull] sequoiatools/indexmanager [2023-11-14T09:22:01-08:00, sequoiatools/indexmanager:644ec6] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -a build_deferred_index -m 2
[pull] sequoiatools/indexmanager [2023-11-14T09:22:51-08:00, sequoiatools/indexmanager:4cd47d] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a build_deferred_index -m 2
[pull] sequoiatools/wait_for_idx_build_complete
[pull] sequoiatools/wait_for_idx_build_complete [2023-11-14T09:23:45-08:00, sequoiatools/wait_for_idx_build_complete:e42b32] 172.23.123.31 Administrator password
[pull] sequoiatools/ftsindexmanager
[pull] sequoiatools/ftsindexmanager [2023-11-14T09:24:55-08:00, sequoiatools/ftsindexmanager:ed81ba] -n 172.23.97.110 -o 8091 -u Administrator -p password -b bucket4 -m 1:1:2 -s 1 -a create_index_from_map
[pull] sequoiatools/cmd [2023-11-14T09:25:04-08:00, sequoiatools/cmd:22e65d] 300
[pull] sequoiatools/ftsindexmanager [2023-11-14T09:30:16-08:00, sequoiatools/ftsindexmanager:48e606] -n 172.23.97.110 -o 8091 -u Administrator -p password -b bucket5 -m 1:0:5 -s 1 -a create_index_from_map
[pull] sequoiatools/cmd [2023-11-14T09:30:24-08:00, sequoiatools/cmd:b862f3] 300
[pull] sequoiatools/ftsindexmanager [2023-11-14T09:35:37-08:00, sequoiatools/ftsindexmanager:75f42c] -n 172.23.97.110 -o 8091 -u Administrator -p password -b bucket6 -m 1:1:1,1:1:2 -s 1 -a create_index_from_map
[pull] sequoiatools/cmd [2023-11-14T09:35:45-08:00, sequoiatools/cmd:305b81] 300
[pull] sequoiatools/ftsindexmanager [2023-11-14T09:40:57-08:00, sequoiatools/ftsindexmanager:a83db9] -n 172.23.97.110 -o 8091 -u Administrator -p password -b bucket7 -m 1:0:2 -s 1 -a create_index_from_map
[pull] sequoiatools/sgw
[pull] sequoiatools/sgw [2023-11-14T09:41:07-08:00, sequoiatools/sgw:371f8b] CBS_HOSTS=172.23.97.74 SGW_HOSTS=172.23.104.254 SSH_USER=root SSH_PWD=couchbase
[pull] sequoiatools/gideon2
[pull] sequoiatools/gideon2 [2023-11-14T09:41:14-08:00, sequoiatools/gideon2:9bd792] kv --ops 150 --create 80 --delete 20 --get 82 --sizes 64 96 --expire 100 --ttl 3600 --hosts 172.23.97.74 --bucket default --scope event_0 --collection coll0
[pull] sequoiatools/gideon2 [2023-11-14T09:41:20-08:00, sequoiatools/gideon2:d1ed83] kv --ops 150 --create 80 --delete 20 --get 82 --sizes 64 96 --expire 100 --ttl 3600 --hosts 172.23.97.74 --bucket WAREHOUSE --scope event_0 --collection coll0
[pull] sequoiatools/catapult
[pull] sequoiatools/catapult [2023-11-14T09:41:26-08:00, sequoiatools/catapult:dc37ac] -i 172.23.97.74 -u Administrator -p password -b bucket4 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1
[pull] sequoiatools/catapult [2023-11-14T09:41:31-08:00, sequoiatools/catapult:9babc3] -i 172.23.97.74 -u Administrator -p password -b bucket5 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1
[pull] sequoiatools/catapult [2023-11-14T09:41:36-08:00, sequoiatools/catapult:c954ed] -i 172.23.97.74 -u Administrator -p password -b bucket6 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1
[pull] sequoiatools/catapult [2023-11-14T09:41:42-08:00, sequoiatools/catapult:feb91b] -i 172.23.97.74 -u Administrator -p password -b bucket7 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1
[pull] sequoiatools/queryapp
[pull] sequoiatools/queryapp [2023-11-14T09:41:50-08:00, sequoiatools/queryapp:f1d4f6] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket4 --querycount 1 --threads 1 --n1ql True --query_timeout=600 --scan_consistency NOT_BOUNDED --bucket_names [bucket4,bucket5,bucket6,bucket7] --collections_mode --dataset hotel
[pull] sequoiatools/queryapp [2023-11-14T09:41:55-08:00, sequoiatools/queryapp:a52431] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket5 --querycount 3 --threads 3 --n1ql True --query_timeout=600 --scan_consistency REQUEST_PLUS --bucket_names [bucket4,bucket5,bucket6,bucket7] --collections_mode --dataset hotel
[pull] sequoiatools/queryapp [2023-11-14T09:42:00-08:00, sequoiatools/queryapp:d1216d] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket6 --querycount 3 --threads 3 --n1ql True --query_timeout=600 --scan_consistency REQUEST_PLUS --bucket_names [bucket4,bucket5,bucket6,bucket7] --collections_mode --dataset hotel
[pull] sequoiatools/queryapp [2023-11-14T09:42:05-08:00, sequoiatools/queryapp:a4a752] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket7 --querycount 3 --threads 3 --n1ql True --query_timeout=600 --scan_consistency REQUEST_PLUS --bucket_names [bucket4,bucket5,bucket6,bucket7] --txns True --dataset hotel
[pull] sequoiatools/ftsindexmanager [2023-11-14T09:42:10-08:00, sequoiatools/ftsindexmanager:005f0d] -n 172.23.97.110 -o 8091 -u Administrator -p password -b bucket4 --print_interval 600 -a run_queries -t 0 -nq 1
[pull] sequoiatools/ftsindexmanager [2023-11-14T09:42:15-08:00, sequoiatools/ftsindexmanager:5da51c] -n 172.23.97.110 -o 8091 -u Administrator -p password -b bucket5 --print_interval 600 -a run_flex_queries -t 0 -nq 1
[pull] sequoiatools/cmd [2023-11-14T09:42:20-08:00, sequoiatools/cmd:db06a6] 600
→ Error occurred on container - sequoiatools/sgw:[CBS_HOSTS=172.23.97.74 SGW_HOSTS=172.23.104.254 SSH_USER=root SSH_PWD=couchbase]
docker logs 371f8b
docker start 371f8b
MOBILE_TESTKIT_BRANCH = sequoia/sgw-component-testing
CBS_HOSTS = 172.23.97.74
COUCHBASE_SERVER_VERSION = 7.0.0-4291
SGW_HOSTS = 172.23.104.254
SYNC_GATEWAY_VERSION = 2.8.0-374
COLLECT_LOGS = false
SERVER_SEED_DOCS = 100000
MAX_DOCS=1200
NUM_USERS = 12
CREATE_BATCH_SIZE = 100
CREATE_DELAY = 0.1
UPDATE_BATCH_SIZE = 3
UPDATE_DOCS_PERCENTAGE = 0.1
UPDATE_DELAY = 1
CHANGES_DELAY = 10
CHANGES_LIMIT = 200
SSH_USER = root
SSH_PWD = couchbase
UP_TIME = 86400
Switched to a new branch 'sequoia/sgw-component-testing'
Branch sequoia/sgw-component-testing set up to track remote branch sequoia/sgw-component-testing from origin.
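When the sgw container fails above, the driver's reaction visible in the log is to inspect it (`docker logs 371f8b`) and then retry it (`docker start 371f8b`). A minimal sketch of that pattern is below; the `recovery_commands` helper and its regex are hypothetical illustrations, not sequoia's actual implementation.

```python
import re
from typing import List, Tuple

def recovery_commands(error_line: str, container_id: str) -> Tuple[str, List[str]]:
    """Extract the failed image name from a sequoia-style error line and
    return the docker commands used to inspect and restart the container."""
    # Matches lines like:
    #   Error occurred on container - sequoiatools/sgw:[CBS_HOSTS=...]
    m = re.search(r"container - ([\w/]+):", error_line)
    image = m.group(1) if m else "<unknown>"
    commands = [
        f"docker logs {container_id}",   # inspect why the container failed
        f"docker start {container_id}",  # retry the same container instance
    ]
    return image, commands

image, cmds = recovery_commands(
    "Error occurred on container - sequoiatools/sgw:[CBS_HOSTS=172.23.97.74]",
    "371f8b",
)
```

Here `371f8b` is the short container id that sequoia prints in brackets next to the image name when it launches the container.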
fatal: unable to connect to github.com: github.com[0: 192.30.255.113]: errno=Connection timed out
fatal: unable to connect to github.com: github.com[0: 192.30.255.113]: errno=Connection timed out
HEAD is now at c46c98c add stop and start sync gateway
Using Python3 version: 3.6.8
virtualenv 20.2.2 from /usr/local/lib/python3.6/site-packages/virtualenv/__init__.py
created virtual environment CPython3.6.8.final.0-64 in 509ms
  creator CPython3Posix(dest=/mobile-testkit/venv, clear=False, no_vcs_ignore=False, global=False)
  seeder FromAppData(download=False, pip=bundle, setuptools=bundle, wheel=bundle, via=copy, app_data_dir=/root/.local/share/virtualenv)
    added seed packages: pip==20.3.1, setuptools==51.0.0, wheel==0.36.1
  activators BashActivator,CShellActivator,FishActivator,PowerShellActivator,PythonActivator,XonshActivator
Collecting ansible==2.7
  Downloading ansible-2.7.0.tar.gz (11.8 MB)
Requirement already satisfied: setuptools in ./venv/lib/python3.6/site-packages (from ansible==2.7->-r requirements.txt (line 1)) (51.0.0)
Collecting appdirs==1.4.3
  Downloading appdirs-1.4.3-py2.py3-none-any.whl (12 kB)
Collecting asn1crypto==0.22.0
  Downloading asn1crypto-0.22.0-py2.py3-none-any.whl (97 kB)
Collecting astroid==1.4.8
  Downloading astroid-1.4.8-py2.py3-none-any.whl (213 kB)
Collecting awscli==1.10.14
  Downloading awscli-1.10.14-py2.py3-none-any.whl (917 kB)
Collecting backports.functools-lru-cache==1.3
  Downloading backports.functools_lru_cache-1.3-py2.py3-none-any.whl (6.2 kB)
Collecting backports.ssl-match-hostname==3.5.0.1
  Downloading backports.ssl_match_hostname-3.5.0.1.tar.gz (5.6 kB)
Collecting boto3==1.3
  Downloading boto3-1.3.0-py2.py3-none-any.whl (112 kB)
Collecting botocore==1.4.5
  Downloading botocore-1.4.5-py2.py3-none-any.whl (2.2 MB)
Collecting cffi==1.10.0
  Downloading cffi-1.10.0-cp36-cp36m-manylinux1_x86_64.whl (406 kB)
Collecting colorama==0.3.3
  Downloading colorama-0.3.3.tar.gz (22 kB)
Collecting configparser==3.5.0
  Downloading configparser-3.5.0.tar.gz (39 kB)
Collecting couchbase==2.2.6
  Downloading couchbase-2.2.6.tar.gz (513 kB)
Collecting cryptography==2.8
  Downloading cryptography-2.8-cp34-abi3-manylinux2010_x86_64.whl (2.3 MB)
Collecting docker==2.2.1
  Downloading docker-2.2.1-py2.py3-none-any.whl (107 kB)
Collecting docker-pycreds==0.2.1
  Downloading docker_pycreds-0.2.1-py2.py3-none-any.whl (4.5 kB)
Collecting docutils==0.12
  Downloading docutils-0.12-py3-none-any.whl (508 kB)
Collecting ecdsa==0.13
  Downloading ecdsa-0.13-py2.py3-none-any.whl (86 kB)
Collecting enum34==1.1.6
  Downloading enum34-1.1.6-py3-none-any.whl (12 kB)
Collecting flake8==3.3.0
  Downloading flake8-3.3.0-py2.py3-none-any.whl (66 kB)
Collecting futures==3.0.5
  Downloading futures-3.0.5.tar.gz (25 kB)
Collecting idna==2.5
  Downloading idna-2.5-py2.py3-none-any.whl (55 kB)
Collecting ipaddress==1.0.18
  Downloading ipaddress-1.0.18.tar.gz (32 kB)
Collecting isort==4.2.5
  Downloading isort-4.2.5-py2.py3-none-any.whl (40 kB)
Collecting Jinja2==2.9.5
  Downloading Jinja2-2.9.5-py2.py3-none-any.whl (340 kB)
Collecting jmespath==0.9.0
  Downloading jmespath-0.9.0-py2.py3-none-any.whl (20 kB)
Collecting lazy-object-proxy==1.2.2
  Downloading lazy-object-proxy-1.2.2.tar.gz (31 kB)
Collecting MarkupSafe==1.1.1
  Downloading MarkupSafe-1.1.1-cp36-cp36m-manylinux2010_x86_64.whl (32 kB)
Collecting mccabe==0.6.1
  Downloading mccabe-0.6.1-py2.py3-none-any.whl (8.6 kB)
Collecting netifaces==0.10.4
  Downloading netifaces-0.10.4.tar.gz (22 kB)
Collecting numpy==1.17.4
  Downloading numpy-1.17.4-cp36-cp36m-manylinux1_x86_64.whl (20.0 MB)
Collecting olefile==0.44
  Downloading olefile-0.44.zip (74 kB)
Collecting packaging==16.8
  Downloading packaging-16.8-py2.py3-none-any.whl (23 kB)
Collecting paramiko==2.1.5
  Downloading paramiko-2.1.5-py2.py3-none-any.whl (185 kB)
Collecting Pillow==6.2.0
  Downloading Pillow-6.2.0-cp36-cp36m-manylinux1_x86_64.whl (2.1 MB)
Collecting py==1.5.1
  Downloading py-1.5.1-py2.py3-none-any.whl (88 kB)
Collecting pyasn1==0.2.3
  Downloading pyasn1-0.2.3-py2.py3-none-any.whl (53 kB)
Collecting pycodestyle==2.3.1
  Downloading pycodestyle-2.3.1-py2.py3-none-any.whl (45 kB)
Collecting pycparser==2.17
  Downloading pycparser-2.17.tar.gz (231 kB)
Collecting pycrypto==2.6.1
  Downloading pycrypto-2.6.1.tar.gz (446 kB)
Collecting pyflakes==1.5.0
  Downloading pyflakes-1.5.0-py2.py3-none-any.whl (225 kB)
Collecting PyJWT==1.4.0
  Downloading PyJWT-1.4.0-py2.py3-none-any.whl (23 kB)
Collecting pylint==1.6.4
  Downloading pylint-1.6.4-py2.py3-none-any.whl (569 kB)
Collecting pyparsing==2.2.0
  Downloading pyparsing-2.2.0-py2.py3-none-any.whl (56 kB)
Collecting pytest==4.6.9
  Downloading pytest-4.6.9-py2.py3-none-any.whl (231 kB)
Collecting pytest-html==1.10.0
  Downloading pytest_html-1.10.0-py2.py3-none-any.whl (14 kB)
Collecting pytest-rerunfailures==8.0
  Downloading pytest_rerunfailures-8.0-py2.py3-none-any.whl (11 kB)
Requirement already satisfied: setuptools in ./venv/lib/python3.6/site-packages (from ansible==2.7->-r requirements.txt (line 1)) (51.0.0)
Collecting pytest-timeout==1.0.0
  Downloading pytest_timeout-1.0.0-py2.py3-none-any.whl (11 kB)
Collecting python-dateutil==2.5.1
  Downloading python_dateutil-2.5.1-py2.py3-none-any.whl (200 kB)
Collecting python-ntlm3==1.0.2
  Downloading python_ntlm3-1.0.2-py2.py3-none-any.whl (17 kB)
Collecting python-vagrant==0.5.14
  Downloading python-vagrant-0.5.14.tar.gz (28 kB)
Collecting pywinrm==0.2.1
  Downloading pywinrm-0.2.1-py2.py3-none-any.whl (24 kB)
Collecting PyYAML==3.12
  Downloading PyYAML-3.12.zip (375 kB)
Collecting requests==2.19.0
  Downloading requests-2.19.0-py2.py3-none-any.whl (91 kB)
Collecting requests-ntlm==0.3.0
  Downloading requests_ntlm-0.3.0-py2.py3-none-any.whl (4.4 kB)
Collecting rsa==3.3
  Downloading rsa-3.3-py2.py3-none-any.whl (44 kB)
Collecting s3transfer==0.0.1
  Downloading s3transfer-0.0.1-py2.py3-none-any.whl (18 kB)
Collecting six==1.10.0
  Downloading six-1.10.0-py2.py3-none-any.whl (10 kB)
Collecting termcolor==1.1.0
  Downloading termcolor-1.1.0.tar.gz (3.9 kB)
Collecting troposphere==1.5.0
  Downloading troposphere-1.5.0.tar.gz (64 kB)
Collecting websocket-client==0.40.0
  Downloading websocket_client-0.40.0.tar.gz (196 kB)
Collecting wrapt==1.10.8
  Downloading wrapt-1.10.8.tar.gz (25 kB)
Collecting xmltodict==0.10.2
  Downloading xmltodict-0.10.2.tar.gz (24 kB)
Collecting atomicwrites>=1.0
  Downloading atomicwrites-1.4.1.tar.gz (14 kB)
Collecting attrs>=17.4.0
  Downloading attrs-22.2.0-py3-none-any.whl (60 kB)
Collecting certifi>=2017.4.17
  Downloading certifi-2023.7.22-py3-none-any.whl (158 kB)
Collecting chardet<3.1.0,>=3.0.2
  Downloading chardet-3.0.4-py2.py3-none-any.whl (133 kB)
Collecting importlib-metadata>=0.12
  Downloading importlib_metadata-4.8.3-py3-none-any.whl (17 kB)
Collecting more-itertools>=4.0.0
  Downloading more_itertools-8.14.0-py3-none-any.whl (52 kB)
Collecting pluggy<1.0,>=0.12
  Downloading pluggy-0.13.1-py2.py3-none-any.whl (18 kB)
Collecting typing-extensions>=3.6.4
  Downloading typing_extensions-4.1.1-py3-none-any.whl (26 kB)
Collecting urllib3<1.24,>=1.21.1
  Downloading urllib3-1.23-py2.py3-none-any.whl (133 kB)
Collecting wcwidth
  Downloading wcwidth-0.2.10-py2.py3-none-any.whl (105 kB)
Collecting zipp>=0.5
  Downloading zipp-3.6.0-py3-none-any.whl (5.3 kB)
Building wheels for collected packages: ansible, backports.ssl-match-hostname, colorama, configparser, couchbase, futures, ipaddress, lazy-object-proxy, netifaces, olefile, pycparser, pycrypto, python-vagrant, PyYAML, termcolor, troposphere, websocket-client, wrapt, xmltodict, atomicwrites
  Building wheel for ansible (setup.py): started
  Building wheel for ansible (setup.py): finished with status 'done'
  Created wheel for ansible: filename=ansible-2.7.0-py3-none-any.whl size=9397858 sha256=e15bb508dec279974c1897823536f35b8711b8fb8333b296c1472be61d096e91
  Stored in directory: /root/.cache/pip/wheels/47/ae/e2/4963118b9319c1852dda1225274879f0cdf9b6def3ad58af0c
  Building wheel for backports.ssl-match-hostname (setup.py): started
  Building wheel for backports.ssl-match-hostname (setup.py): finished with status 'done'
  Created wheel for backports.ssl-match-hostname: filename=backports.ssl_match_hostname-3.5.0.1-py3-none-any.whl size=5206 sha256=5eab3919d527a1bd7b74cfe6f13f0dc04b8e1a4f98f9ee29a0cb4155502ab022
  Stored in directory: /root/.cache/pip/wheels/05/b9/92/62f119d7e440645ddda5da202b010919bef868a3e2d1dae8f4
  Building wheel for colorama (setup.py): started
  Building wheel for colorama (setup.py): finished with status 'done'
  Created wheel for colorama: filename=colorama-0.3.3-py3-none-any.whl size=14318 sha256=1db13395e5b7be3ac65e40010fb88f96027b77b294533b0790bb4be499402659
  Stored in directory: /root/.cache/pip/wheels/6e/68/13/00e8c37ba760d796da9dd370cb6e0fb2607efcca89004d81b4
  Building wheel for configparser (setup.py): started
  Building wheel for configparser (setup.py): finished with status 'done'
  Created wheel for configparser: filename=configparser-3.5.0-py3-none-any.whl size=20937 sha256=0f75ff60df48fc79ae5c1d0af075731924804aa389fc46e1d99fb88a0ae0f2ba
  Stored in directory: /root/.cache/pip/wheels/5f/b6/c5/c6d8ccf999b401c86f31f417db9fe2c7a970e9232847c55a5a
  Building wheel for couchbase (setup.py): started
  Building wheel for couchbase (setup.py): finished with status 'done'
  Created wheel for couchbase: filename=couchbase-2.2.6-cp36-cp36m-linux_x86_64.whl size=446075 sha256=07d90734b05fc3969fd9bd36b827a7377592b1b0208c37a09c4d715da5afe1b3
  Stored in directory: /root/.cache/pip/wheels/aa/24/7a/d12ed4e5a7becfe1c007b818e6624ac2c42132d98f1d1e1399
  Building wheel for futures (setup.py): started
  Building wheel for futures (setup.py): finished with status 'done'
  Created wheel for futures: filename=futures-3.0.5-py3-none-any.whl size=14078 sha256=3ebd03783bc16d7c419de6d62f30d926e8850ae088b425928e2c6893e9edb01a
  Stored in directory: /root/.cache/pip/wheels/15/5e/22/8743ec9845b6656ec6764b97cbcc601404130482f7e87d622f
  Building wheel for ipaddress (setup.py): started
  Building wheel for ipaddress (setup.py): finished with status 'done'
  Created wheel for ipaddress: filename=ipaddress-1.0.18-py3-none-any.whl size=18105 sha256=d103598c865c2dbfcd92412b7f3e3c2cf0d2e23aca17b45ba9b3b30b6b8188d1
  Stored in directory: /root/.cache/pip/wheels/bf/27/e1/5a3aeaba1e0e6f06f93c3286068b0239fdf2f69d3a7f74ecf6
  Building wheel for lazy-object-proxy (setup.py): started
  Building wheel for lazy-object-proxy (setup.py): finished with status 'done'
  Created wheel for lazy-object-proxy: filename=lazy_object_proxy-1.2.2-cp36-cp36m-linux_x86_64.whl size=41252 sha256=68ca2e18ad38e6fc52e1d10cbff789a9cc82a4e65b68d17e46fdb24d9945434f
  Stored in directory: /root/.cache/pip/wheels/00/f8/96/9ca4b706a13a29f42b2513eb7c650144be17d8359b65c0b997
  Building wheel for netifaces (setup.py): started
  Building wheel for netifaces (setup.py): finished with status 'done'
  Created wheel for netifaces: filename=netifaces-0.10.4-cp36-cp36m-linux_x86_64.whl size=30256 sha256=678228015b78ae1b0d4992bc95ea273e9da4fe1bbd75887ea568ae3a8738311b
  Stored in directory: /root/.cache/pip/wheels/21/f9/d1/a61677c0331785649d38e61abfb6d491f845a57c0e4e707f68
  Building wheel for olefile (setup.py): started
  Building wheel for olefile (setup.py): finished with status 'done'
  Created wheel for olefile: filename=olefile-0.44-py3-none-any.whl size=47816 sha256=a929fcccc72f08a7a37ccbd45248d5ba134113db033ac3092e30bb8eb82aab1a
  Stored in directory: /root/.cache/pip/wheels/98/77/c6/3d974ba3cb5825fb376485fc5abd5c7a427b85b187a611fecc
  Building wheel for pycparser (setup.py): started
  Building wheel for pycparser (setup.py): finished with status 'done'
  Created wheel for pycparser: filename=pycparser-2.17-py2.py3-none-any.whl size=193653 sha256=6f47fdd383105ba143f0170ca6a655dd7716ae268d50870053ee1237a74116bf
  Stored in directory: /root/.cache/pip/wheels/4b/e0/b8/95f1c83ec2c52b749560f85f868cdfe290fc0fdd28601b6611
  Building wheel for pycrypto (setup.py): started
  Building wheel for pycrypto (setup.py): finished with status 'done'
  Created wheel for pycrypto: filename=pycrypto-2.6.1-cp36-cp36m-linux_x86_64.whl size=495537 sha256=3a0573a709f0b82a69cdd51a332bf0cafb4d495404a38124e5759776d8f2cc3e
  Stored in directory: /root/.cache/pip/wheels/41/83/18/ac2fb96b679ded686e75b4270e1df8ea9bd4d2fcbf01642332
  Building wheel for python-vagrant (setup.py): started
  Building wheel for python-vagrant (setup.py): finished with status 'done'
  Created wheel for python-vagrant: filename=python_vagrant-0.5.14-py3-none-any.whl size=17842 sha256=6b7ebb3de1868586922ae3a6950fdf3d429791d1476bcfac9f5121929d83d829
  Stored in directory: /root/.cache/pip/wheels/7b/65/b0/3fad90b734d974285a92a536d3d8fe4383df3813fd9120569d
  Building wheel for PyYAML (setup.py): started
  Building wheel for PyYAML (setup.py): finished with status 'done'
  Created wheel for PyYAML: filename=PyYAML-3.12-cp36-cp36m-linux_x86_64.whl size=43058 sha256=495c7188771b9b5c610fa35954483c4bbef7982e152b29dcdd1167ed121f1a24
  Stored in directory: /root/.cache/pip/wheels/58/13/a4/e672e27a1ba3320d443b791c3a1203c64f5cb17f2a69dc7438
  Building wheel for termcolor (setup.py): started
  Building wheel for termcolor (setup.py): finished with status 'done'
  Created wheel for termcolor: filename=termcolor-1.1.0-py3-none-any.whl size=4832 sha256=2e69260ef2e23a76f4f9e4399ddb2e7ce4807f91bd8bf740e6adcc4e5acf3b3d
  Stored in directory: /root/.cache/pip/wheels/93/2a/eb/e58dbcbc963549ee4f065ff80a59f274cc7210b6eab962acdc
  Building wheel for troposphere (setup.py): started
  Building wheel for troposphere (setup.py): finished with status 'done'
  Created wheel for troposphere: filename=troposphere-1.5.0-py3-none-any.whl size=51534 sha256=f12b0992094e49762d260e81b1f0877d338fe35f25ef8e1fb0481df7e018c6bb
  Stored in directory: /root/.cache/pip/wheels/45/64/5e/e1d77ffae49e6f6d29dc152078b614f1c6fffc7d1bad922608
  Building wheel for websocket-client (setup.py): started
  Building wheel for websocket-client (setup.py): finished with status 'done'
  Created wheel for websocket-client: filename=websocket_client-0.40.0-py2.py3-none-any.whl size=198285 sha256=e1b5994cf7a4430f04b16f9da5e930c5c703beb96eb27cf40980ba437cb8c683
  Stored in directory: /root/.cache/pip/wheels/55/24/0f/58118a41e34417be0c0793523515faacbd5055db89c673a94f
  Building wheel for wrapt (setup.py): started
  Building wheel for wrapt (setup.py): finished with status 'done'
  Created wheel for wrapt: filename=wrapt-1.10.8-cp36-cp36m-linux_x86_64.whl size=60237 sha256=76f53ecf777b6396c03e69d1008e6c47bb6e41d9bdc6bd92e3e01f72c67489cd
  Stored in directory: /root/.cache/pip/wheels/57/23/6e/4fad5a7f96577a12388d5403e2e171f794a1955170195d54b0
  Building wheel for xmltodict (setup.py): started
  Building wheel for xmltodict (setup.py): finished with status 'done'
  Created wheel for xmltodict: filename=xmltodict-0.10.2-py3-none-any.whl size=6516 sha256=c8998066434ba3b08fb2ef80e30b189b7ba5ad481dd6c6eba7568252e9f9711a
  Stored in directory: /root/.cache/pip/wheels/5d/fd/4c/595855e029d3f59186c3c5082199d5d0e291a382371ab0a223
  Building wheel for atomicwrites (setup.py): started
  Building wheel for atomicwrites (setup.py): finished with status 'done'
  Created wheel for atomicwrites: filename=atomicwrites-1.4.1-py2.py3-none-any.whl size=6944 sha256=ab5cc7913bdd12493279cb220fbed8c84b9d03f09d2c2f592c368e73c5ba62b3
  Stored in directory: /root/.cache/pip/wheels/a1/e7/28/46d397595a418eb7ce1a8e6bbdfcea9e73753249bc824cc9cb
Successfully built ansible backports.ssl-match-hostname colorama configparser couchbase futures ipaddress lazy-object-proxy netifaces olefile pycparser pycrypto python-vagrant PyYAML termcolor troposphere websocket-client wrapt xmltodict atomicwrites
Installing collected packages: zipp, typing-extensions, six, pycparser, urllib3, python-dateutil, pyparsing, jmespath, importlib-metadata, idna, docutils, chardet, cffi, certifi, wrapt, wcwidth, requests, python-ntlm3, pyasn1, py, pluggy, packaging, more-itertools, MarkupSafe, lazy-object-proxy, cryptography, botocore, attrs, atomicwrites, xmltodict, websocket-client, s3transfer, rsa, requests-ntlm, PyYAML, pytest, pyflakes, pycodestyle, paramiko, mccabe, Jinja2, isort, docker-pycreds, colorama, astroid, troposphere, termcolor, pywinrm, python-vagrant, pytest-timeout, pytest-rerunfailures, pytest-html, pylint, PyJWT, pycrypto, Pillow, olefile, numpy, netifaces, ipaddress, futures, flake8, enum34, ecdsa, docker, couchbase, configparser, boto3, backports.ssl-match-hostname, backports.functools-lru-cache, awscli, asn1crypto, appdirs, ansible
Successfully installed Jinja2-2.9.5 MarkupSafe-1.1.1 Pillow-6.2.0 PyJWT-1.4.0 PyYAML-3.12 ansible-2.7.0 appdirs-1.4.3 asn1crypto-0.22.0 astroid-1.4.8 atomicwrites-1.4.1 attrs-22.2.0 awscli-1.10.14 backports.functools-lru-cache-1.3 backports.ssl-match-hostname-3.5.0.1 boto3-1.3.0 botocore-1.4.5 certifi-2023.7.22 cffi-1.10.0 chardet-3.0.4 colorama-0.3.3 configparser-3.5.0 couchbase-2.2.6 cryptography-2.8 docker-2.2.1 docker-pycreds-0.2.1 docutils-0.12 ecdsa-0.13 enum34-1.1.6 flake8-3.3.0 futures-3.0.5 idna-2.5 importlib-metadata-4.8.3 ipaddress-1.0.18 isort-4.2.5 jmespath-0.9.0 lazy-object-proxy-1.2.2 mccabe-0.6.1 more-itertools-8.14.0 netifaces-0.10.4 numpy-1.17.4 olefile-0.44 packaging-16.8 paramiko-2.1.5 pluggy-0.13.1 py-1.5.1 pyasn1-0.2.3 pycodestyle-2.3.1 pycparser-2.17 pycrypto-2.6.1 pyflakes-1.5.0 pylint-1.6.4 pyparsing-2.2.0 pytest-4.6.9 pytest-html-1.10.0 pytest-rerunfailures-8.0 pytest-timeout-1.0.0 python-dateutil-2.5.1 python-ntlm3-1.0.2 python-vagrant-0.5.14 pywinrm-0.2.1 requests-2.19.0 requests-ntlm-0.3.0 rsa-3.3 s3transfer-0.0.1 six-1.10.0 termcolor-1.1.0 troposphere-1.5.0 typing-extensions-4.1.1 urllib3-1.23 wcwidth-0.2.10 websocket-client-0.40.0 wrapt-1.10.8 xmltodict-0.10.2 zipp-3.6.0
WARNING: You are using pip version 20.3.1; however, version 21.3.1 is available.
You should consider upgrading via the '/mobile-testkit/venv/bin/python -m pip install --upgrade pip' command.
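The environment above is built entirely from pins of the form `name==version` in mobile-testkit's requirements.txt, so the "Successfully installed" line can be checked mechanically against the pins. The sketch below, with a hypothetical `find_mismatches` helper, is one way to do that check; it is not part of the mobile-testkit tooling.

```python
from typing import Dict, List, Tuple

def parse_pin(line: str) -> Tuple[str, str]:
    """Split a requirements pin like 'ansible==2.7.0' into (name, version)."""
    name, _, version = line.strip().partition("==")
    return name, version

def find_mismatches(pins: List[str], installed: Dict[str, str]) -> List[str]:
    """Return names whose installed version differs from the pinned one
    (a missing package also counts as a mismatch)."""
    bad = []
    for line in pins:
        name, version = parse_pin(line)
        if installed.get(name) != version:
            bad.append(name)
    return bad

# Versions as reported on the 'Successfully installed' line above.
installed = {"ansible": "2.7.0", "PyYAML": "3.12", "requests": "2.19.0"}
# The second pin deliberately disagrees to show a detected mismatch.
mismatches = find_mismatches(["ansible==2.7.0", "requests==2.19.1"], installed)
```

In a live environment the `installed` mapping could be populated from `importlib.metadata.distributions()` instead of being written out by hand.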
+ set -e k+ python utilities/sequoia_env_prep.py --ssh-user=root --cbs-hosts=172.23.97.74 --sgw-hosts=172.23.104.254 + cat ./resources/pool.json + cat ansible.cfg {"ips": ["172.23.97.74", "172.23.104.254"], "ip_to_node_type": {"172.23.97.74": "couchbase_servers", "172.23.104.254": "sync_gateways"}}[defaults] remote_user = root host_key_checking = False <+ python libraries/utilities/generate_clusters_from_pool.py ]WARNING:root:WARNING: Skipping config base_di since 3 machines required, but only 2 provided [WARNING:root:WARNING: Skipping config ci_cc since 4 machines required, but only 2 provided [WARNING:root:WARNING: Skipping config ci_di since 6 machines required, but only 2 provided `WARNING:root:WARNING: Skipping config base_lb_cc since 5 machines required, but only 2 provided `WARNING:root:WARNING: Skipping config base_lb_di since 6 machines required, but only 2 provided ^WARNING:root:WARNING: Skipping config ci_lb_cc since 7 machines required, but only 2 provided _WARNING:root:WARNING: Skipping config ci_lb_di since 10 machines required, but only 2 provided aWARNING:root:WARNING: Skipping config 2each_lb_cc since 5 machines required, but only 2 provided aWARNING:root:WARNING: Skipping config 2each_lb_di since 7 machines required, but only 2 provided iWARNING:root:WARNING: Skipping config multiple_servers_cc since 4 machines required, but only 2 provided iWARNING:root:WARNING: Skipping config multiple_servers_di since 5 machines required, but only 2 provided kWARNING:root:WARNING: Skipping config multiple_sg_accels_di since 5 machines required, but only 2 provided oWARNING:root:WARNING: Skipping config multiple_sync_gateways_cc since 3 machines required, but only 2 provided oWARNING:root:WARNING: Skipping config multiple_sync_gateways_di since 4 machines required, but only 2 provided lWARNING:root:WARNING: Skipping config three_sync_gateways_cc since 4 machines required, but only 2 provided kWARNING:root:WARNING: Skipping config four_sync_gateways_cc since 5 
machines required, but only 2 provided fWARNING:root:WARNING: Skipping config load_balancer_cc since 4 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config load_balancer_2_cc since 8 machines required, but only 2 provided fWARNING:root:WARNING: Skipping config load_balancer_di since 5 machines required, but only 2 provided _WARNING:root:WARNING: Skipping config 2sgs since 2 sync_gateways required, but only 1 provided CWARNING:root:WARNING: Removing the partially generated config 2sgs cWARNING:root:WARNING: Skipping config 1sg_1cbs_1lgs since 3 machines required, but only 2 provided gWARNING:root:WARNING: Skipping config 1sg_1ac_1cbs_1lgs since 4 machines required, but only 2 provided gWARNING:root:WARNING: Skipping config 1sg_1ac_3cbs_1lgs since 6 machines required, but only 2 provided gWARNING:root:WARNING: Skipping config 1sg_2ac_3cbs_1lgs since 7 machines required, but only 2 provided cWARNING:root:WARNING: Skipping config 1sg_3cbs_1lgs since 5 machines required, but only 2 provided cWARNING:root:WARNING: Skipping config 2sg_1cbs_1lgs since 4 machines required, but only 2 provided cWARNING:root:WARNING: Skipping config 2sg_3cbs_2lgs since 7 machines required, but only 2 provided dWARNING:root:WARNING: Skipping config 2sg_6cbs_2lgs since 10 machines required, but only 2 provided gWARNING:root:WARNING: Skipping config 2sg_2ac_3cbs_1lgs since 8 machines required, but only 2 provided gWARNING:root:WARNING: Skipping config 2sg_2ac_3cbs_2lgs since 9 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 2sg_2ac_6cbs_2lgs since 12 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 2sg_4ac_3cbs_2lgs since 11 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 2sg_8ac_3cbs_2lgs since 15 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 2sg_2ac_6cbs_2lgs since 12 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 
2sg_8ac_6cbs_2lgs since 18 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 4sg_2ac_3cbs_4lgs since 13 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 4sg_2ac_6cbs_4lgs since 16 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 4sg_4ac_3cbs_4lgs since 15 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 4sg_4ac_6cbs_4lgs since 18 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 4sg_8ac_3cbs_4lgs since 19 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 4sg_8ac_6cbs_4lgs since 22 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 8sg_4ac_3cbs_8lgs since 23 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 8sg_4ac_6cbs_8lgs since 26 machines required, but only 2 provided iWARNING:root:WARNING: Skipping config 8sg_4ac_12cbs_8lgs since 32 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 8sg_8ac_3cbs_8lgs since 27 machines required, but only 2 provided hWARNING:root:WARNING: Skipping config 8sg_8ac_6cbs_8lgs since 30 machines required, but only 2 provided iWARNING:root:WARNING: Skipping config 8sg_12ac_3cbs_8lgs since 31 machines required, but only 2 provided jWARNING:root:WARNING: Skipping config 12sg_4ac_6cbs_12lgs since 34 machines required, but only 2 provided kWARNING:root:WARNING: Skipping config 12sg_4ac_12cbs_12lgs since 40 machines required, but only 2 provided jWARNING:root:WARNING: Skipping config 12sg_8ac_6cbs_12lgs since 38 machines required, but only 2 provided kWARNING:root:WARNING: Skipping config 12sg_8ac_12cbs_12lgs since 44 machines required, but only 2 provided jWARNING:root:WARNING: Skipping config 16sg_4ac_3cbs_16lgs since 39 machines required, but only 2 provided jWARNING:root:WARNING: Skipping config 16sg_4ac_6cbs_16lgs since 42 machines required, but only 2 provided kWARNING:root:WARNING: 
Skipping config 16sg_4ac_12cbs_16lgs since 48 machines required, but only 2 provided :Using the following machines to run functional tests ... #['172.23.97.74', '172.23.104.254'] I{'172.23.97.74': 'couchbase_servers', '172.23.104.254': 'sync_gateways'} =Generating 'resources/cluster_configs/'. Using docker: False (ips: ['172.23.97.74', '172.23.104.254']  Generating config: base_cc GREMOVING 172.23.104.254 and ['172.23.104.254'] from ['172.23.104.254'] webhook ip: 172.17.0.15 Generating base_cc.json (ips: ['172.23.97.74', '172.23.104.254'] PWARNING: Skipping config base_di since 3 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] NWARNING: Skipping config ci_cc since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] NWARNING: Skipping config ci_di since 6 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] SWARNING: Skipping config base_lb_cc since 5 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] SWARNING: Skipping config base_lb_di since 6 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] QWARNING: Skipping config ci_lb_cc since 7 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] RWARNING: Skipping config ci_lb_di since 10 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] TWARNING: Skipping config 2each_lb_cc since 5 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] TWARNING: Skipping config 2each_lb_di since 7 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] \WARNING: Skipping config multiple_servers_cc since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] \WARNING: Skipping config multiple_servers_di since 5 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'] ^WARNING: Skipping config multiple_sg_accels_di since 5 machines required, 
but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config multiple_sync_gateways_cc since 3 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config multiple_sync_gateways_di since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config three_sync_gateways_cc since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config four_sync_gateways_cc since 5 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config load_balancer_cc since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config load_balancer_2_cc since 8 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config load_balancer_di since 5 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
Generating config: 1sg
REMOVING 172.23.104.254 and ['172.23.104.254'] from ['172.23.97.74', '172.23.104.254']
webhook ip: 172.17.0.15
Generating 1sg.json (ips: ['172.23.97.74', '172.23.104.254'])
Generating config: 2sgs
WARNING: Skipping config 2sgs since 2 sync_gateways required, but only 1 provided
WARNING: Removing the partially generated config 2sgs (ips: ['172.23.97.74', '172.23.104.254'])
Generating config: 1cbs
webhook ip: 172.17.0.15
Generating 1cbs.json (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 1sg_1cbs_1lgs since 3 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 1sg_1ac_1cbs_1lgs since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 1sg_1ac_3cbs_1lgs since 6 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 1sg_2ac_3cbs_1lgs since 7 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 1sg_3cbs_1lgs since 5 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_1cbs_1lgs since 4 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_3cbs_2lgs since 7 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_6cbs_2lgs since 10 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_2ac_3cbs_1lgs since 8 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_2ac_3cbs_2lgs since 9 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_2ac_6cbs_2lgs since 12 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_4ac_3cbs_2lgs since 11 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_8ac_3cbs_2lgs since 15 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 2sg_8ac_6cbs_2lgs since 18 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 4sg_2ac_3cbs_4lgs since 13 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 4sg_2ac_6cbs_4lgs since 16 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 4sg_4ac_3cbs_4lgs since 15 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 4sg_4ac_6cbs_4lgs since 18 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 4sg_8ac_3cbs_4lgs since 19 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 4sg_8ac_6cbs_4lgs since 22 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 8sg_4ac_3cbs_8lgs since 23 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 8sg_4ac_6cbs_8lgs since 26 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 8sg_4ac_12cbs_8lgs since 32 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 8sg_8ac_3cbs_8lgs since 27 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 8sg_8ac_6cbs_8lgs since 30 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 8sg_12ac_3cbs_8lgs since 31 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 12sg_4ac_6cbs_12lgs since 34 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 12sg_4ac_12cbs_12lgs since 40 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 12sg_8ac_6cbs_12lgs since 38 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 12sg_8ac_12cbs_12lgs since 44 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 16sg_4ac_3cbs_16lgs since 39 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 16sg_4ac_6cbs_16lgs since 42 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 16sg_4ac_12cbs_16lgs since 48 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 16sg_8ac_3cbs_16lgs since 43 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 16sg_8ac_6cbs_16lgs since 46 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 16sg_8ac_12cbs_16lgs since 52 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 32sg_16ac_16cbs_32lgs since 96 machines required, but only 2 provided (ips: ['172.23.97.74', '172.23.104.254'])
WARNING: Skipping config 1sg_2ac_3cbs since 6 machines required, but only 2 provided
+ python libraries/utilities/install_keys.py '--public-key-path=~/.ssh/id_rsa.pub' --ssh-user=root --ssh-password=couchbase
Deploying key '~/.ssh/id_rsa.pub' to vms: ['172.23.97.74', '172.23.104.254']
Deploying key to root@172.23.97.74
Deploying key to root@172.23.104.254
/mobile-testkit/venv/lib/python3.6/site-packages/paramiko/ecdsakey.py:134: CryptographyDeprecationWarning: Support for unsafe construction of public numbers from encoded data will be removed in a future version. Please use EllipticCurvePublicKey.from_encoded_point
  , self.ecdsa_curve.curve_class(), pointinfo
/mobile-testkit/venv/lib/python3.6/site-packages/paramiko/ecdsakey.py:202: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
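The long run of "Skipping config … since N machines required, but only 2 provided" warnings above comes from a simple capacity guard in the config generator. A minimal sketch of that check, assuming nothing about the actual mobile-testkit code (`should_generate` and its parameters are illustrative names):

```python
import logging

logging.basicConfig(level=logging.WARNING, format="%(levelname)s: %(message)s")

def should_generate(config_name, required, ips):
    """Return True only when enough machines are available for this topology.

    Hypothetical sketch of the guard behind the log's
    'Skipping config ... since N machines required, but only M provided' lines.
    """
    if required > len(ips):
        logging.warning(
            "Skipping config %s since %d machines required, but only %d provided (ips: %s)",
            config_name, required, len(ips), ips,
        )
        return False
    return True
```

With the two IPs from this run, `should_generate("multiple_sync_gateways_cc", 3, ["172.23.97.74", "172.23.104.254"])` returns `False` and emits the same warning shape, which is why only the 1sg and 1cbs configs were generated.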
  signature, ec.ECDSA(self.ecdsa_curve.hash_object())
/mobile-testkit/venv/lib64/python3.6/site-packages/cryptography/hazmat/backends/openssl/ciphers.py:114: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct)
  operation
/mobile-testkit/venv/lib64/python3.6/site-packages/cryptography/hazmat/backends/openssl/ciphers.py:140: UserWarning: implicit cast from 'char *' to a different pointer type: will be forbidden in the future (check that the types are as you expect; use an explicit ffi.cast() if they are correct)
  self._backend._ffi.from_buffer(data), len(data)
/mobile-testkit/venv/lib/python3.6/site-packages/paramiko/rsakey.py:110: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead.
  algorithm=hashes.SHA1(),
+ '[' false == true ']'
+ COLLECT_LOGS_FLAG=
+ echo 'Running system test'
+ pytest -s -rsx --timeout 864000 --cbs-endpoints=172.23.97.74 --server-version=7.0.0-4291 --sgw-endpoints=172.23.104.254 --sync-gateway-version=2.8.0-374 --server-seed-docs=100000 --max-docs=1200 --num-users=12 --create-batch-size=100 --create-delay=0.1 --update-batch-size=3 --update-docs-percentage=0.1 --update-delay=1 --changes-delay=10 --changes-limit=200 --up-time=86400 testsuites/syncgateway/system/sequoia/test_system_test.py
Running system test
ERROR: usage: pytest [options] [file_or_dir] [file_or_dir] [...]
pytest: error: unrecognized arguments: --cbs-endpoints=172.23.97.74 --server-version=7.0.0-4291 --sgw-endpoints=172.23.104.254 --sync-gateway-version=2.8.0-374 --server-seed-docs=100000 --max-docs=1200 --num-users=12 --create-batch-size=100 --create-delay=0.1 --update-batch-size=3 --update-docs-percentage=0.1 --update-delay=1 --changes-delay=10 --changes-limit=200 --up-time=86400
  inifile: /mobile-testkit/pytest.ini
  rootdir: /mobile-testkit
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T09:53:21-08:00, sequoiatools/couchbase-cli:7.6:3a687e] server-add -c 172.23.97.74:8091 --server-add https://172.23.96.14 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T09:53:39-08:00, sequoiatools/couchbase-cli:7.6:046c19] rebalance -c 172.23.97.74:8091 --server-remove 172.23.96.48 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T10:04:10-08:00, sequoiatools/cmd:def2e8] 60
[pull] sequoiatools/cmd
[2023-11-14T10:05:18-08:00, sequoiatools/cmd:648946] 600
→ parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml
→ parsed providers/file/centos_second_cluster.yml
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/couchbase-cli:7.6
Test cycle started: 1
→ parsed tests/templates/kv.yml
→ parsed tests/templates/vegeta.yml
→ parsed tests/templates/analytics.yml
→ parsed tests/templates/rebalance.yml
[pull] sequoiatools/queryapp
[2023-11-14T10:15:44-08:00, sequoiatools/queryapp:c9977d] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.21/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.120.74 --port 8095 --duration 0 --bucket bucket4 --querycount 50 -a True --analytics_queries catapult_queries --query_timeout 3600 -B [bucket4,bucket5,bucket6,bucket7]
[pull] sequoiatools/queryapp
[2023-11-14T10:15:51-08:00, sequoiatools/queryapp:41fdd0] -J-Xms256m -J-Xmx512m -J-cp
/AnalyticsQueryApp/Couchbase-Java-Client-2.7.21/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.120.74 --port 8095 --duration 0 --bucket default --querycount 50 -a True --analytics_queries gideon_queries --query_timeout 3600 -B [default,WAREHOUSE] [pull] sequoiatools/cmd [2023-11-14T10:15:56-08:00, sequoiatools/cmd:f98dd4] 600 ########## Cluster config ################## ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### ###### eventing : 2 ===== > [172.23.120.58:8091 172.23.120.81:8091] ########### Test cycle: 1 ended after 637 seconds [pull] sequoiatools/cbdozer [pull] sequoiatools/cbdozer [2023-11-14T10:26:30-08:00, sequoiatools/cbdozer:823d3a] -method POST -duration 0 -rate 10 -url http://Administrator:password@172.23.97.105:8093:8095/query/service -body delete from default where rating > 0 limit 10 [pull] sequoiatools/gideon [pull] sequoiatools/gideon [2023-11-14T10:27:04-08:00, sequoiatools/gideon:79a8f4] kv --ops 500 --create 10 --delete 8 --get 92 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 512 128 1024 2048 16000 [pull] sequoiatools/gideon [2023-11-14T10:27:10-08:00, sequoiatools/gideon:22391c] kv --ops 500 --create 100 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 64 [pull] sequoiatools/gideon [2023-11-14T10:27:14-08:00, sequoiatools/gideon:f76f02] kv --ops 600 --create 15 --get 80 --delete 5 --expire 100 --ttl 660 --hosts 
172.23.97.74 --bucket default --sizes 128 → parsed tests/eventing/CC/test_eventing_rebalance_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:27:55-08:00, sequoiatools/couchbase-cli:7.6:c15c24] server-add -c 172.23.97.74:8091 --server-add https://172.23.96.48 -u Administrator -p password --server-add-username Administrator --server-add-password password --services eventing [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:28:14-08:00, sequoiatools/couchbase-cli:7.6:807216] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T10:31:19-08:00, sequoiatools/cmd:0271d6] 60 [pull] sequoiatools/cmd [2023-11-14T10:32:27-08:00, sequoiatools/cmd:95836e] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:37:52-08:00, sequoiatools/couchbase-cli:7.6:356f32] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.81 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T10:38:38-08:00, sequoiatools/cmd:6bdc38] 60 [pull] sequoiatools/cmd [2023-11-14T10:39:46-08:00, sequoiatools/cmd:fb6438] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:45:46-08:00, sequoiatools/couchbase-cli:7.6:d6af86] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.81 -u Administrator -p password --server-add-username Administrator --server-add-password password --services eventing [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:46:04-08:00, sequoiatools/couchbase-cli:7.6:ad32ff] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.58 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T10:47:43-08:00, sequoiatools/cmd:cb44d2] 60 ########## Cluster config ################## ###### eventing : 2 ===== > 
[172.23.120.81:8091 172.23.96.48:8091] ########### ###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### Test cycle: 1 ended after 1289 seconds [pull] sequoiatools/cmd [2023-11-14T10:48:51-08:00, sequoiatools/cmd:9c4739] 600 → parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/analytics.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:59:36-08:00, sequoiatools/couchbase-cli:7.6:708aab] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services analytics [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T10:59:54-08:00, sequoiatools/couchbase-cli:7.6:97d294] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T11:01:15-08:00, sequoiatools/cmd:eff7d6] 60 [pull] sequoiatools/cmd [2023-11-14T11:02:22-08:00, sequoiatools/cmd:d5adce] 30 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:03:22-08:00, sequoiatools/couchbase-cli:7.6:f5fb4e] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.75 -u Administrator -p password [pull] 
sequoiatools/cmd [2023-11-14T11:04:29-08:00, sequoiatools/cmd:8bb1d0] 60 [pull] sequoiatools/cmd [2023-11-14T11:05:36-08:00, sequoiatools/cmd:8037ed] 30 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:07:08-08:00, sequoiatools/couchbase-cli:7.6:9e2726] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.75 -u Administrator -p password --server-add-username Administrator --server-add-password password --services analytics [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:07:25-08:00, sequoiatools/couchbase-cli:7.6:4f7b71] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.58 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T11:08:43-08:00, sequoiatools/cmd:5533b4] 60 [pull] sequoiatools/cmd [2023-11-14T11:09:51-08:00, sequoiatools/cmd:36fa93] 300 ########## Cluster config ################## ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### Test cycle: 1 ended after 959 seconds [pull] sequoiatools/cmd [2023-11-14T11:14:59-08:00, sequoiatools/cmd:aa5379] 600 [pull] danihodovic/vegeta [pull] danihodovic/vegeta [2023-11-14T11:25:48-08:00, danihodovic/vegeta:113dcd] bash -c echo GET "http://Administrator:password@172.23.97.74:8092/default/_design/scale/_view/stats?limit=10&stale=update_after&connection_timeout=60000" | vegeta attack -duration=0 -rate=10> results.bin && vegeta report 
-inputs=results.bin > results.txt && vegeta report -inputs=results.bin -reporter=plot > plot.html [pull] danihodovic/vegeta [2023-11-14T11:25:53-08:00, danihodovic/vegeta:b28d86] bash -c echo GET "http://Administrator:password@172.23.96.14:8092/default/_design/scale/_view/array?limit=10&stale=update_after&connection_timeout=60000" | vegeta attack -duration=0 -rate=10> results.bin && vegeta report -inputs=results.bin > results.txt && vegeta report -inputs=results.bin -reporter=plot > plot.html [pull] danihodovic/vegeta [2023-11-14T11:25:58-08:00, danihodovic/vegeta:e2fe5d] bash -c echo GET "http://Administrator:password@172.23.97.241:8092/default/_design/scale/_view/padd?limit=10&stale=update_after&connection_timeout=60000" | vegeta attack -duration=0 -rate=10> results.bin && vegeta report -inputs=results.bin > results.txt && vegeta report -inputs=results.bin -reporter=plot > plot.html [pull] appropriate/curl [2023-11-14T11:26:04-08:00, appropriate/curl:641966] -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.97.110:8094/api/index/good_state -d { "type": "fulltext-index","name": "SUCCESS","sourceType": "couchbase","sourceName": "default","planParams": { "maxPartitionsPerPIndex": 171 },"params": { "doc_config": { "mode": "type_field","type_field": "result" },"mapping": { "default_mapping": { "enabled": false },"index_dynamic": true,"store_dynamic": false,"types": { "SUCCESS": { "dynamic": false,"enabled": true,"properties": { "state": { "dynamic": false,"enabled": true,"fields": [ { "analyzer": "","include_in_all": true,"include_term_vectors": true,"index": true,"name": "state","store": false,"type": "text" } ] } } } } },"store": { "kvStoreName": "mossStore","indexType": "scorch" } },"sourceParams": {} } [pull] appropriate/curl [2023-11-14T11:26:12-08:00, appropriate/curl:48e98c] -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.97.110:8094/api/index/social -d { "type": "fulltext-index","name": 
"gideon","sourceType": "couchbase","sourceName": "default","planParams": { "maxPartitionsPerPIndex": 171 },"params": { "doc_config": { "mode": "type_field","type_field": "type" },"mapping": { "default_mapping": { "enabled": false },"index_dynamic": true,"store_dynamic": false,"types": { "gideon": { "dynamic": false,"enabled": true,"properties": { "description": { "dynamic": false,"enabled": true,"fields": [ { "analyzer": "","include_in_all": true,"include_term_vectors": true,"index": true,"name": "description","store": true,"type": "text" } ] },"profile": { "dynamic": false,"enabled": true,"properties": { "status": { "dynamic": false,"enabled": true,"fields": [ { "analyzer": "","include_in_all": true,"include_term_vectors": true,"index": true,"name": "status","store": true,"type": "text" } ] } } } } } } },"store": { "kvStoreName": "mossStore","indexType": "scorch" } },"sourceParams": {} } [pull] appropriate/curl [2023-11-14T11:26:17-08:00, appropriate/curl:f09ec8] -s http://Administrator:password@172.23.97.74:8091/pools/default/remoteClusters [pull] appropriate/curl [2023-11-14T11:26:25-08:00, appropriate/curl:12a041] -u Administrator:password -X POST http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/default/default -d filterExpression=rating>500 -d filterSkipRestream=0 [pull] appropriate/curl [2023-11-14T11:26:32-08:00, appropriate/curl:23e928] -u Administrator:password -X POST http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/default/default -d filterExpression=REGEXP_CONTAINS(META().id,0$) -d filterSkipRestream=0 [pull] appropriate/curl [2023-11-14T11:26:40-08:00, appropriate/curl:21d6ce] -u Administrator:password -X POST http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/default/default -d filterExpiration=true -d filterBypassExpiry=true -d filterDeletion=false -d filterExpression=result<>SUCCESS -d filterSkipRestream=1 → parsed 
tests/eventing/CC/test_eventing_rebalance_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/eventing:7.0 [2023-11-14T11:26:49-08:00, sequoiatools/eventing:7.0:9edb23] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o pause [pull] sequoiatools/eventing:7.0 [2023-11-14T11:26:57-08:00, sequoiatools/eventing:7.0:62d03c] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o wait_for_state --state paused ########## Cluster config ################## ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### index : 4 ===== > [172.23.123.31:8091 172.23.123.32:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### Test cycle: 1 ended after 37 seconds [pull] sequoiatools/pillowfight:7.0 [2023-11-14T11:27:26-08:00, sequoiatools/pillowfight:7.0:fb1e4e] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password → parsed tests/2i/cheshirecat/test_idx_cc_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/n1ql.yml → parsed tests/templates/rebalance.yml [pull] 
sequoiatools/couchbase-cli:7.6 [2023-11-14T11:28:13-08:00, sequoiatools/couchbase-cli:7.6:a98b11] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:28:30-08:00, sequoiatools/couchbase-cli:7.6:57bfb4] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T11:30:32-08:00, sequoiatools/cmd:7b1605] 60 [pull] sequoiatools/cmd [2023-11-14T11:31:41-08:00, sequoiatools/cmd:300ee1] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:37:17-08:00, sequoiatools/couchbase-cli:7.6:16aee7] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.58 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T11:39:11-08:00, sequoiatools/cmd:7c3a14] 60 [pull] sequoiatools/cmd [2023-11-14T11:40:18-08:00, sequoiatools/cmd:f081bc] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:46:28-08:00, sequoiatools/couchbase-cli:7.6:f83530] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T11:46:46-08:00, sequoiatools/couchbase-cli:7.6:ae9cd6] rebalance -c 172.23.97.74:8091 --server-remove 172.23.123.31 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T11:49:28-08:00, sequoiatools/cmd:d30973] 60 [pull] sequoiatools/cmd [2023-11-14T11:50:35-08:00, sequoiatools/cmd:22de03] 300 [pull] sequoiatools/cbq [pull] sequoiatools/cbq [2023-11-14T11:56:37-08:00, sequoiatools/cbq:ebf9e8] -e=http://172.23.96.243:8093 -u=Administrator -p=password -script=ALTER INDEX `default`.default_claims WITH {"action":"replica_count","num_replica": 3} [pull] sequoiatools/cmd [2023-11-14T11:56:45-08:00, sequoiatools/cmd:2d1179] 300 [pull] sequoiatools/wait_for_idx_build_complete 
[2023-11-14T12:01:55-08:00, sequoiatools/wait_for_idx_build_complete:54d566] 172.23.120.58 Administrator password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:03:11-08:00, sequoiatools/couchbase-cli:7.6:b91ded] failover -c 172.23.97.74:8091 --server-failover 172.23.96.254:8091 -u Administrator -p password --hard [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:03:20-08:00, sequoiatools/couchbase-cli:7.6:040f3d] recovery -c 172.23.97.74:8091 --server-recovery 172.23.96.254:8091 --recovery-type full -u Administrator -p password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:03:28-08:00, sequoiatools/couchbase-cli:7.6:3b563c] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T12:05:18-08:00, sequoiatools/cmd:589a7c] 60 [pull] sequoiatools/cmd [2023-11-14T12:06:25-08:00, sequoiatools/cmd:8ceeba] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:11:52-08:00, sequoiatools/couchbase-cli:7.6:68eb25] failover -c 172.23.97.74:8091 --server-failover 172.23.123.32:8091 -u Administrator -p password --hard [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:12:00-08:00, sequoiatools/couchbase-cli:7.6:4e1f02] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T12:13:27-08:00, sequoiatools/cmd:94b55f] 60 [pull] sequoiatools/cmd [2023-11-14T12:14:35-08:00, sequoiatools/cmd:6be1e5] 300 [pull] sequoiatools/cbq [2023-11-14T12:19:59-08:00, sequoiatools/cbq:3d6686] -e=http://172.23.96.243:8093 -u=Administrator -p=password -script=ALTER INDEX `default`.default_claims WITH {"action":"replica_count","num_replica": 2} [pull] sequoiatools/cmd [2023-11-14T12:20:07-08:00, sequoiatools/cmd:790a45] 300 [pull] sequoiatools/wait_for_idx_build_complete [2023-11-14T12:25:17-08:00, sequoiatools/wait_for_idx_build_complete:efaff4] 172.23.120.58 Administrator password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:26:42-08:00, sequoiatools/couchbase-cli:7.6:076d38] server-add -c 
172.23.97.74:8091 --server-add https://172.23.123.31 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:26:59-08:00, sequoiatools/couchbase-cli:7.6:276f97] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T12:29:46-08:00, sequoiatools/cmd:af0835] 60 [pull] sequoiatools/cmd [2023-11-14T12:30:53-08:00, sequoiatools/cmd:8b39fc] 300 ########## Cluster config ################## ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### ###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### Test cycle: 1 ended after 4105 seconds [pull] sequoiatools/cmd [2023-11-14T12:36:01-08:00, sequoiatools/cmd:9a09bb] 600 → parsed tests/eventing/CC/test_eventing_rebalance_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/eventing:7.0 [2023-11-14T12:46:28-08:00, sequoiatools/eventing:7.0:c8f814] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o resume [pull] sequoiatools/eventing:7.0 [2023-11-14T12:46:40-08:00, sequoiatools/eventing:7.0:391eaa] eventing_helper.py -i 172.23.96.48 -u 
Administrator -p password -o wait_for_state --state deployed ########## Cluster config ################## ###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.96.122:8091 172.23.96.14:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### fts : 2 ===== > [172.23.97.110:8091 172.23.97.148:8091] ########### Test cycle: 1 ended after 88 seconds [pull] sequoiatools/pillowfight:7.0 [2023-11-14T12:47:39-08:00, sequoiatools/pillowfight:7.0:6ce02b] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:48:05-08:00, sequoiatools/couchbase-cli:7.6:ca72aa] server-add -c 172.23.97.74:8091 --server-add https://172.23.123.32 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:48:33-08:00, sequoiatools/couchbase-cli:7.6:627d36] failover -c 172.23.97.74:8091 --server-failover 172.23.96.14:8091 -u Administrator -p password --hard [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T12:48:42-08:00, sequoiatools/couchbase-cli:7.6:3b323d] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T13:33:24-08:00, sequoiatools/cmd:1cabdf] 60 [pull] sequoiatools/cmd [2023-11-14T13:34:32-08:00, sequoiatools/cmd:861b8f] 600 [pull] appropriate/curl [2023-11-14T13:44:39-08:00, appropriate/curl:47a54a] -u Administrator:password -X POST 
http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/bucket8/bucket8 -d pauseRequested=true [pull] sequoiatools/cmd [2023-11-14T13:44:47-08:00, sequoiatools/cmd:37eae5] 300 [pull] appropriate/curl [2023-11-14T13:49:54-08:00, appropriate/curl:23728c] -u Administrator:password -X POST http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/bucket8/bucket8 -d pauseRequested=false [pull] sequoiatools/gideon [2023-11-14T13:50:02-08:00, sequoiatools/gideon:26a34f] kv --ops 500 --create 100 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 64 [pull] sequoiatools/pillowfight:7.0 [2023-11-14T13:50:06-08:00, sequoiatools/pillowfight:7.0:2dc22f] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T13:50:33-08:00, sequoiatools/couchbase-cli:7.6:a01b4c] server-add -c 172.23.97.74:8091 --server-add https://172.23.96.14 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T13:51:18-08:00, sequoiatools/couchbase-cli:7.6:71dde7] failover -c 172.23.97.74:8091 --server-failover 172.23.96.122:8091 -u Administrator -p password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T13:56:58-08:00, sequoiatools/couchbase-cli:7.6:4d16dc] failover -c 172.23.97.74:8091 --server-failover 172.23.121.77:8091 -u Administrator -p password --hard [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T13:57:07-08:00, sequoiatools/couchbase-cli:7.6:0ef965] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T14:39:58-08:00, sequoiatools/cmd:5bc2dd] 60 [pull] sequoiatools/cmd [2023-11-14T14:41:05-08:00, sequoiatools/cmd:eed9a1] 600 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T14:53:08-08:00, sequoiatools/couchbase-cli:7.6:cae3ce] setting-autofailover -c 172.23.97.74:8091 -u Administrator -p 
password --enable-auto-failover=1 --auto-failover-timeout=5 --max-failovers=1 [pull] sequoiatools/cmd [2023-11-14T14:53:16-08:00, sequoiatools/cmd:7b752b] 10 [pull] sequoiatools/cbutil [pull] sequoiatools/cbutil [2023-11-14T14:53:54-08:00, sequoiatools/cbutil:d124e9] /cbinit.py 172.23.96.14 root couchbase stop [pull] sequoiatools/cmd [2023-11-14T14:54:09-08:00, sequoiatools/cmd:4b5795] 10 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T14:54:27-08:00, sequoiatools/couchbase-cli:7.6:3e02e4] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T15:39:18-08:00, sequoiatools/cmd:3be15e] 60 [pull] sequoiatools/cmd [2023-11-14T15:40:26-08:00, sequoiatools/cmd:be2b62] 180 [pull] sequoiatools/cbutil [2023-11-14T15:43:33-08:00, sequoiatools/cbutil:72dc2d] /cbinit.py 172.23.96.14 root couchbase start [pull] sequoiatools/cmd [2023-11-14T15:43:41-08:00, sequoiatools/cmd:4e3bb1] 300 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T15:48:49-08:00, sequoiatools/couchbase-cli:7.6:b78850] server-add -c 172.23.97.74:8091 --server-add https://172.23.96.14 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T15:49:06-08:00, sequoiatools/couchbase-cli:7.6:690d0d] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T16:21:07-08:00, sequoiatools/cmd:ea82a7] 60 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T16:22:15-08:00, sequoiatools/couchbase-cli:7.6:97f0fe] setting-autofailover -c 172.23.97.74:8091 -u Administrator -p password --enable-auto-failover=0 [pull] sequoiatools/cmd [2023-11-14T16:22:23-08:00, sequoiatools/cmd:552faa] 600 [pull] sequoiatools/pillowfight:7.0 [2023-11-14T16:32:30-08:00, sequoiatools/pillowfight:7.0:5b11ae] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T16:33:13-08:00, 
sequoiatools/couchbase-cli:7.6:95f840] server-add -c 172.23.97.74:8091 --server-add https://172.23.96.122 -u Administrator -p password --server-add-username Administrator --server-add-password password --services fts [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T16:33:30-08:00, sequoiatools/couchbase-cli:7.6:dcb673] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T16:36:11-08:00, sequoiatools/cmd:8b4873] 60 [pull] sequoiatools/cmd [2023-11-14T16:37:19-08:00, sequoiatools/cmd:b84260] 900 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T16:52:43-08:00, sequoiatools/couchbase-cli:7.6:d324a6] failover -c 172.23.97.74:8091 --server-failover 172.23.97.110:8091 -u Administrator -p password --hard [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T16:53:10-08:00, sequoiatools/couchbase-cli:7.6:45e460] recovery -c 172.23.97.74:8091 --server-recovery 172.23.97.110:8091 --recovery-type full -u Administrator -p password [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T16:53:15-08:00, sequoiatools/couchbase-cli:7.6:1f3db6] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T16:58:14-08:00, sequoiatools/cmd:bbcf10] 60 [pull] sequoiatools/cmd [2023-11-14T16:59:22-08:00, sequoiatools/cmd:fd7fd2] 900 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T17:14:47-08:00, sequoiatools/couchbase-cli:7.6:a659d2] failover -c 172.23.97.74:8091 --server-failover 172.23.97.110:8091 -u Administrator -p password --hard [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T17:14:56-08:00, sequoiatools/couchbase-cli:7.6:64f45b] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T17:18:28-08:00, sequoiatools/cmd:da9e16] 60 [pull] sequoiatools/cmd [2023-11-14T17:19:36-08:00, sequoiatools/cmd:e4c430] 900 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T17:35:03-08:00, sequoiatools/couchbase-cli:7.6:45b042] server-add -c 172.23.97.74:8091 --server-add 
https://172.23.121.77 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T17:35:37-08:00, sequoiatools/couchbase-cli:7.6:24f036] server-add -c 172.23.97.74:8091 --server-add https://172.23.97.110 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T17:35:55-08:00, sequoiatools/couchbase-cli:7.6:fc3999] rebalance -c 172.23.97.74:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T18:02:43-08:00, sequoiatools/cmd:692649] 60 [pull] sequoiatools/cmd [2023-11-14T18:03:51-08:00, sequoiatools/cmd:fc8887] 600 → parsed tests/eventing/CC/test_eventing_rebalance_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/eventing:7.0 [2023-11-14T18:14:16-08:00, sequoiatools/eventing:7.0:df931d] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o undeploy [pull] sequoiatools/eventing:7.0 [2023-11-14T18:14:24-08:00, sequoiatools/eventing:7.0:5f4c99] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o wait_for_state --state undeployed [pull] sequoiatools/eventing:7.0 [2023-11-14T18:16:33-08:00, sequoiatools/eventing:7.0:0cb61b] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o delete ########## Cluster config ################## ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### kv : 11 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 
172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ########### Test cycle: 1 ended after 161 seconds → parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/analytics.yml → parsed tests/templates/rebalance.yml → remove cbas_queries_1 → remove cbas_queries_2 [pull] sequoiatools/analyticsmanager:1.0 [2023-11-14T18:16:45-08:00, sequoiatools/analyticsmanager:1.0:63462c] -i 172.23.120.74 -b default,WAREHOUSE,bucket4,bucket5,bucket6,bucket7 -o drop_cbas_infra --api_timeout 3600 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T18:19:07-08:00, sequoiatools/couchbase-cli:7.6:02d4f7] server-add -c 172.23.97.74:8091 --server-add https://172.23.97.149 -u Administrator -p password --server-add-username Administrator --server-add-password password --services analytics [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T18:19:24-08:00, sequoiatools/couchbase-cli:7.6:e05020] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.75 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T18:20:01-08:00, sequoiatools/cmd:aa91dc] 60 [pull] sequoiatools/cmd [2023-11-14T18:21:08-08:00, sequoiatools/cmd:ea64b1] 300 ########## Cluster config ################## ###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### kv : 11 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 
172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.97.149:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### Test cycle: 1 ended after 573 seconds → remove collection_crud1 → remove collection_crud2 → remove catapult_bucket4_doc_ops2 → remove catapult_bucket5_doc_ops2 → remove catapult_bucket6_doc_ops2 → remove catapult_bucket7_doc_ops2 → remove catapult_bucket4_doc_ops1 → remove catapult_bucket5_doc_ops1 → remove catapult_bucket6_doc_ops1 → remove catapult_bucket7_doc_ops1 → remove catapult_bucket8_doc_ops1 → remove catapult_bucket9_doc_ops1 → remove txn [pull] sequoiatools/cmd [2023-11-14T18:26:19-08:00, sequoiatools/cmd:fac369] 1200 [pull] sequoiatools/xdcrmanager [pull] sequoiatools/xdcrmanager [2023-11-14T18:46:59-08:00, sequoiatools/xdcrmanager:78c8c5] -n 172.23.97.74 -o 8091 -u Administrator -p password -a validate -rn 172.23.106.136 -ro 8091 -ru Administrator -rp password -b bucket4 -rb bucket4 [pull] sequoiatools/xdcrmanager [2023-11-14T18:47:06-08:00, sequoiatools/xdcrmanager:2276a0] -n 172.23.97.74 -o 8091 -u Administrator -p password -a validate -rn 172.23.106.136 -ro 8091 -ru Administrator -rp password -b bucket8 -rb bucket8 [pull] sequoiatools/xdcrmanager [2023-11-14T18:47:14-08:00, sequoiatools/xdcrmanager:58fda9] -n 172.23.97.74 -o 8091 -u Administrator -p password -a validate -rn 172.23.106.136 -ro 8091 -ru Administrator -rp password -b bucket9 -rb bucket9 [pull] sequoiatools/indexmanager [2023-11-14T18:47:22-08:00, sequoiatools/indexmanager:25e7f6] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket4 -a item_count_check --sample_size 10 [pull] 
sequoiatools/indexmanager [2023-11-14T18:47:41-08:00, sequoiatools/indexmanager:5197a9] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a item_count_check --sample_size 10
→ Error occurred on container - sequoiatools/indexmanager:[-n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a item_count_check --sample_size 10] docker logs 5197a9 docker start 5197a9
2023-11-14 18:47:42,020 - indexmanager - INFO - Capella flag is set to False. Use tls flag is set to False
2023-11-14 18:47:42,020 - indexmanager - INFO - Indexes will be chosen at random from the sample statements [{'indexname': 'idx1', 'statement': 'CREATE INDEX `idx1_idxprefix` ON keyspacenameplaceholder(country, DISTINCT ARRAY `r`.`ratings`.`Check in / front desk` FOR r in `reviews` END,array_count((`public_likes`)),array_count((`reviews`)) DESC,`type`,`phone`,`price`,`email`,`address`,`name`,`url`) '}, {'indexname': 'idx2', 'statement': 'CREATE INDEX `idx2_idxprefix` ON keyspacenameplaceholder(`free_breakfast`,`type`,`free_parking`,array_count((`public_likes`)),`price`,`country`)'}, {'indexname': 'idx3', 'statement': 'CREATE INDEX `idx3_idxprefix` ON keyspacenameplaceholder(`free_breakfast`,`free_parking`,`country`,`city`) '}, {'indexname': 'idx4', 'statement': 'CREATE INDEX `idx4_idxprefix` ON keyspacenameplaceholder(`price`,`city`,`name`)'}, {'indexname': 'idx5', 'statement': 'CREATE INDEX `idx5_idxprefix` ON keyspacenameplaceholder(ALL ARRAY `r`.`ratings`.`Rooms` FOR r IN `reviews` END,`avg_rating`)'}, {'indexname': 'idx6', 'statement': 'CREATE INDEX `idx6_idxprefix` ON keyspacenameplaceholder(`city`)'}, {'indexname': 'idx7', 'statement': 'CREATE INDEX `idx7_idxprefix` ON keyspacenameplaceholder(`price`,`name`,`city`,`country`)'}, {'indexname': 'idx8', 'statement': 'CREATE INDEX `idx8_idxprefix` ON keyspacenameplaceholder(DISTINCT ARRAY FLATTEN_KEYS(`r`.`author`,`r`.`ratings`.`Cleanliness`) FOR r IN `reviews` when `r`.`ratings`.`Cleanliness` < 4 END, `country`, `email`,
`free_parking`)'}, {'indexname': 'idx9', 'statement': 'CREATE INDEX `idx9_idxprefix` ON keyspacenameplaceholder(ALL ARRAY FLATTEN_KEYS(`r`.`author`,`r`.`ratings`.`Rooms`) FOR r IN `reviews` END, `free_parking`)'}, {'indexname': 'idx10', 'statement': 'CREATE INDEX `idx10_idxprefix` ON keyspacenameplaceholder((ALL (ARRAY(ALL (ARRAY flatten_keys(n,v) FOR n:v IN (`r`.`ratings`) END)) FOR `r` IN `reviews` END)))'}, {'indexname': 'idx11', 'statement': 'CREATE INDEX `idx11_idxprefix` ON keyspacenameplaceholder(ALL ARRAY FLATTEN_KEYS(`r`.`ratings`.`Rooms`,`r`.`ratings`.`Cleanliness`) FOR r IN `reviews` END, `email`, `free_parking`)'}, {'indexname': 'idx12', 'statement': 'CREATE INDEX `idx12_idxprefix` ON keyspacenameplaceholder(`name` INCLUDE MISSING DESC,`phone`,`type`)'}, {'indexname': 'idx13', 'statement': 'CREATE INDEX `idx13_idxprefix` ON keyspacenameplaceholder(`city` INCLUDE MISSING ASC, `phone`)'}] 2023-11-14 18:47:42,020 - indexmanager - INFO - This is a Server run. Will create cluster object against server 172.23.97.74 with username Administrator password password 2023-11-14 18:47:42,548 - indexmanager - INFO - Results from system:buckets [{'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'ITEM', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:ITEM'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'NEW_ORDER', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:NEW_ORDER'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'WAREHOUSE', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:WAREHOUSE'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket4', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket4'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket5', 'namespace': 'default', 'namespace_id': 'default', 'path': 
'default:bucket5'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket6', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket6'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket7', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket7'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 420, 'name': 'bucket8', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket8'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 402, 'name': 'bucket9', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket9'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'default', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:default'}}]
2023-11-14 18:47:42,548 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14 18:47:42,646 - indexmanager - INFO - Node map is [{'hostname': '172.23.120.58', 'services': ['index'], 'memUsage': 13.56, 'cpuUsage': 34.05, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.73', 'services': ['kv'], 'memUsage': 32.91, 'cpuUsage': 42.44, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.74', 'services': ['cbas'], 'memUsage': 46.6, 'cpuUsage': 2.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.77', 'services': ['kv'], 'memUsage': 36.08, 'cpuUsage': 40.85, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.81', 'services': ['eventing'], 'memUsage': 7.19, 'cpuUsage': 2.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.86', 'services': ['kv'], 'memUsage': 35.43, 'cpuUsage': 43.64, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.121.77', 'services': ['kv'], 'memUsage': 32.1, 'cpuUsage': 40.76, 'status':
'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.25', 'services': ['kv'], 'memUsage': 33.76, 'cpuUsage': 39.11, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.26', 'services': ['kv'], 'memUsage': 35.56, 'cpuUsage': 43.09, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.31', 'services': ['index'], 'memUsage': 16.07, 'cpuUsage': 24.34, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.32', 'services': ['kv'], 'memUsage': 35.79, 'cpuUsage': 46.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.33', 'services': ['backup'], 'memUsage': 8.71, 'cpuUsage': 3.76, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.122', 'services': ['fts'], 'memUsage': 31.14, 'cpuUsage': 6.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.14', 'services': ['kv'], 'memUsage': 29.49, 'cpuUsage': 38.56, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.243', 'services': ['n1ql'], 'memUsage': 12.78, 'cpuUsage': 22.81, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.254', 'services': ['index'], 'memUsage': 16.32, 'cpuUsage': 28.67, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.48', 'services': ['eventing'], 'memUsage': 7.58, 'cpuUsage': 3.01, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.105', 'services': ['n1ql'], 'memUsage': 12.22, 'cpuUsage': 22.9, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.110', 'services': ['kv'], 'memUsage': 31.15, 'cpuUsage': 41.53, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.112', 'services': ['index'], 'memUsage': 17.11, 'cpuUsage': 34.49, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.148', 'services': ['fts'], 'memUsage': 41.15, 'cpuUsage': 7.68, 'status': 'healthy', 
'clusterMembership': 'active'}, {'hostname': '172.23.97.149', 'services': ['cbas'], 'memUsage': 9.21, 'cpuUsage': 2.06, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.241', 'services': ['kv'], 'memUsage': 33.63, 'cpuUsage': 39.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.74', 'services': ['kv'], 'memUsage': 37.34, 'cpuUsage': 39.49, 'status': 'healthy', 'clusterMembership': 'active'}]
2023-11-14 18:47:42,647 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14 18:47:42,670 - indexmanager - INFO - Node map is [{'hostname': '172.23.120.58', 'services': ['index'], 'memUsage': 13.56, 'cpuUsage': 34.05, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.73', 'services': ['kv'], 'memUsage': 32.91, 'cpuUsage': 42.44, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.74', 'services': ['cbas'], 'memUsage': 46.6, 'cpuUsage': 2.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.77', 'services': ['kv'], 'memUsage': 36.08, 'cpuUsage': 40.85, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.81', 'services': ['eventing'], 'memUsage': 7.19, 'cpuUsage': 2.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.86', 'services': ['kv'], 'memUsage': 35.43, 'cpuUsage': 43.64, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.121.77', 'services': ['kv'], 'memUsage': 32.1, 'cpuUsage': 40.76, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.25', 'services': ['kv'], 'memUsage': 33.76, 'cpuUsage': 39.11, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.26', 'services': ['kv'], 'memUsage': 35.56, 'cpuUsage': 43.09, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.31', 'services': ['index'], 'memUsage': 16.07, 'cpuUsage': 24.34, 'status':
'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.32', 'services': ['kv'], 'memUsage': 35.79, 'cpuUsage': 46.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.33', 'services': ['backup'], 'memUsage': 8.71, 'cpuUsage': 3.76, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.122', 'services': ['fts'], 'memUsage': 31.14, 'cpuUsage': 6.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.14', 'services': ['kv'], 'memUsage': 29.49, 'cpuUsage': 38.56, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.243', 'services': ['n1ql'], 'memUsage': 12.78, 'cpuUsage': 22.81, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.254', 'services': ['index'], 'memUsage': 16.32, 'cpuUsage': 28.67, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.48', 'services': ['eventing'], 'memUsage': 7.58, 'cpuUsage': 3.01, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.105', 'services': ['n1ql'], 'memUsage': 12.22, 'cpuUsage': 22.9, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.110', 'services': ['kv'], 'memUsage': 31.15, 'cpuUsage': 41.53, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.112', 'services': ['index'], 'memUsage': 17.11, 'cpuUsage': 34.49, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.148', 'services': ['fts'], 'memUsage': 41.15, 'cpuUsage': 7.68, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.149', 'services': ['cbas'], 'memUsage': 9.21, 'cpuUsage': 2.06, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.241', 'services': ['kv'], 'memUsage': 33.63, 'cpuUsage': 39.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.74', 'services': ['kv'], 'memUsage': 37.34, 'cpuUsage': 39.49, 'status': 'healthy', 
'clusterMembership': 'active'}]
2023-11-14 18:47:42,670 - indexmanager - INFO - N1QL nodes ['172.23.96.243', '172.23.97.105'] and Index nodes : ['172.23.120.58', '172.23.123.31', '172.23.96.254', '172.23.97.112']
2023-11-14 18:47:42,671 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14 18:47:42,695 - indexmanager - INFO - Node map is [{'hostname': '172.23.120.58', 'services': ['index'], 'memUsage': 13.56, 'cpuUsage': 34.05, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.73', 'services': ['kv'], 'memUsage': 32.91, 'cpuUsage': 42.44, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.74', 'services': ['cbas'], 'memUsage': 46.6, 'cpuUsage': 2.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.77', 'services': ['kv'], 'memUsage': 36.08, 'cpuUsage': 40.85, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.81', 'services': ['eventing'], 'memUsage': 7.19, 'cpuUsage': 2.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.86', 'services': ['kv'], 'memUsage': 35.43, 'cpuUsage': 43.64, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.121.77', 'services': ['kv'], 'memUsage': 32.1, 'cpuUsage': 40.76, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.25', 'services': ['kv'], 'memUsage': 33.76, 'cpuUsage': 39.11, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.26', 'services': ['kv'], 'memUsage': 35.56, 'cpuUsage': 43.09, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.31', 'services': ['index'], 'memUsage': 16.07, 'cpuUsage': 24.34, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.32', 'services': ['kv'], 'memUsage': 35.79, 'cpuUsage': 46.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.33', 'services': ['backup'], 'memUsage': 8.71, 'cpuUsage': 3.76, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.122', 'services': ['fts'], 'memUsage': 31.14, 'cpuUsage': 6.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.14', 'services': ['kv'], 'memUsage': 29.49, 'cpuUsage': 38.56, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.243', 'services': ['n1ql'], 'memUsage': 12.78, 'cpuUsage': 22.81, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.254', 'services': ['index'], 'memUsage': 16.32, 'cpuUsage': 28.67, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.48', 'services': ['eventing'], 'memUsage': 7.58, 'cpuUsage': 3.01, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.105', 'services': ['n1ql'], 'memUsage': 12.22, 'cpuUsage': 22.9, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.110', 'services': ['kv'], 'memUsage': 31.15, 'cpuUsage': 41.53, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.112', 'services': ['index'], 'memUsage': 17.11, 'cpuUsage': 34.49, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.148', 'services': ['fts'], 'memUsage': 41.15, 'cpuUsage': 7.68, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.149', 'services': ['cbas'], 'memUsage': 9.21, 'cpuUsage': 2.06, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.241', 'services': ['kv'], 'memUsage': 33.63, 'cpuUsage': 39.93, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.74', 'services': ['kv'], 'memUsage': 37.34, 'cpuUsage': 39.49, 'status': 'healthy', 'clusterMembership': 'active'}]
2023-11-14 18:47:42,696 - indexmanager - INFO - Setting Max Replica for this test to : 3
2023-11-14 18:47:52,723 - indexmanager - INFO - ['`bucket5`.`scope_1`.`coll_4`', '`bucket5`.`scope_1`.`coll_3`', '`bucket5`.`scope_1`.`coll_2`', '`bucket5`.`scope_1`.`coll_1`', '`bucket5`.`scope_1`.`coll_0`', '`bucket5`.`scope_0`.`coll_4`', '`bucket5`.`scope_0`.`coll_3`', '`bucket5`.`scope_0`.`coll_2`', '`bucket5`.`scope_0`.`coll_1`', '`bucket5`.`scope_0`.`coll_0`', '`bucket5`.`_default`.`_default`', '`bucket5`.`_system`.`_query`', '`bucket5`.`_system`.`_mobile`']
2023-11-14 18:47:52,723 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14 18:47:52,811 - indexmanager - INFO - Node map is [{'hostname': '172.23.120.58', 'services': ['index'], 'memUsage': 13.63, 'cpuUsage': 33.86, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.73', 'services': ['kv'], 'memUsage': 33.01, 'cpuUsage': 41.74, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.74', 'services': ['cbas'], 'memUsage': 46.6, 'cpuUsage': 3.14, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.77', 'services': ['kv'], 'memUsage': 35.91, 'cpuUsage': 39.49, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.81', 'services': ['eventing'], 'memUsage': 7.25, 'cpuUsage': 2.82, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.86', 'services': ['kv'], 'memUsage': 35.54, 'cpuUsage': 42.33, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.121.77', 'services': ['kv'], 'memUsage': 31.97, 'cpuUsage': 41.08, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.25', 'services': ['kv'], 'memUsage': 33.6, 'cpuUsage': 40.23, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.26', 'services': ['kv'], 'memUsage': 35.45, 'cpuUsage': 43.96, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.31', 'services': ['index'], 'memUsage': 15.95, 'cpuUsage': 25.91, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.32', 'services': ['kv'], 'memUsage': 35.53, 'cpuUsage': 38.67, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.33', 'services': ['backup'], 'memUsage': 8.56, 'cpuUsage': 4.06, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.122', 'services': ['fts'], 'memUsage': 31.2, 'cpuUsage': 7.13, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.14', 'services': ['kv'], 'memUsage': 29.3, 'cpuUsage': 38.63, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.243', 'services': ['n1ql'], 'memUsage': 12.8, 'cpuUsage': 22.69, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.254', 'services': ['index'], 'memUsage': 16.06, 'cpuUsage': 28.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.48', 'services': ['eventing'], 'memUsage': 7.58, 'cpuUsage': 3.12, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.105', 'services': ['n1ql'], 'memUsage': 12.52, 'cpuUsage': 23.19, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.110', 'services': ['kv'], 'memUsage': 31.0, 'cpuUsage': 40.42, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.112', 'services': ['index'], 'memUsage': 17.06, 'cpuUsage': 34.79, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.148', 'services': ['fts'], 'memUsage': 40.98, 'cpuUsage': 8.6, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.149', 'services': ['cbas'], 'memUsage': 9.13, 'cpuUsage': 2.32, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.241', 'services': ['kv'], 'memUsage': 33.66, 'cpuUsage': 39.21, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.74', 'services': ['kv'], 'memUsage': 37.49, 'cpuUsage': 39.24, 'status': 'healthy', 'clusterMembership': 'active'}]
2023-11-14 18:47:52,811 - indexmanager - INFO - URL used for get_index_map is
http://172.23.120.58:9102/getIndexStatus 2023-11-14 18:47:53,015 - indexmanager - INFO - Item count for index idx11_ZbcOR (replica 1) on `bucket5`.`_system`.`_query` is 0. Pending Mutations = 0 Total items in collection are 97 2023-11-14 18:47:53,093 - indexmanager - INFO - Item count for index idx5_5R3lKgP (replica 3) on `bucket5`.`_system`.`_query` is 0. Pending Mutations = 0 Total items in collection are 97 2023-11-14 18:47:53,140 - indexmanager - INFO - Item count for index idx2_IAOZSj (replica 1) on `bucket5`.`scope_1`.`coll_2` is 0. Pending Mutations = 0 Total items in collection are 0 2023-11-14 18:47:53,173 - indexmanager - INFO - Item count for index idx5_5R3lKgP on `bucket5`.`_system`.`_query` is 0. Pending Mutations = 0 Total items in collection are 97 2023-11-14 18:47:53,188 - indexmanager - INFO - Item count for index idx11_ZbcOR on `bucket5`.`_system`.`_query` is 0. Pending Mutations = 0 Total items in collection are 97 2023-11-14 18:47:53,235 - indexmanager - INFO - Item count for index idx2_IAOZSj (replica 3) on `bucket5`.`scope_1`.`coll_2` is 0. Pending Mutations = 0 Total items in collection are 0 2023-11-14 18:47:53,252 - indexmanager - INFO - Item count for index #primary on `bucket5`.`_system`.`_query` is 97. Pending Mutations = 0 Total items in collection are 97 2023-11-14 18:47:53,268 - indexmanager - INFO - Item count for index idx1_XuQy086Q on `bucket5`.`scope_0`.`coll_2` is 0. 
Pending Mutations = 0 Total items in collection are 0
Traceback (most recent call last):
  File "/indexmanager.py", line 1648, in <module>
    indexMgr.item_count_check(indexMgr.sample_size)
  File "/indexmanager.py", line 910, in item_count_check
    raise Exception("There were errors in the item count check phase - \n{0}".format(errors))
Exception: There were errors in the item count check phase - [{'type': 'item_count_check_failed', 'index_name': 'idx11_ZbcOR (replica 1)', 'keyspace': '`bucket5`.`_system`.`_query`', 'index_item_count': 0, 'index_pending_mutations': 0, 'kv_item_count': 97}, {'type': 'item_count_check_failed', 'index_name': 'idx5_5R3lKgP (replica 3)', 'keyspace': '`bucket5`.`_system`.`_query`', 'index_item_count': 0, 'index_pending_mutations': 0, 'kv_item_count': 97}, {'type': 'item_count_check_failed', 'index_name': 'idx5_5R3lKgP', 'keyspace': '`bucket5`.`_system`.`_query`', 'index_item_count': 0, 'index_pending_mutations': 0, 'kv_item_count': 97}, {'type': 'item_count_check_failed', 'index_name': 'idx11_ZbcOR', 'keyspace': '`bucket5`.`_system`.`_query`', 'index_item_count': 0, 'index_pending_mutations': 0, 'kv_item_count': 97}]
[pull] sequoiatools/indexmanager [2023-11-14T18:48:00-08:00, sequoiatools/indexmanager:99b766] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -a item_count_check --sample_size 10
[pull] sequoiatools/indexmanager [2023-11-14T18:48:20-08:00, sequoiatools/indexmanager:f25ed2] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a item_count_check --sample_size 10
→ Error occurred on container - sequoiatools/indexmanager:[-n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a item_count_check --sample_size 10] docker logs f25ed2 docker start f25ed2
2023-11-14 18:48:21,306 - indexmanager - INFO - Capella flag is set to False.
Use tls flag is set to False
2023-11-14 18:48:21,306 - indexmanager - INFO - Indexes will be chosen at random from the sample statements [{'indexname': 'idx1', 'statement': 'CREATE INDEX `idx1_idxprefix` ON keyspacenameplaceholder(country, DISTINCT ARRAY `r`.`ratings`.`Check in / front desk` FOR r in `reviews` END,array_count((`public_likes`)),array_count((`reviews`)) DESC,`type`,`phone`,`price`,`email`,`address`,`name`,`url`) '}, {'indexname': 'idx2', 'statement': 'CREATE INDEX `idx2_idxprefix` ON keyspacenameplaceholder(`free_breakfast`,`type`,`free_parking`,array_count((`public_likes`)),`price`,`country`)'}, {'indexname': 'idx3', 'statement': 'CREATE INDEX `idx3_idxprefix` ON keyspacenameplaceholder(`free_breakfast`,`free_parking`,`country`,`city`) '}, {'indexname': 'idx4', 'statement': 'CREATE INDEX `idx4_idxprefix` ON keyspacenameplaceholder(`price`,`city`,`name`)'}, {'indexname': 'idx5', 'statement': 'CREATE INDEX `idx5_idxprefix` ON keyspacenameplaceholder(ALL ARRAY `r`.`ratings`.`Rooms` FOR r IN `reviews` END,`avg_rating`)'}, {'indexname': 'idx6', 'statement': 'CREATE INDEX `idx6_idxprefix` ON keyspacenameplaceholder(`city`)'}, {'indexname': 'idx7', 'statement': 'CREATE INDEX `idx7_idxprefix` ON keyspacenameplaceholder(`price`,`name`,`city`,`country`)'}, {'indexname': 'idx8', 'statement': 'CREATE INDEX `idx8_idxprefix` ON keyspacenameplaceholder(DISTINCT ARRAY FLATTEN_KEYS(`r`.`author`,`r`.`ratings`.`Cleanliness`) FOR r IN `reviews` when `r`.`ratings`.`Cleanliness` < 4 END, `country`, `email`, `free_parking`)'}, {'indexname': 'idx9', 'statement': 'CREATE INDEX `idx9_idxprefix` ON keyspacenameplaceholder(ALL ARRAY FLATTEN_KEYS(`r`.`author`,`r`.`ratings`.`Rooms`) FOR r IN `reviews` END, `free_parking`)'}, {'indexname': 'idx10', 'statement': 'CREATE INDEX `idx10_idxprefix` ON keyspacenameplaceholder((ALL (ARRAY(ALL (ARRAY flatten_keys(n,v) FOR n:v IN (`r`.`ratings`) END)) FOR `r` IN `reviews` END)))'}, {'indexname': 'idx11', 'statement': 'CREATE INDEX
`idx11_idxprefix` ON keyspacenameplaceholder(ALL ARRAY FLATTEN_KEYS(`r`.`ratings`.`Rooms`,`r`.`ratings`.`Cleanliness`) FOR r IN `reviews` END, `email`, `free_parking`)'}, {'indexname': 'idx12', 'statement': 'CREATE INDEX `idx12_idxprefix` ON keyspacenameplaceholder(`name` INCLUDE MISSING DESC,`phone`,`type`)'}, {'indexname': 'idx13', 'statement': 'CREATE INDEX `idx13_idxprefix` ON keyspacenameplaceholder(`city` INCLUDE MISSING ASC, `phone`)'}] 2023-11-14 18:48:21,306 - indexmanager - INFO - This is a Server run. Will create cluster object against server 172.23.97.74 with username Administrator password password 2023-11-14 18:48:21,784 - indexmanager - INFO - Results from system:buckets [{'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'ITEM', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:ITEM'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'NEW_ORDER', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:NEW_ORDER'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'WAREHOUSE', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:WAREHOUSE'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket4', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket4'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket5', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket5'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket6', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket6'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 13, 'name': 'bucket7', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket7'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 420, 'name': 'bucket8', 
'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket8'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 402, 'name': 'bucket9', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:bucket9'}}, {'buckets': {'datastore_id': 'http://127.0.0.1:8091', 'manifest_id': 6, 'name': 'default', 'namespace': 'default', 'namespace_id': 'default', 'path': 'default:default'}}] c2023-11-14 18:48:21,784 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default  2023-11-14 18:48:21,872 - indexmanager - INFO - Node map is [{'hostname': '172.23.120.58', 'services': ['index'], 'memUsage': 13.44, 'cpuUsage': 33.34, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.73', 'services': ['kv'], 'memUsage': 32.74, 'cpuUsage': 42.61, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.74', 'services': ['cbas'], 'memUsage': 46.62, 'cpuUsage': 2.41, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.77', 'services': ['kv'], 'memUsage': 36.12, 'cpuUsage': 39.66, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.81', 'services': ['eventing'], 'memUsage': 7.26, 'cpuUsage': 2.39, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.86', 'services': ['kv'], 'memUsage': 35.34, 'cpuUsage': 42.65, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.121.77', 'services': ['kv'], 'memUsage': 31.88, 'cpuUsage': 40.68, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.25', 'services': ['kv'], 'memUsage': 33.78, 'cpuUsage': 42.04, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.26', 'services': ['kv'], 'memUsage': 35.67, 'cpuUsage': 46.0, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.31', 'services': ['index'], 'memUsage': 16.37, 'cpuUsage': 26.42, 'status': 'healthy', 'clusterMembership': 
'active'}, {'hostname': '172.23.123.32', 'services': ['kv'], 'memUsage': 35.8, 'cpuUsage': 40.24, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.33', 'services': ['backup'], 'memUsage': 8.5, 'cpuUsage': 3.69, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.122', 'services': ['fts'], 'memUsage': 31.17, 'cpuUsage': 6.14, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.14', 'services': ['kv'], 'memUsage': 29.44, 'cpuUsage': 40.56, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.243', 'services': ['n1ql'], 'memUsage': 12.74, 'cpuUsage': 22.0, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.254', 'services': ['index'], 'memUsage': 16.15, 'cpuUsage': 28.0, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.48', 'services': ['eventing'], 'memUsage': 7.63, 'cpuUsage': 2.56, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.105', 'services': ['n1ql'], 'memUsage': 12.28, 'cpuUsage': 22.06, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.110', 'services': ['kv'], 'memUsage': 31.02, 'cpuUsage': 42.88, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.112', 'services': ['index'], 'memUsage': 17.12, 'cpuUsage': 34.77, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.148', 'services': ['fts'], 'memUsage': 40.95, 'cpuUsage': 7.45, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.149', 'services': ['cbas'], 'memUsage': 9.09, 'cpuUsage': 1.97, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.241', 'services': ['kv'], 'memUsage': 33.57, 'cpuUsage': 40.7, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.74', 'services': ['kv'], 'memUsage': 37.35, 'cpuUsage': 41.04, 'status': 'healthy', 'clusterMembership': 'active'}] 
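Each "Node map is ..." entry above is derived from the cluster's /pools/default REST endpoint, reducing the full node documents to hostname, services, memory/CPU usage, health status, and cluster membership. A rough stdlib-only sketch of that reduction, assuming the standard /pools/default response shape (function names are illustrative, not the tool's actual code):

```python
import base64
import json
import urllib.request


def summarize_nodes(pools_default):
    """Reduce a /pools/default response body to the per-node summary
    printed in the log above (usage percentages rounded to two places)."""
    node_map = []
    for node in pools_default["nodes"]:
        total = float(node["memoryTotal"])
        free = float(node["memoryFree"])
        node_map.append({
            "hostname": node["hostname"].split(":")[0],
            "services": node["services"],
            "memUsage": round((total - free) * 100.0 / total, 2),
            "cpuUsage": round(node["systemStats"]["cpu_utilization_rate"], 2),
            "status": node["status"],
            "clusterMembership": node["clusterMembership"],
        })
    return node_map


def get_node_map(host, user, password, port=8091):
    """Fetch /pools/default with basic auth and summarize it.
    Error handling is kept minimal for brevity."""
    req = urllib.request.Request("http://{0}:{1}/pools/default".format(host, port))
    token = base64.b64encode("{0}:{1}".format(user, password).encode()).decode()
    req.add_header("Authorization", "Basic " + token)
    with urllib.request.urlopen(req, timeout=30) as resp:
        return summarize_nodes(json.load(resp))
```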
2023-11-14 18:48:21,872 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14 18:48:21,895 - indexmanager - INFO - N1QL nodes ['172.23.96.243', '172.23.97.105'] and Index nodes : ['172.23.120.58', '172.23.123.31', '172.23.96.254', '172.23.97.112']
2023-11-14 18:48:21,895 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14 18:48:21,918 - indexmanager - INFO - Setting Max Replica for this test to : 3
2023-11-14 18:48:31,940 - indexmanager - INFO - ['`bucket7`.`scope_1`.`coll_4`', '`bucket7`.`scope_1`.`coll_3`', '`bucket7`.`scope_1`.`coll_2`', '`bucket7`.`scope_1`.`coll_1`', '`bucket7`.`scope_1`.`coll_0`', '`bucket7`.`scope_0`.`coll_4`', '`bucket7`.`scope_0`.`coll_3`', '`bucket7`.`scope_0`.`coll_2`', '`bucket7`.`scope_0`.`coll_1`', '`bucket7`.`scope_0`.`coll_0`', '`bucket7`.`_default`.`_default`', '`bucket7`.`_system`.`_query`', '`bucket7`.`_system`.`_mobile`']
2023-11-14 18:48:31,941 - indexmanager - INFO - Rest URL is http://172.23.97.74:8091/pools/default
2023-11-14
18:48:32,028 - indexmanager - INFO - Node map is [{'hostname': '172.23.120.58', 'services': ['index'], 'memUsage': 13.5, 'cpuUsage': 33.72, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.73', 'services': ['kv'], 'memUsage': 33.09, 'cpuUsage': 42.04, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.74', 'services': ['cbas'], 'memUsage': 46.69, 'cpuUsage': 3.03, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.77', 'services': ['kv'], 'memUsage': 35.88, 'cpuUsage': 40.98, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.81', 'services': ['eventing'], 'memUsage': 7.36, 'cpuUsage': 3.44, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.120.86', 'services': ['kv'], 'memUsage': 35.45, 'cpuUsage': 43.94, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.121.77', 'services': ['kv'], 'memUsage': 32.11, 'cpuUsage': 43.67, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.25', 'services': ['kv'], 'memUsage': 33.86, 'cpuUsage': 40.16, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.26', 'services': ['kv'], 'memUsage': 35.65, 'cpuUsage': 46.3, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.31', 'services': ['index'], 'memUsage': 16.02, 'cpuUsage': 24.66, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.32', 'services': ['kv'], 'memUsage': 35.89, 'cpuUsage': 41.72, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.123.33', 'services': ['backup'], 'memUsage': 8.64, 'cpuUsage': 4.09, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.122', 'services': ['fts'], 'memUsage': 31.42, 'cpuUsage': 6.5, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.14', 'services': ['kv'], 'memUsage': 29.44, 'cpuUsage': 37.94, 'status': 
'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.243', 'services': ['n1ql'], 'memUsage': 12.78, 'cpuUsage': 22.3, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.254', 'services': ['index'], 'memUsage': 16.52, 'cpuUsage': 28.71, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.96.48', 'services': ['eventing'], 'memUsage': 7.65, 'cpuUsage': 2.7, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.105', 'services': ['n1ql'], 'memUsage': 12.41, 'cpuUsage': 22.27, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.110', 'services': ['kv'], 'memUsage': 30.94, 'cpuUsage': 44.0, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.112', 'services': ['index'], 'memUsage': 17.1, 'cpuUsage': 34.51, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.148', 'services': ['fts'], 'memUsage': 41.03, 'cpuUsage': 8.8, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.149', 'services': ['cbas'], 'memUsage': 9.29, 'cpuUsage': 2.79, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.241', 'services': ['kv'], 'memUsage': 33.64, 'cpuUsage': 41.17, 'status': 'healthy', 'clusterMembership': 'active'}, {'hostname': '172.23.97.74', 'services': ['kv'], 'memUsage': 37.58, 'cpuUsage': 42.76, 'status': 'healthy', 'clusterMembership': 'active'}] w2023-11-14 18:48:32,028 - indexmanager - INFO - URL used for get_index_map is http://172.23.120.58:9102/getIndexStatus 2023-11-14 18:48:32,223 - indexmanager - INFO - Item count for index idx2_31i2 on `bucket7`.`scope_1`.`coll_0` is 5250. Pending Mutations = 0 Total items in collection are 6548 2023-11-14 18:48:32,288 - indexmanager - INFO - Item count for index idx1_uDdsNYN7 on `bucket7`.`scope_1`.`coll_0` is 5250. 
Pending Mutations = 0 Total items in collection are 6548 2023-11-14 18:48:32,305 - indexmanager - INFO - Item count for index idx2_31i2 (replica 1) on `bucket7`.`scope_1`.`coll_0` is 5250. Pending Mutations = 0 Total items in collection are 6548 2023-11-14 18:48:32,349 - indexmanager - INFO - Item count for index idx5_QrdvLq0p (replica 1) on `bucket7`.`scope_0`.`coll_3` is 5250. Pending Mutations = 0 Total items in collection are 26667 2023-11-14 18:48:32,368 - indexmanager - INFO - Item count for index idx1_uDdsNYN7 (replica 2) on `bucket7`.`scope_1`.`coll_0` is 5250. Pending Mutations = 0 Total items in collection are 6548 2023-11-14 18:48:32,401 - indexmanager - INFO - Item count for index idx3_UWyj58nP on `bucket7`.`scope_1`.`coll_2` is 5250. Pending Mutations = 0 Total items in collection are 5439 2023-11-14 18:48:32,418 - indexmanager - INFO - Item count for index idx3_fIQUJ4O (replica 2) on `bucket7`.`scope_1`.`coll_0` is 5250. Pending Mutations = 0 Total items in collection are 6548 2023-11-14 18:48:32,435 - indexmanager - INFO - Item count for index idx3_fIQUJ4O (replica 3) on `bucket7`.`scope_1`.`coll_0` is 5250. Pending Mutations = 0 Total items in collection are 6548 2023-11-14 18:48:32,466 - indexmanager - INFO - Item count for index idx3_UWyj58nP (replica 1) on `bucket7`.`scope_1`.`coll_2` is 5250. Pending Mutations = 0 Total items in collection are 5439 2023-11-14 18:48:32,516 - indexmanager - INFO - Item count for index idx5_QrdvLq0p (replica 2) on `bucket7`.`scope_0`.`coll_3` is 5250. 
Pending Mutations = 0 Total items in collection are 26667
Traceback (most recent call last):
  File "/indexmanager.py", line 1648, in
    indexMgr.item_count_check(indexMgr.sample_size)
  File "/indexmanager.py", line 910, in item_count_check
    raise Exception("There were errors in the item count check phase - \n{0}".format(errors))
Exception: There were errors in the item count check phase - [{'type': 'item_count_check_failed', 'index_name': 'idx2_31i2', 'keyspace': '`bucket7`.`scope_1`.`coll_0`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 6548}, {'type': 'item_count_check_failed', 'index_name': 'idx1_uDdsNYN7', 'keyspace': '`bucket7`.`scope_1`.`coll_0`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 6548}, {'type': 'item_count_check_failed', 'index_name': 'idx2_31i2 (replica 1)', 'keyspace': '`bucket7`.`scope_1`.`coll_0`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 6548}, {'type': 'item_count_check_failed', 'index_name': 'idx5_QrdvLq0p (replica 1)', 'keyspace': '`bucket7`.`scope_0`.`coll_3`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 26667}, {'type': 'item_count_check_failed', 'index_name': 'idx1_uDdsNYN7 (replica 2)', 'keyspace': '`bucket7`.`scope_1`.`coll_0`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 6548}, {'type': 'item_count_check_failed', 'index_name': 'idx3_UWyj58nP', 'keyspace': '`bucket7`.`scope_1`.`coll_2`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 5439}, {'type': 'item_count_check_failed', 'index_name': 'idx3_fIQUJ4O (replica 2)', 'keyspace': '`bucket7`.`scope_1`.`coll_0`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 6548}, {'type': 'item_count_check_failed', 'index_name': 'idx3_fIQUJ4O (replica 3)', 'keyspace': '`bucket7`.`scope_1`.`coll_0`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 6548}, {'type':
'item_count_check_failed', 'index_name': 'idx3_UWyj58nP (replica 1)', 'keyspace': '`bucket7`.`scope_1`.`coll_2`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 5439}, {'type': 'item_count_check_failed', 'index_name': 'idx5_QrdvLq0p (replica 2)', 'keyspace': '`bucket7`.`scope_0`.`coll_3`', 'index_item_count': 5250, 'index_pending_mutations': 0, 'kv_item_count': 26667}] [pull] sequoiatools/indexmanager [2023-11-14T18:48:40-08:00, sequoiatools/indexmanager:c770fc] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket4 -a drop_all_indexes [pull] sequoiatools/indexmanager [2023-11-14T18:49:38-08:00, sequoiatools/indexmanager:c9c373] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a drop_all_indexes [pull] sequoiatools/indexmanager [2023-11-14T18:50:57-08:00, sequoiatools/indexmanager:3b62dd] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -a drop_all_indexes [pull] sequoiatools/indexmanager [2023-11-14T18:52:14-08:00, sequoiatools/indexmanager:5244ae] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a drop_all_indexes [pull] sequoiatools/indexmanager [2023-11-14T18:53:23-08:00, sequoiatools/indexmanager:efccaf] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket8 -a drop_all_indexes [pull] sequoiatools/indexmanager [2023-11-14T18:54:51-08:00, sequoiatools/indexmanager:e0335e] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket9 -a drop_all_indexes [pull] sequoiatools/ftsindexmanager [2023-11-14T18:56:57-08:00, sequoiatools/ftsindexmanager:478320] -n 172.23.96.122 -o 8091 -u Administrator -p password -b default -a item_count_check -vt 2400 [pull] sequoiatools/cmd [2023-11-14T18:57:08-08:00, sequoiatools/cmd:a931ea] 600 [pull] sequoiatools/ftsindexmanager [2023-11-14T19:07:16-08:00, sequoiatools/ftsindexmanager:4a2851] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket4 -a delete_all_indexes [pull] sequoiatools/cmd [2023-11-14T19:07:24-08:00, 
sequoiatools/cmd:bd3cca] 600
[pull] sequoiatools/ftsindexmanager
[2023-11-14T19:17:31-08:00, sequoiatools/ftsindexmanager:5d9054] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket5 -a delete_all_indexes
[pull] sequoiatools/cmd
[2023-11-14T19:17:40-08:00, sequoiatools/cmd:f2fb4e] 600
[pull] sequoiatools/ftsindexmanager
[2023-11-14T19:27:47-08:00, sequoiatools/ftsindexmanager:dcb4cd] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket6 -a delete_all_indexes
[pull] sequoiatools/cmd
[2023-11-14T19:27:56-08:00, sequoiatools/cmd:9f9de9] 600
[pull] sequoiatools/ftsindexmanager
[2023-11-14T19:38:03-08:00, sequoiatools/ftsindexmanager:27d758] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket7 -a delete_all_indexes
[pull] sequoiatools/cmd
[2023-11-14T19:38:14-08:00, sequoiatools/cmd:7462d3] 600
[pull] sequoiatools/cmd
[2023-11-14T19:48:21-08:00, sequoiatools/cmd:eecb05] 1200
[pull] sequoiatools/indexmanager
[2023-11-14T20:08:28-08:00, sequoiatools/indexmanager:9f3058] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket4 -a drop_udf
[pull] sequoiatools/cmd
[2023-11-14T20:10:08-08:00, sequoiatools/cmd:b5a855] 600
########## Cluster config ##################
###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ###########
###### kv : 11 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.97.149:8091] ###########
→ remove ada7da
→ remove c22bb7
→ remove c7702b
→ remove
d7a0ae → remove fb78fa → remove ebe4e1 → remove 4e5491 → remove b208d7 → remove a386fb → remove 9a3f11 → remove e8f854 → remove c3ade3 → remove c608f1 → remove 405c01 → remove 911b60 → remove b8abca → remove 399f95 → remove f7a449 → remove 8d0d51 → remove c95c06 → remove 9c228d → remove 655b30 → remove 53a53f → remove 50b43c → remove 5d4a52 → remove c2dd54 → remove 92d602 → remove bd2083 → remove ccb7ee → remove 0f0d3d → remove 0a32c9 → remove 4400c9 → remove 3e13f7 → remove 097d98 → remove d15f7d → remove 3b1755 → remove 1bd0f6 → remove ccd5d1 → remove 8dcd7d → remove 46e71f → remove cecb42 → remove 1523b1 → remove b7994f → remove d6a8d9 → remove ef9085 → remove 260c9a → remove 948b94 → remove e6b196 → remove 200626 → remove 2db096 → remove 68552c → remove 63d110 → remove 8a3b0e → remove f878fd → remove 1af1ad → remove 593a8e → remove efe053 → remove 065730 → remove 3bd56d → remove 98547a → remove 5134a8 → remove bc2cbf → remove 10e44e → remove 85a2f1 → remove 340372 → remove 1dc76b → remove 4b5d4b → remove 2753c3 → remove 102877 → remove 45f0c2 → remove 78a509 → remove 288cf5 → remove 0c3cae → remove 8d0984 → remove f8f91c → remove f79e19 → remove 7b1d13 → remove d865e4 → remove f03b77 → remove 71b681 → remove 007400 → remove 19a327 → remove 852daa → remove 128995 → remove bffac6 → remove a4957e → remove cba710 → remove 9fa9c6 → remove 58a6e9 → remove f6aba1 → remove e4b71a → remove be9ac4 → remove be5c1b → remove 2c0810 → remove e73e3c → remove 368765 → remove 492c59 → remove b83a79 → remove f45fe2 → remove b1442b → remove e61e79 → remove 4610dc → remove a0b19c → remove 82511d → remove 5ee66f → remove b8ef52 → remove c44ed8 → remove 0daa2e → remove e7f375 → remove b453b9 → remove a358e3 → remove e684a4 → remove 38e662 → remove 16d869 → remove f9f293 → remove a087c7 → remove 007786 → remove c58d26 → remove b4470f → remove 52c960 → remove 99ea26 → remove 5c62be → remove 2df561 → remove 6ccee0 → remove 0e25de → remove 0b61b2 → remove 2e9b64 → remove 9fa1de → remove 
002960 → remove 7a870c → remove 6b3d39 → remove 608cd2 → remove 65f0b8 → remove 086ae0 → remove 4e6689 → remove 19230e → remove 0770f4 → remove ea641c → remove b3d643 → remove 87ca3c → remove 55c804 → remove 314c0d → remove 7dad56 → remove 87165a → remove 899dd0 → remove ad2285 → remove 032dab → remove 253755 → remove 87d3d1 → remove 5626a3 → remove 7f787a → remove 5df5a5 → remove ec0e12 → remove 970fc5 → remove 46b539 → remove 23ab69 → remove 1012e0 → remove c2bb06 → remove 27912d → remove 68aeab → remove a200d5 → remove 092edc → remove 1b0628 → remove 900a2a → remove 2ee2b2 → remove 0e3945 → remove 60d857 → remove 6e1e47 → remove 729b6c → remove ea6cc8 → remove e70421 → remove 6386df → remove 737d67 → remove d863f2 → remove ec2090 → remove 20c1de → remove 7f3e2a → remove b94f56 → remove 6d497e → remove 27be7a → remove c29686 → remove 81ffbf → remove 875825 → remove 644ec6 → remove 4cd47d → remove e42b32 → remove ed81ba → remove 22e65d → remove 48e606 → remove b862f3 → remove 75f42c → remove 305b81 → remove a83db9 → remove 371f8b → remove 9bd792 → remove d1ed83 → remove f1d4f6 → remove a52431 → remove d1216d → remove a4a752 → remove 005f0d → remove 5da51c → remove db06a6 → remove 3a687e → remove 046c19 → remove def2e8 → remove 648946 → remove f98dd4 → remove 823d3a → remove 79a8f4 → remove c15c24 → remove 807216 → remove 0271d6 → remove 95836e → remove 356f32 → remove 6bdc38 → remove fb6438 → remove d6af86 → remove ad32ff → remove cb44d2 → remove 9c4739 → remove 708aab → remove 97d294 → remove eff7d6 → remove d5adce → remove f5fb4e → remove 8bb1d0 → remove 8037ed → remove 9e2726 → remove 4f7b71 → remove 5533b4 → remove 36fa93 → remove aa5379 → remove 113dcd → remove b28d86 → remove e2fe5d → remove 641966 → remove 48e98c → remove f09ec8 → remove 12a041 → remove 23e928 → remove 21d6ce → remove 9edb23 → remove 62d03c → remove fb1e4e → remove a98b11 → remove 57bfb4 → remove 7b1605 → remove 300ee1 → remove 16aee7 → remove 7c3a14 → remove f081bc → remove f83530 → remove 
ae9cd6 → remove d30973 → remove 22de03 → remove ebf9e8 → remove 2d1179 → remove 54d566 → remove b91ded → remove 040f3d → remove 3b563c → remove 589a7c → remove 8ceeba → remove 68eb25 → remove 4e1f02 → remove 94b55f → remove 6be1e5 → remove 3d6686 → remove 790a45 → remove efaff4 → remove 076d38 → remove 276f97 → remove af0835 → remove 8b39fc → remove 9a09bb → remove c8f814 → remove 391eaa → remove 6ce02b → remove ca72aa → remove 627d36 → remove 3b323d → remove 1cabdf → remove 861b8f → remove 47a54a → remove 37eae5 → remove 23728c → remove 2dc22f → remove a01b4c → remove 71dde7 → remove 4d16dc → remove 0ef965 → remove 5bc2dd → remove eed9a1 → remove cae3ce → remove 7b752b → remove d124e9 → remove 4b5795 → remove 3e02e4 → remove 3be15e → remove be2b62 → remove 72dc2d → remove 4e3bb1 → remove b78850 → remove 690d0d → remove ea82a7 → remove 97f0fe → remove 552faa → remove 5b11ae → remove 95f840 → remove dcb673 → remove 8b4873 → remove b84260 → remove d324a6 → remove 45e460 → remove 1f3db6 → remove bbcf10 → remove fd7fd2 → remove a659d2 → remove 64f45b → remove da9e16 → remove e4c430 → remove 45b042 → remove 24f036 → remove fc3999 → remove 692649 → remove fc8887 → remove df931d → remove 5f4c99 → remove 0cb61b → remove 63462c → remove 02d4f7 → remove e05020 → remove aa91dc → remove ea64b1 → remove fac369 → remove 78c8c5 → remove 2276a0 → remove 58fda9 → remove 25e7f6 → remove 5197a9 → remove 99b766 → remove f25ed2 → remove c770fc → remove c9c373 → remove 3b62dd → remove 5244ae → remove efccaf → remove e0335e → remove 478320 → remove a931ea → remove 4a2851 → remove bd3cca → remove 5d9054 → remove f2fb4e → remove dcb4cd → remove 9f9de9 → remove 27d758 → remove 7462d3 → remove eecb05 → remove 9f3058 → remove b5a855 Test cycle: 1 ended after 42540 seconds Test cycle started: 2 → parsed tests/templates/rebalance.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/kv.yml → parsed tests/templates/fts.yml → parsed tests/templates/n1ql.yml → parsed 
tests/templates/multinode_failure.yml → parsed tests/templates/collections.yml [pull] appropriate/curl [2023-11-14T20:20:34-08:00, appropriate/curl:22255f] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.120.58:9102/settings -d {"indexer.plasma.backIndex.enablePageBloomFilter":true} [pull] appropriate/curl [2023-11-14T20:20:44-08:00, appropriate/curl:67af95] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.120.58:9102/settings -d {"indexer.build.enableOSO":true} [pull] appropriate/curl [2023-11-14T20:20:51-08:00, appropriate/curl:586c1c] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.120.58:9102/settings -d {"indexer.settings.rebalance.redistribute_indexes":true} [pull] appropriate/curl [2023-11-14T20:20:56-08:00, appropriate/curl:3833bd] -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.96.122:8094/api/managerOptions -d {"bleveMaxResultWindow":"100000"} [pull] appropriate/curl [2023-11-14T20:21:01-08:00, appropriate/curl:7897d4] -X PUT -u Administrator:password -H Content-Type:application/json http://172.23.96.122:8094/api/managerOptions -d {"bleveMaxClauseCount":"2500"} [pull] appropriate/curl [2023-11-14T20:21:06-08:00, appropriate/curl:afbe51] -X POST -u Administrator:password -H Content-Type:application/json http://172.23.97.74:8091/_p/backup/api/v1/plan/my_plan -d {"name":"my_plan","description":"This plan does backups every 2 days","services":["data","gsi","views","ft","eventing","cbas","query"],"default":false,"tasks":[{"name":"backup-1","task_type":"BACKUP","schedule":{"job_type":"BACKUP","frequency":24,"period":"HOURS","start_now":false},"full_backup":true},{"name":"merge","task_type":"MERGE","schedule":{"job_type":"MERGE","frequency":2,"period":"DAYS","time":"12:00","start_now":false},"merge_options":{"offset_start":0,"offset_end":2},"full_backup":true}]} [pull] appropriate/curl [2023-11-14T20:21:14-08:00, 
appropriate/curl:a2f684] -u Administrator:password -X POST http://172.23.97.74:8091/_p/backup/api/v1/cluster/self/repository/active/my_repo -H Content-Type:application/json -d {"plan": "my_plan", "archive": "/data/archive", "bucket_name":"bucket5"} [pull] sequoiatools/cmd [2023-11-14T20:21:21-08:00, sequoiatools/cmd:e87f9a] 300 [pull] sequoiatools/transactions [2023-11-14T20:26:29-08:00, sequoiatools/transactions:530809] 172.23.97.74 default 1000 [pull] sequoiatools/collections:1.0 [2023-11-14T20:26:35-08:00, sequoiatools/collections:1.0:96a8a9] -i 172.23.97.74:8091 -b bucket8 -o crud_mode --crud_interval=120 --max_scopes=10 --max_collections=100 [pull] sequoiatools/collections:1.0 [2023-11-14T20:26:39-08:00, sequoiatools/collections:1.0:b7a314] -i 172.23.97.74:8091 -b bucket9 -o crud_mode --crud_interval=120 --max_scopes=10 --max_collections=100 [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:26:44-08:00, sequoiatools/pillowfight:7.0:900614] -U couchbase://172.23.97.74/default?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:26:49-08:00, sequoiatools/pillowfight:7.0:3cb3f1] -U couchbase://172.23.97.74/WAREHOUSE?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:26:54-08:00, sequoiatools/pillowfight:7.0:93bb0c] -U couchbase://172.23.97.74/NEW_ORDER?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:26:59-08:00, sequoiatools/pillowfight:7.0:7cb185] -U couchbase://172.23.97.74/ITEM?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:27:04-08:00, sequoiatools/pillowfight:7.0:9f5bb7] -U 
couchbase://172.23.97.74/bucket4?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:27:09-08:00, sequoiatools/pillowfight:7.0:dca3f3] -U couchbase://172.23.97.74/bucket5?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:27:14-08:00, sequoiatools/pillowfight:7.0:7fdd08] -U couchbase://172.23.97.74/bucket6?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:27:19-08:00, sequoiatools/pillowfight:7.0:68892c] -U couchbase://172.23.97.74/bucket7?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:27:24-08:00, sequoiatools/pillowfight:7.0:8830c5] -U couchbase://172.23.97.74/bucket8?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/pillowfight:7.0 [2023-11-14T20:27:29-08:00, sequoiatools/pillowfight:7.0:6c7364] -U couchbase://172.23.97.74/bucket9?select_bucket=true -M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password --durability majority -c -1 --json [pull] sequoiatools/cmd [2023-11-14T20:27:34-08:00, sequoiatools/cmd:77dd97] 600 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T20:38:03-08:00, sequoiatools/couchbase-cli:7.6:d0774d] rebalance -c 172.23.97.74:8091 --server-remove 172.23.96.14:8091 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T20:47:29-08:00, sequoiatools/cmd:e44940] 60 [pull] sequoiatools/cmd [2023-11-14T20:48:36-08:00, sequoiatools/cmd:73c0a8] 600 → parsed tests/eventing/CC/test_eventing_rebalance_integration.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] 
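[editor's note] The pillowfight entries above repeat one command line per bucket, varying only the bucket name. A minimal shell sketch of that pattern, assuming the image's entrypoint is `cbc-pillowfight` (the binary name is not shown in the log); `pillowfight_cmd` is an illustrative helper, while the flags and the bucket list are taken verbatim from the log entries:

```shell
# Build the pillowfight command line used above for a given bucket.
# Flags as in the log: 512 MB cache limit, 2000 items, batch size 200,
# 1 thread, 1000 ops/s cap, majority durability, endless cycles, JSON docs.
pillowfight_cmd() {
    bucket="$1"
    echo "cbc-pillowfight -U couchbase://172.23.97.74/${bucket}?select_bucket=true" \
         "-M 512 -I 2000 -B 200 -t 1 --rate-limit 1000 -P password" \
         "--durability majority -c -1 --json"
}

# Emit the command for every bucket loaded in this phase of the test.
for b in default WAREHOUSE NEW_ORDER ITEM bucket4 bucket5 bucket6 bucket7 bucket8 bucket9; do
    pillowfight_cmd "$b"
done
```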
sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/collections:1.0 [2023-11-14T20:58:45-08:00, sequoiatools/collections:1.0:a8ca16] -i 172.23.97.74:8091 -b default -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T20:58:53-08:00, sequoiatools/collections:1.0:0f91a3] -i 172.23.97.74:8091 -b WAREHOUSE -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T20:59:01-08:00, sequoiatools/collections:1.0:efece6] -i 172.23.97.74:8091 -b NEW_ORDER -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/collections:1.0 [2023-11-14T20:59:09-08:00, sequoiatools/collections:1.0:e6c35f] -i 172.23.97.74:8091 -b ITEM -o create_multi_scope_collection -s event_ -c coll --scope_count=1 --collection_count=4 --collection_distribution=uniform [pull] sequoiatools/eventing:7.0 [2023-11-14T20:59:31-08:00, sequoiatools/eventing:7.0:ada619] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -s default.event_0.coll0 -m ITEM.event_0.coll0 -d dst_bucket.NEW_ORDER.event_0.coll0.rw -t timers -o create --name timers [pull] sequoiatools/eventing:7.0 [2023-11-14T20:59:40-08:00, sequoiatools/eventing:7.0:554ea8] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -s default.event_0.coll0 -m ITEM.event_0.coll1 -d dst_bucket.NEW_ORDER.event_0.coll1.rw -t n1ql -o create --name n1ql [pull] sequoiatools/eventing:7.0 [2023-11-14T20:59:49-08:00, sequoiatools/eventing:7.0:4332c7] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -s WAREHOUSE.event_0.coll0 -m ITEM.event_0.coll2 -d dst_bucket.WAREHOUSE.event_0.coll0.rw -t sbm -o 
create --name sbm [pull] sequoiatools/eventing:7.0 [2023-11-14T20:59:58-08:00, sequoiatools/eventing:7.0:761b62] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -s WAREHOUSE.event_0.coll0 -m ITEM.event_0.coll3 -d dst_bucket.NEW_ORDER.event_0.coll2.rw -t curl -o create --name curl [pull] sequoiatools/eventing:7.0 [2023-11-14T21:00:08-08:00, sequoiatools/eventing:7.0:93535d] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o deploy [pull] sequoiatools/eventing:7.0 [2023-11-14T21:00:11-08:00, sequoiatools/eventing:7.0:8b0937] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o wait_for_state --state deployed ########## Cluster config ################## ###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ########### ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.97.149:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### Test cycle: 1 ended after 160 seconds → parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/analytics.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/analyticsmanager:1.0 [2023-11-14T21:01:28-08:00, sequoiatools/analyticsmanager:1.0:9b51a7] -i 172.23.120.74 -b bucket4,bucket5,bucket6,bucket7 -o 
create_cbas_infra --dv_cnt 4 --ds_cnt 10 --idx_cnt 4 --data_src catapult --syn_cnt 10 -w false --ingestion_timeout 3600 --ds_without_where 2 --api_timeout 3600 [pull] sequoiatools/analyticsmanager:1.0 [2023-11-14T21:02:04-08:00, sequoiatools/analyticsmanager:1.0:be8eb4] -i 172.23.120.74 -b default,WAREHOUSE -o create_cbas_infra --exc_coll _default --dv_cnt 4 --ds_cnt 10 --idx_cnt 4 --data_src gideon --syn_cnt 10 -w false --ingestion_timeout 3600 --ds_without_where 2 --api_timeout 3600 [pull] sequoiatools/cmd [2023-11-14T21:02:41-08:00, sequoiatools/cmd:9741f6] 60 ########## Cluster config ################## ###### kv : 10 ===== > [172.23.120.73:8091 172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ########### ###### cbas : 2 ===== > [172.23.120.74:8091 172.23.97.149:8091] ########### ###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ########### ###### backup : 1 ===== > [172.23.123.33:8091] ########### ###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ########### ###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ########### ###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ########### Test cycle: 1 ended after 141 seconds [pull] sequoiatools/indexmanager [2023-11-14T21:03:49-08:00, sequoiatools/indexmanager:2bc86b] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket4 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T21:04:18-08:00, sequoiatools/indexmanager:060dfd] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T21:04:49-08:00, sequoiatools/indexmanager:e0757f] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager 
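[editor's note] The `build_deferred_index` / `wait_for_idx_build_complete` pair that follows amounts to polling until no index remains in a non-online state. A rough sketch of that idea against the query service, reusing the credentials and N1QL node from the log; `count_from_response` and `wait_for_indexes` are illustrative helpers and not the tool's actual implementation:

```shell
# Extract the single RAW COUNT(*) value from a query-service JSON response.
count_from_response() {
    sed -n 's/.*"results": *\[ *\([0-9][0-9]*\).*/\1/p'
}

# Poll system:indexes until every index reports state = "online".
# Approximates wait_for_idx_build_complete; needs a live cluster to run.
wait_for_indexes() {
    while :; do
        remaining=$(curl -s -u Administrator:password \
            http://172.23.96.243:8093/query/service \
            --data-urlencode 'statement=SELECT RAW COUNT(*) FROM system:indexes WHERE state != "online"' |
            count_from_response)
        [ "$remaining" = "0" ] && break
        sleep 30
    done
}
```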
[2023-11-14T21:05:18-08:00, sequoiatools/indexmanager:c7d925] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a create_udf --num_udf_per_scope=10 [pull] sequoiatools/indexmanager [2023-11-14T21:05:47-08:00, sequoiatools/indexmanager:d781c1] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -i 1 -a create_index [pull] sequoiatools/indexmanager [2023-11-14T21:06:39-08:00, sequoiatools/indexmanager:6620e8] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -i 1 -a create_index [pull] sequoiatools/indexmanager [2023-11-14T21:07:53-08:00, sequoiatools/indexmanager:af07c9] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -i 1 -a create_index [pull] sequoiatools/indexmanager [2023-11-14T21:08:59-08:00, sequoiatools/indexmanager:ea8a5d] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket5 -a build_deferred_index -m 2 [pull] sequoiatools/indexmanager [2023-11-14T21:09:49-08:00, sequoiatools/indexmanager:a96d68] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket6 -a build_deferred_index -m 2 [pull] sequoiatools/indexmanager [2023-11-14T21:10:39-08:00, sequoiatools/indexmanager:96aeb3] -n 172.23.97.74 -o 8091 -u Administrator -p password -b bucket7 -a build_deferred_index -m 2 [pull] sequoiatools/wait_for_idx_build_complete [2023-11-14T21:11:31-08:00, sequoiatools/wait_for_idx_build_complete:e34b65] 172.23.120.58 Administrator password [pull] sequoiatools/ftsindexmanager [2023-11-14T21:12:39-08:00, sequoiatools/ftsindexmanager:7738a6] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket4 -m 1:1:2 -s 1 -a create_index_from_map [pull] sequoiatools/cmd [2023-11-14T21:12:47-08:00, sequoiatools/cmd:9be395] 300 [pull] sequoiatools/ftsindexmanager [2023-11-14T21:18:00-08:00, sequoiatools/ftsindexmanager:31dc79] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket5 -m 1:0:5 -s 1 -a create_index_from_map [pull] sequoiatools/cmd [2023-11-14T21:18:08-08:00, sequoiatools/cmd:f99ac8] 
300 [pull] sequoiatools/ftsindexmanager [2023-11-14T21:23:20-08:00, sequoiatools/ftsindexmanager:8b51da] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket6 -m 1:1:1,1:1:2 -s 1 -a create_index_from_map [pull] sequoiatools/cmd [2023-11-14T21:23:30-08:00, sequoiatools/cmd:4dbef4] 300 [pull] sequoiatools/ftsindexmanager [2023-11-14T21:28:43-08:00, sequoiatools/ftsindexmanager:fed79f] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket7 -m 1:0:2 -s 1 -a create_index_from_map [pull] sequoiatools/gideon2 [2023-11-14T21:28:51-08:00, sequoiatools/gideon2:8f3818] kv --ops 150 --create 80 --delete 20 --get 82 --sizes 64 96 --expire 100 --ttl 3600 --hosts 172.23.97.74 --bucket default --scope event_0 --collection coll0 [pull] sequoiatools/gideon2 [2023-11-14T21:28:56-08:00, sequoiatools/gideon2:2ca264] kv --ops 150 --create 80 --delete 20 --get 82 --sizes 64 96 --expire 100 --ttl 3600 --hosts 172.23.97.74 --bucket WAREHOUSE --scope event_0 --collection coll0 [pull] sequoiatools/catapult [2023-11-14T21:29:01-08:00, sequoiatools/catapult:6712fb] -i 172.23.97.74 -u Administrator -p password -b bucket4 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1 [pull] sequoiatools/catapult [2023-11-14T21:29:06-08:00, sequoiatools/catapult:4049c0] -i 172.23.97.74 -u Administrator -p password -b bucket5 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1 [pull] sequoiatools/catapult [2023-11-14T21:29:11-08:00, sequoiatools/catapult:54ffff] -i 172.23.97.74 -u Administrator -p password -b bucket6 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 300 -fu price,free_parking -ac True --num_threads 1 [pull] sequoiatools/catapult [2023-11-14T21:29:16-08:00, sequoiatools/catapult:055b16] -i 172.23.97.74 -u Administrator -p password -b bucket7 -n 7000 -pc 100 -pu 25 -pd 25 -dt Hotel -de 7200 -ds 1000 -lf True -li 
300 -fu price,free_parking -ac True --num_threads 1 [pull] sequoiatools/queryapp [2023-11-14T21:29:22-08:00, sequoiatools/queryapp:43786a] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket4 --querycount 1 --threads 1 --n1ql True --query_timeout=600 --scan_consistency NOT_BOUNDED --bucket_names [bucket4,bucket5,bucket6,bucket7] --collections_mode --dataset hotel [pull] sequoiatools/queryapp [2023-11-14T21:29:27-08:00, sequoiatools/queryapp:d907fa] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket5 --querycount 3 --threads 3 --n1ql True --query_timeout=600 --scan_consistency REQUEST_PLUS --bucket_names [bucket4,bucket5,bucket6,bucket7] --collections_mode --dataset hotel [pull] sequoiatools/queryapp [2023-11-14T21:29:32-08:00, sequoiatools/queryapp:2b87bd] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket6 --querycount 3 --threads 3 --n1ql True --query_timeout=600 --scan_consistency REQUEST_PLUS --bucket_names [bucket4,bucket5,bucket6,bucket7] --collections_mode --dataset hotel [pull] sequoiatools/queryapp [2023-11-14T21:29:37-08:00, sequoiatools/queryapp:d0d199] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.61/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.96.243 --port 8093 --duration 0 --print_duration=3600 --bucket bucket7 --querycount 3 --threads 3 --n1ql True --query_timeout=600 --scan_consistency REQUEST_PLUS --bucket_names [bucket4,bucket5,bucket6,bucket7] --txns True --dataset hotel [pull] sequoiatools/ftsindexmanager [2023-11-14T21:29:42-08:00, 
sequoiatools/ftsindexmanager:1ab5c1] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket4 --print_interval 600 -a run_queries -t 0 -nq 1 [pull] sequoiatools/ftsindexmanager [2023-11-14T21:29:47-08:00, sequoiatools/ftsindexmanager:f72cff] -n 172.23.96.122 -o 8091 -u Administrator -p password -b bucket5 --print_interval 600 -a run_flex_queries -t 0 -nq 1 [pull] sequoiatools/cmd [2023-11-14T21:29:52-08:00, sequoiatools/cmd:7ca848] 600 [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T21:40:51-08:00, sequoiatools/couchbase-cli:7.6:3de0e2] server-add -c 172.23.97.74:8091 --server-add https://172.23.96.14 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data [pull] sequoiatools/couchbase-cli:7.6 [2023-11-14T21:41:08-08:00, sequoiatools/couchbase-cli:7.6:bfef39] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.73 -u Administrator -p password [pull] sequoiatools/cmd [2023-11-14T22:06:30-08:00, sequoiatools/cmd:a3fb80] 60
→ Error occurred on container - sequoiatools/collections:1.0:[-i 172.23.97.74:8091 -b bucket9 -o crud_mode --crud_interval=120 --max_scopes=10 --max_collections=100]
docker logs b7a314
docker start b7a314
Parsed arguments are:{'host': '172.23.97.74:8091', 'username': 'Administrator', 'password': 'password', 'bucket': 'bucket9', 'operations': 'crud_mode', 'scopename': None, 'collectionname': None, 'count': 1, 'scope_count': 1, 'collection_count': 1, 'collection_distribution': 'uniform', 'max_scopes': 10, 'max_collections': 100, 'crud_timeout': 0, 'crud_interval': 120, 'ignore_scope': [], 'ignore_coll': [], 'capella': False, 'tls': False}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 37 Collection : Operation = create Creating Collection : IpmTKxZA8CpBCSSK in scope pn7T508ZemkL1jSSHlOhsq
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 38 Collection : Operation = drop Deleting collection from scope ciqNVc6PdeSSBtZyRKH4H Deleting Collection : lrujDJqt4PZ1 in scope ciqNVc6PdeSSBtZyRKH4H {'uid': '194'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 04:28:39 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 37 Collection : Operation = drop Deleting collection from scope A02Db7dYCIOeO
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 37 Collection : Operation = create Creating Collection : pELyEb2RSjapdcqPweJ9nPBSed in scope scope_0
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 38 Collection : Operation = drop Deleting collection from scope 3fCFas8AFeJSELrtFHVSUPlwH Deleting Collection : 6Q651zGeFGOmX in scope 3fCFas8AFeJSELrtFHVSUPlwH {'uid': '196'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 04:34:39 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 37 Collection : Operation = create Creating Collection : t1AB1oanhMpExLvxqUqEcaWkKeKZ in scope _default
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 38 Collection : Operation = drop Deleting collection from scope q9TPG6Q7dDTLjMzYiKtKB4 Deleting Collection : S9bzKZW1K6LvamYwA7O in scope q9TPG6Q7dDTLjMzYiKtKB4 {'uid': '198'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 04:38:39 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 37 Collection : Operation = drop Deleting collection from scope ehPBbBICAWLyGukWIcFZl Deleting Collection : vHFAGuaFj9 in scope ehPBbBICAWLyGukWIcFZl {'uid': '199'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 04:40:40 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 36 Collection : Operation = create Creating Collection : pXDh0ZesIlhRaROmKWo8kX2g7QieT in scope ehPBbBICAWLyGukWIcFZl
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', 'cn4oAvYl4iTWn', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 9, collections = 37 Scope : Operation = drop Deleting Scope : cn4oAvYl4iTWn {'uid': '19b'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 04:44:40 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 36 Collection : Operation = drop Deleting collection from scope _default Deleting Collection : WnqAh8ATdur8JRRgP7 in scope _default {'uid': '19c'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 04:46:41 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 35 Collection : Operation = create Creating Collection : ShzSSP30 in scope scope_0
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 36 Collection : Operation = create Creating Collection : b2ghhZ1uQix1NtvQk8zSbkdmAr8F in scope ciqNVc6PdeSSBtZyRKH4H
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 37 Collection : Operation = drop Deleting collection from scope A02Db7dYCIOeO
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 37 Collection : Operation = create Creating Collection : VEWdejjYdj7cByPqZfeaIhw in scope q9TPG6Q7dDTLjMzYiKtKB4
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 38 Collection : Operation = create Creating Collection : GCPoSu9v in scope ehPBbBICAWLyGukWIcFZl
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 39 Collection : Operation = create Creating Collection : pLisQ0Xef0ohzLA8g7 in scope A02Db7dYCIOeO
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 40 Collection : Operation = create Creating Collection : LqssLk3y1gY9qDDG0pu112 in scope 3fCFas8AFeJSELrtFHVSUPlwH
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 41 Collection : Operation = create Creating Collection : YUE0qMNUrXt0 in scope pn7T508ZemkL1jSSHlOhsq
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 42 Collection : Operation = drop Deleting collection from scope q9TPG6Q7dDTLjMzYiKtKB4 Deleting Collection : VTrXLtMPaK3jRED2nRU8TTlcz7 in scope q9TPG6Q7dDTLjMzYiKtKB4 {'uid': '1a4'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 05:04:44 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 41 Collection : Operation = drop Deleting collection from scope ciqNVc6PdeSSBtZyRKH4H Deleting Collection : byxEIwnglvrUwPiEYtI2 in scope ciqNVc6PdeSSBtZyRKH4H {'uid': '1a5'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 05:06:44 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 40 Collection : Operation = create Creating Collection : isTmullPzMBe7oU2Pc in scope _default
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 41 Collection : Operation = drop Deleting collection from scope pn7T508ZemkL1jSSHlOhsq Deleting Collection : YUE0qMNUrXt0 in scope pn7T508ZemkL1jSSHlOhsq {'uid': '1a7'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 05:10:44 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 40 Collection : Operation = drop Deleting collection from scope ciqNVc6PdeSSBtZyRKH4H Deleting Collection : b2ghhZ1uQix1NtvQk8zSbkdmAr8F in scope ciqNVc6PdeSSBtZyRKH4H {'uid': '1a8'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 05:12:44 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 39 Collection : Operation = drop Deleting collection from scope ehPBbBICAWLyGukWIcFZl Deleting Collection : pXDh0ZesIlhRaROmKWo8kX2g7QieT in scope ehPBbBICAWLyGukWIcFZl {'uid': '1a9'} {'cache-control': 'no-cache,no-store,must-revalidate', 'content-length': '13', 'content-type': 'application/json', 'date': 'Wed, 15 Nov 2023 05:14:44 GMT', 'expires': 'Thu, 01 Jan 1970 00:00:00 GMT', 'pragma': 'no-cache', 'server': 'Couchbase Server', 'x-content-type-options': 'nosniff', 'x-frame-options': 'DENY', 'x-permitted-cross-domain-policies': 'none', 'x-xss-protection': '1; mode=block', 'status': '200'}
Curr_scope_list: ['pn7T508ZemkL1jSSHlOhsq', 'A02Db7dYCIOeO', 'ehPBbBICAWLyGukWIcFZl', 'q9TPG6Q7dDTLjMzYiKtKB4', 'ciqNVc6PdeSSBtZyRKH4H', '3fCFas8AFeJSELrtFHVSUPlwH', 'scope_0', '_default'] Existing number of scopes = 8, collections = 38 Collection : Operation = drop Deleting collection from scope _default
collection_ops.run()
File "collectionsUtil.py", line 134, in run
  options.ignore_coll)
File "collectionsUtil.py", line 321, in crud_on_scope_collections
  self.create_scope(bucket, scope_name)
File "collectionsUtil.py", line 196, in create_scope
  self.coll_manager.create_scope(scope)
File "/usr/local/lib/python3.7/site-packages/couchbase/exceptions.py", line 1411, in wrapped
  return func(*args, **kwargs)
File "/usr/local/lib/python3.7/site-packages/couchbase/management/collections.py", line 162, in create_scope
  self._admin_bucket.http_request(**forward_args(kwargs, *options))
File "/usr/local/lib/python3.7/site-packages/couchbase/management/admin.py", line 165, in http_request
  timeout=timeout)
couchbase.exceptions.HTTPException: , Context={'response_code': 404, 'path': '/pools/default/buckets/bucket9/scopes', 'response_body': 'Requested resource not found.\r\n', 'endpoint': '172.23.120.73:8091', 'type': 'HTTPErrorContext'}, Tracing Output={":nokey:0": null}>
[pull] sequoiatools/cmd [2023-11-14T22:07:38-08:00, sequoiatools/cmd:f70024] 600 → parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml → parsed providers/file/centos_second_cluster.yml → parsed providers/file/centos_second_cluster.yml [pull] sequoiatools/couchbase-cli:7.6 Test cycle started: 1 → parsed tests/templates/kv.yml → parsed tests/templates/vegeta.yml → parsed tests/templates/analytics.yml → parsed tests/templates/rebalance.yml [pull] sequoiatools/queryapp [2023-11-14T22:18:04-08:00, sequoiatools/queryapp:0f4f6e] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.21/* /AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.120.74 --port 8095 --duration 0 --bucket bucket4 --querycount 50 -a True --analytics_queries catapult_queries --query_timeout 3600 -B [bucket4,bucket5,bucket6,bucket7] [pull] sequoiatools/queryapp [2023-11-14T22:18:10-08:00, sequoiatools/queryapp:6a4bf2] -J-Xms256m -J-Xmx512m -J-cp /AnalyticsQueryApp/Couchbase-Java-Client-2.7.21/*
/AnalyticsQueryApp/Query/load_queries.py --server_ip 172.23.120.74 --port 8095 --duration 0 --bucket default --querycount 50 -a True --analytics_queries gideon_queries --query_timeout 3600 -B [default,WAREHOUSE]
[pull] sequoiatools/cmd
[2023-11-14T22:18:15-08:00, sequoiatools/cmd:8c3a97] 600
########## Cluster config ##################
###### kv : 10 ===== > [172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### eventing : 2 ===== > [172.23.120.81:8091 172.23.96.48:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.97.149:8091] ###########
Test cycle: 1 ended after 636 seconds
[pull] sequoiatools/cbdozer
[2023-11-14T22:28:24-08:00, sequoiatools/cbdozer:a18fb3] -method POST -duration 0 -rate 10 -url http://Administrator:password@172.23.97.105:8093:8095/query/service -body delete from default where rating > 0 limit 10
[pull] sequoiatools/gideon
[2023-11-14T22:28:28-08:00, sequoiatools/gideon:0ea5cd] kv --ops 500 --create 10 --delete 8 --get 92 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 512 128 1024 2048 16000
[pull] sequoiatools/gideon
[2023-11-14T22:28:33-08:00, sequoiatools/gideon:06fc43] kv --ops 500 --create 100 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 64
[pull] sequoiatools/gideon
[2023-11-14T22:28:38-08:00, sequoiatools/gideon:3402d1] kv --ops 600 --create 15 --get 80 --delete 5 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 128
→ parsed tests/eventing/CC/test_eventing_rebalance_integration.yml
→
parsed providers/file/centos_second_cluster.yml
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/couchbase-cli:7.6
Test cycle started: 1
→ parsed tests/templates/kv.yml
→ parsed tests/templates/vegeta.yml
→ parsed tests/templates/rebalance.yml
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T22:29:19-08:00, sequoiatools/couchbase-cli:7.6:4ba8ff] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.73 -u Administrator -p password --server-add-username Administrator --server-add-password password --services eventing
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T22:29:37-08:00, sequoiatools/couchbase-cli:7.6:626fc4] rebalance -c 172.23.97.74:8091 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T22:37:46-08:00, sequoiatools/cmd:909ce3] 60
[pull] sequoiatools/cmd
[2023-11-14T22:38:54-08:00, sequoiatools/cmd:9896b2] 300
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T22:44:19-08:00, sequoiatools/couchbase-cli:7.6:d15bce] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.73 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T22:47:11-08:00, sequoiatools/cmd:22caa3] 60
[pull] sequoiatools/cmd
[2023-11-14T22:48:18-08:00, sequoiatools/cmd:ba0daa] 300
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T22:54:17-08:00, sequoiatools/couchbase-cli:7.6:0b327b] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.73 -u Administrator -p password --server-add-username Administrator --server-add-password password --services eventing
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T22:54:35-08:00, sequoiatools/couchbase-cli:7.6:83b7bd] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.81 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T22:56:12-08:00, sequoiatools/cmd:fe5a14] 60
########## Cluster config ##################
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091
172.23.96.254:8091 172.23.97.112:8091] ###########
###### eventing : 2 ===== > [172.23.120.73:8091 172.23.96.48:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.97.149:8091] ###########
###### kv : 10 ===== > [172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
Test cycle: 1 ended after 1716 seconds
[pull] sequoiatools/cmd
[2023-11-14T22:57:21-08:00, sequoiatools/cmd:87f117] 600
→ parsed tests/analytics/cheshirecat/test_analytics_integration_scale3.yml
→ parsed providers/file/centos_second_cluster.yml
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/couchbase-cli:7.6
Test cycle started: 1
→ parsed tests/templates/kv.yml
→ parsed tests/templates/vegeta.yml
→ parsed tests/templates/analytics.yml
→ parsed tests/templates/rebalance.yml
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:08:13-08:00, sequoiatools/couchbase-cli:7.6:f3b14f] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.75 -u Administrator -p password --server-add-username Administrator --server-add-password password --services analytics
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:08:31-08:00, sequoiatools/couchbase-cli:7.6:a8246a] rebalance -c 172.23.97.74:8091 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T23:10:12-08:00, sequoiatools/cmd:e8f98b] 60
[pull] sequoiatools/cmd
[2023-11-14T23:11:20-08:00, sequoiatools/cmd:c5e3f7] 30
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:12:26-08:00, sequoiatools/couchbase-cli:7.6:67a6bf] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.75 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T23:13:26-08:00, sequoiatools/cmd:d5d8a9] 60
[pull] sequoiatools/cmd
[2023-11-14T23:14:34-08:00, sequoiatools/cmd:381c42] 30
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:16:06-08:00, sequoiatools/couchbase-cli:7.6:9b7551] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.75 -u Administrator -p password --server-add-username Administrator --server-add-password password --services analytics
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:16:24-08:00, sequoiatools/couchbase-cli:7.6:2cd605] rebalance -c 172.23.97.74:8091 --server-remove 172.23.97.149 -u Administrator -p password
[pull] sequoiatools/cmd
[2023-11-14T23:18:03-08:00, sequoiatools/cmd:6ab1d7] 60
[pull] sequoiatools/cmd
[2023-11-14T23:19:10-08:00, sequoiatools/cmd:7e18fc] 300
########## Cluster config ##################
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ###########
###### eventing : 2 ===== > [172.23.120.73:8091 172.23.96.48:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ###########
###### kv : 10 ===== > [172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
Test cycle: 1 ended after 1008 seconds
[pull] sequoiatools/cmd
[2023-11-14T23:24:18-08:00, sequoiatools/cmd:5a296a] 600
[pull] danihodovic/vegeta
[2023-11-14T23:34:49-08:00, danihodovic/vegeta:5cd49b] bash -c echo GET "http://Administrator:password@172.23.97.74:8092/default/_design/scale/_view/stats?limit=10&stale=update_after&connection_timeout=60000" | vegeta attack -duration=0 -rate=10> results.bin && vegeta report -inputs=results.bin > results.txt && vegeta report -inputs=results.bin -reporter=plot > plot.html
[pull] danihodovic/vegeta
[2023-11-14T23:34:54-08:00, danihodovic/vegeta:514824] bash -c echo GET "http://Administrator:password@172.23.96.14:8092/default/_design/scale/_view/array?limit=10&stale=update_after&connection_timeout=60000" | vegeta attack -duration=0 -rate=10> results.bin && vegeta report -inputs=results.bin > results.txt && vegeta report -inputs=results.bin -reporter=plot > plot.html
[pull] danihodovic/vegeta
[2023-11-14T23:34:59-08:00, danihodovic/vegeta:76f686] bash -c echo GET "http://Administrator:password@172.23.97.241:8092/default/_design/scale/_view/padd?limit=10&stale=update_after&connection_timeout=60000" | vegeta attack -duration=0 -rate=10> results.bin && vegeta report -inputs=results.bin > results.txt && vegeta report -inputs=results.bin -reporter=plot > plot.html
[pull] appropriate/curl
[2023-11-14T23:35:04-08:00, appropriate/curl:356661] -s http://Administrator:password@172.23.97.74:8091/pools/default/remoteClusters
→ parsed tests/eventing/CC/test_eventing_rebalance_integration.yml
→ parsed providers/file/centos_second_cluster.yml
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/couchbase-cli:7.6
Test cycle started: 1
→ parsed tests/templates/kv.yml
→ parsed tests/templates/vegeta.yml
→ parsed tests/templates/rebalance.yml
[pull] sequoiatools/eventing:7.0
[2023-11-14T23:35:13-08:00, sequoiatools/eventing:7.0:54cbe2] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o pause
[pull] sequoiatools/eventing:7.0
[2023-11-14T23:35:22-08:00, sequoiatools/eventing:7.0:fccfc1] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o wait_for_state --state paused
########## Cluster config ##################
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ###########
###### kv : 10 ===== > [172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### backup : 1 ===== >
[172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 4 ===== > [172.23.120.58:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091] ###########
###### eventing : 2 ===== > [172.23.120.73:8091 172.23.96.48:8091] ###########
Test cycle: 1 ended after 37 seconds
[pull] sequoiatools/pillowfight:7.0
[2023-11-14T23:35:50-08:00, sequoiatools/pillowfight:7.0:936214] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password
→ parsed tests/2i/cheshirecat/test_idx_cc_integration.yml
→ parsed providers/file/centos_second_cluster.yml
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/couchbase-cli:7.6
Test cycle started: 1
→ parsed tests/templates/kv.yml
→ parsed tests/templates/n1ql.yml
→ parsed tests/templates/rebalance.yml
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:36:36-08:00, sequoiatools/couchbase-cli:7.6:2b9dc6] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.81 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-14T23:36:52-08:00, sequoiatools/couchbase-cli:7.6:057e71] rebalance -c 172.23.97.74:8091 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 -u Administrator -p password]
docker logs 057e71
docker start 057e71
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
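The "Rebalance failed ... You can try again." entries above are followed by short waits and repeated rebalance attempts. A minimal sketch of such a retry wrapper, with a caller-supplied runner standing in for the real `couchbase-cli` invocation (the function and its parameters are hypothetical, not part of the Sequoia harness):

```python
import time


def rebalance_with_retry(run_cli, cluster, attempts=3, backoff_s=60):
    """Retry a `couchbase-cli rebalance` until it succeeds or attempts run out.

    `run_cli` is any callable taking the argument list and returning the
    process exit code (0 on success), so the logic can be tested with a stub.
    Returns the attempt number that succeeded.
    """
    args = ["rebalance", "-c", cluster, "-u", "Administrator", "-p", "password"]
    for attempt in range(1, attempts + 1):
        if run_cli(args) == 0:
            return attempt
        if attempt < attempts:
            time.sleep(backoff_s)  # mirror the cmd sleeps between attempts
    raise RuntimeError(f"rebalance failed after {attempts} attempts")
```

Passing the runner in keeps the retry policy separate from process execution, which is why it can be exercised without a live cluster.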
[pull] sequoiatools/cmd
[2023-11-14T23:57:47-08:00, sequoiatools/cmd:39aa2b] 60
[pull] sequoiatools/cmd
[2023-11-14T23:58:54-08:00, sequoiatools/cmd:31b297] 300
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:04:28-08:00, sequoiatools/couchbase-cli:7.6:96df91] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.81 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.81 -u Administrator -p password]
docker logs 96df91
docker start 96df91
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
[pull] sequoiatools/cmd
[2023-11-15T00:24:53-08:00, sequoiatools/cmd:69dc35] 60
[pull] sequoiatools/cmd
[2023-11-15T00:26:01-08:00, sequoiatools/cmd:6f9b0c] 300
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:32:10-08:00, sequoiatools/couchbase-cli:7.6:ea00ee] server-add -c 172.23.97.74:8091 --server-add https://172.23.97.149 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:32:27-08:00, sequoiatools/couchbase-cli:7.6:fb9aec] rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.81 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 --server-remove 172.23.120.81 -u Administrator -p password]
docker logs fb9aec
docker start fb9aec
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
[pull] sequoiatools/cmd
[2023-11-15T00:32:53-08:00, sequoiatools/cmd:4d08c2] 60
[pull] sequoiatools/cmd
[2023-11-15T00:34:00-08:00, sequoiatools/cmd:2fa95e] 300
[pull] sequoiatools/cbq
[2023-11-15T00:39:26-08:00, sequoiatools/cbq:15ba03] -e=http://172.23.96.243:8093 -u=Administrator -p=password -script=ALTER INDEX `default`.default_claims WITH {"action":"replica_count","num_replica": 3}
[pull] sequoiatools/cmd
[2023-11-15T00:39:34-08:00, sequoiatools/cmd:51cdfa] 300
[pull] sequoiatools/wait_for_idx_build_complete
[2023-11-15T00:44:46-08:00, sequoiatools/wait_for_idx_build_complete:914e7e] 172.23.120.81 Administrator password
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:46:08-08:00, sequoiatools/couchbase-cli:7.6:df524e] failover -c 172.23.97.74:8091 --server-failover 172.23.123.31:8091 -u Administrator -p password --hard
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:46:53-08:00, sequoiatools/couchbase-cli:7.6:5c8efc] recovery -c 172.23.97.74:8091 --server-recovery 172.23.123.31:8091 --recovery-type full -u Administrator -p password
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:47:01-08:00, sequoiatools/couchbase-cli:7.6:2454b7] rebalance -c 172.23.97.74:8091 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 -u Administrator -p password]
docker logs 2454b7
docker start 2454b7
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
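The `cbq` entries above resize index replicas with an `ALTER INDEX ... WITH {"action":"replica_count", ...}` statement. A small sketch of how such a statement might be assembled from its parts (the helper is hypothetical; it only reproduces the statement shape seen in the log):

```python
import json


def alter_replica_count(keyspace, index, num_replica):
    """Build an N1QL ALTER INDEX statement that changes the replica count."""
    # The WITH clause is a JSON object naming the action and the new count.
    with_clause = json.dumps({"action": "replica_count",
                              "num_replica": num_replica})
    return f"ALTER INDEX `{keyspace}`.{index} WITH {with_clause}"
```

Serialising the WITH clause through `json.dumps` keeps the count an integer and avoids hand-quoting the object.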
[pull] sequoiatools/cmd
[2023-11-15T00:47:53-08:00, sequoiatools/cmd:68fac3] 60
[pull] sequoiatools/cmd
[2023-11-15T00:49:01-08:00, sequoiatools/cmd:13ae83] 300
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:54:31-08:00, sequoiatools/couchbase-cli:7.6:b6d3eb] failover -c 172.23.97.74:8091 --server-failover 172.23.120.58:8091 -u Administrator -p password --hard
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T00:54:53-08:00, sequoiatools/couchbase-cli:7.6:131f09] rebalance -c 172.23.97.74:8091 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 -u Administrator -p password]
docker logs 131f09
docker start 131f09
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
[pull] sequoiatools/cmd
[2023-11-15T00:55:53-08:00, sequoiatools/cmd:7f349e] 60
[pull] sequoiatools/cmd
[2023-11-15T00:57:00-08:00, sequoiatools/cmd:1f526a] 300
[pull] sequoiatools/cbq
[2023-11-15T01:02:26-08:00, sequoiatools/cbq:1d89d4] -e=http://172.23.96.243:8093 -u=Administrator -p=password -script=ALTER INDEX `default`.default_claims WITH {"action":"replica_count","num_replica": 2}
[pull] sequoiatools/cmd
[2023-11-15T01:02:33-08:00, sequoiatools/cmd:96f6a4] 300
[pull] sequoiatools/wait_for_idx_build_complete
[2023-11-15T01:07:45-08:00, sequoiatools/wait_for_idx_build_complete:bd7c1a] 172.23.120.81 Administrator password
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T01:08:11-08:00, sequoiatools/couchbase-cli:7.6:97f010] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services index]
docker logs 97f010
docker start 97f010
ERROR: Prepare join failed. Node is already part of cluster.
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T01:08:20-08:00, sequoiatools/couchbase-cli:7.6:a91ff4] rebalance -c 172.23.97.74:8091 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 -u Administrator -p password]
docker logs a91ff4
docker start a91ff4
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
[pull] sequoiatools/cmd
[2023-11-15T01:08:57-08:00, sequoiatools/cmd:70b87b] 60
[pull] sequoiatools/cmd
[2023-11-15T01:10:05-08:00, sequoiatools/cmd:7ca0c0] 300
########## Cluster config ##################
###### eventing : 2 ===== > [172.23.120.73:8091 172.23.96.48:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ###########
###### kv : 10 ===== > [172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 6 ===== > [172.23.120.58:8091 172.23.120.81:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091 172.23.97.149:8091] ###########
Test cycle: 1 ended after 5953 seconds
[pull] sequoiatools/cmd
[2023-11-15T01:15:13-08:00, sequoiatools/cmd:46953b] 600
→ parsed tests/eventing/CC/test_eventing_rebalance_integration.yml
→ parsed providers/file/centos_second_cluster.yml
→ parsed providers/file/centos_second_cluster.yml
[pull] sequoiatools/couchbase-cli:7.6
Test cycle started: 1
→ parsed tests/templates/kv.yml
→ parsed tests/templates/vegeta.yml
→ parsed tests/templates/rebalance.yml
[pull] sequoiatools/eventing:7.0
[2023-11-15T01:25:41-08:00,
sequoiatools/eventing:7.0:71581b] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o resume
[pull] sequoiatools/eventing:7.0
[2023-11-15T01:25:53-08:00, sequoiatools/eventing:7.0:f727a1] eventing_helper.py -i 172.23.96.48 -u Administrator -p password -o wait_for_state --state deployed
########## Cluster config ##################
###### eventing : 2 ===== > [172.23.120.73:8091 172.23.96.48:8091] ###########
###### cbas : 2 ===== > [172.23.120.74:8091 172.23.120.75:8091] ###########
###### kv : 10 ===== > [172.23.120.77:8091 172.23.120.86:8091 172.23.121.77:8091 172.23.123.25:8091 172.23.123.26:8091 172.23.123.32:8091 172.23.96.14:8091 172.23.97.110:8091 172.23.97.241:8091 172.23.97.74:8091] ###########
###### backup : 1 ===== > [172.23.123.33:8091] ###########
###### fts : 2 ===== > [172.23.96.122:8091 172.23.97.148:8091] ###########
###### n1ql : 2 ===== > [172.23.96.243:8091 172.23.97.105:8091] ###########
###### index : 6 ===== > [172.23.120.58:8091 172.23.120.81:8091 172.23.123.31:8091 172.23.96.254:8091 172.23.97.112:8091 172.23.97.149:8091] ###########
Test cycle: 1 ended after 75 seconds
[pull] sequoiatools/pillowfight:7.0
[2023-11-15T01:26:37-08:00, sequoiatools/pillowfight:7.0:b43c04] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T01:27:06-08:00, sequoiatools/couchbase-cli:7.6:46cef8] server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[server-add -c 172.23.97.74:8091 --server-add https://172.23.120.58 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data]
docker logs 46cef8
docker start 46cef8
ERROR: Prepare join failed. Node is already part of cluster.
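The recurring "Cluster config" dumps use a fixed `###### <service> : <n> ===== > [host ...] ###########` line shape, which makes them easy to fold into a dict when auditing a run. A minimal parser, assuming exactly that shape (the function name is ours, not part of the harness):

```python
import re


def parse_cluster_config(text):
    """Parse '###### <service> : <n> ===== > [host1 host2 ...]' dump lines
    into a {service: [hosts]} mapping."""
    services = {}
    pattern = r"######\s+(\w+)\s+:\s+(\d+)\s+=====\s+>\s+\[([^\]]*)\]"
    for svc, count, hosts in re.findall(pattern, text):
        services[svc] = hosts.split()
        if len(services[svc]) != int(count):
            raise ValueError(f"host count mismatch for service {svc!r}")
    return services
```

The declared count is cross-checked against the host list, so a truncated dump fails loudly instead of silently losing nodes.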
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T01:27:23-08:00, sequoiatools/couchbase-cli:7.6:e08928] failover -c 172.23.97.74:8091 --server-failover 172.23.96.14:8091 -u Administrator -p password --hard
[pull] sequoiatools/couchbase-cli:7.6
[2023-11-15T01:27:33-08:00, sequoiatools/couchbase-cli:7.6:33852b] rebalance -c 172.23.97.74:8091 -u Administrator -p password
→ Error occurred on container - sequoiatools/couchbase-cli:7.6:[rebalance -c 172.23.97.74:8091 -u Administrator -p password]
docker logs 33852b
docker start 33852b
Unable to display progress bar on this os
ERROR: Rebalance failed. See logs for detailed reason. You can try again.
[pull] sequoiatools/cmd
[2023-11-15T01:27:58-08:00, sequoiatools/cmd:3e52d9] 60
[pull] sequoiatools/cmd
[2023-11-15T01:29:06-08:00, sequoiatools/cmd:afdd34] 600
[pull] appropriate/curl
[2023-11-15T01:39:13-08:00, appropriate/curl:0c2a0b] -u Administrator:password -X POST http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/bucket8/bucket8 -d pauseRequested=true
[pull] sequoiatools/cmd
[2023-11-15T01:39:21-08:00, sequoiatools/cmd:8d5caa] 300
[pull] appropriate/curl
[2023-11-15T01:44:28-08:00, appropriate/curl:81020f] -u Administrator:password -X POST http://172.23.97.74:8091/settings/replications/7a8827a7394cecfa8f5860085bee6dcd/bucket8/bucket8 -d pauseRequested=false
[pull] sequoiatools/gideon
[2023-11-15T01:44:36-08:00, sequoiatools/gideon:19b5a7] kv --ops 500 --create 100 --expire 100 --ttl 660 --hosts 172.23.97.74 --bucket default --sizes 64
[pull] sequoiatools/pillowfight:7.0
[2023-11-15T01:44:41-08:00, sequoiatools/pillowfight:7.0:e09491] -U couchbase://172.23.97.74/default?select_bucket=true -I 1000 -B 100 -t 4 -c 100 -P password
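The two `curl` entries above pause and then resume an XDCR replication by POSTing `pauseRequested` to the replication-settings endpoint. A sketch of how that request might be composed (the helper only builds the URL and form body as seen in the log; nothing is sent, and the function name is hypothetical):

```python
def xdcr_pause_request(host, replication_id, source_bucket, target_bucket,
                       pause):
    """Compose the REST call used to pause or resume an XDCR replication.

    Returns (url, form_data) matching the curl invocations in the log:
    POST /settings/replications/<id>/<source>/<target> -d pauseRequested=...
    """
    url = (f"http://{host}:8091/settings/replications/"
           f"{replication_id}/{source_bucket}/{target_bucket}")
    data = {"pauseRequested": "true" if pause else "false"}
    return url, data
```

Keeping the composition separate from the HTTP client makes the pause/resume pair symmetric: only the boolean flips between the two calls.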