Details
- Type: Bug
- Resolution: Duplicate
- Priority: Critical
- Fix Version: None
- Affects Version: 7.1.0
- Triage: Untriaged
- Story Points: 1
- Is this a Regression?: Unknown
Description
Build: 7.1.0-2298
Test: -test tests/integration/neo/test_neo_magma_milestone4.yml -scope tests/integration/neo/scope_neo_magma.yml
Scale: 3
Iteration: 2nd
In the Magma longevity test, a rebalance operation involving multiple KV nodes was run:
[2022-02-15T06:44:29-08:00, sequoiatools/couchbase-cli:7.1:e43442] server-add -c 172.23.108.139:8091 --server-add https://172.23.108.141 -u Administrator -p password --server-add-username Administrator --server-add-password password --services data
→ Error while waiting for container:%!(EXTRA *docker.NoSuchContainer=No such container: bdf0c28718f2f95537e9b566f6d64649d29e8b95d290ef4ace4001366bd374de)
[2022-02-15T06:45:21-08:00, sequoiatools/couchbase-cli:7.1:bcdf59] failover -c 172.23.108.139:8091 --server-failover 172.23.107.236:8091 -u Administrator -p password
[2022-02-15T06:47:02-08:00, sequoiatools/couchbase-cli:7.1:08f7c9] failover -c 172.23.108.139:8091 --server-failover 172.23.108.143:8091 -u Administrator -p password --hard
[2022-02-15T06:47:12-08:00, sequoiatools/couchbase-cli:7.1:41756c] rebalance -c 172.23.108.139:8091 -u Administrator -p password

2022-02-15T08:01:08-08:00 - ERROR: Rebalance failed. See logs for detailed reason. You can try again.
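For triage, the failure reason can also be pulled programmatically instead of grepping error.log. Below is a minimal Go sketch, assuming the documented ns_server REST task list at /pools/default/tasks; the cluster address and credentials are the ones used by the test above, and errorMessage is only present once a rebalance has failed:

// Sketch: query ns_server's task list and print the rebalance
// status plus the error text that ns_server also writes to its logs.
package main

import (
	"encoding/json"
	"fmt"
	"net/http"
)

func main() {
	req, err := http.NewRequest("GET", "http://172.23.108.139:8091/pools/default/tasks", nil)
	if err != nil {
		panic(err)
	}
	req.SetBasicAuth("Administrator", "password")
	resp, err := http.DefaultClient.Do(req)
	if err != nil {
		panic(err)
	}
	defer resp.Body.Close()

	// The endpoint returns a JSON array of task objects.
	var tasks []map[string]interface{}
	if err := json.NewDecoder(resp.Body).Decode(&tasks); err != nil {
		panic(err)
	}
	for _, t := range tasks {
		if t["type"] == "rebalance" {
			fmt.Printf("rebalance status=%v error=%v\n", t["status"], t["errorMessage"])
		}
	}
}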
This rebalance failed due to an error in the index service. Per the error.log on 172.23.108.139:

[ns_server:error,2022-02-15T08:01:00.616-08:00,ns_1@172.23.108.139:service_rebalancer-index<0.17637.949>:service_rebalancer:run_rebalance_worker:119]Worker terminated abnormally: {'EXIT',<0.19195.949>,
{rebalance_failed,
{service_error,
<<"{\"16977870706542120525\":\"Error Connecting KV 127.0.0.1:8091 Err MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: \",\"17681762282016610388\":\"Error Connecting KV 127.0.0.1:8091 Err MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: \"}">>}}}

[user:error,2022-02-15T08:01:00.618-08:00,ns_1@172.23.108.139:<0.26787.0>:ns_orchestrator:log_rebalance_completion:1428]Rebalance exited with reason {service_rebalance_failed,index,
{worker_died,
{'EXIT',<0.19195.949>,
{rebalance_failed,
{service_error,
<<"{\"16977870706542120525\":\"Error Connecting KV 127.0.0.1:8091 Err MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: \",\"17681762282016610388\":\"Error Connecting KV 127.0.0.1:8091 Err MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: \"}">>}}}}}.

Rebalance Operation Id = f8f994b26de6f2c5aaca397cde77f825
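Note that the bytestring inside service_error is itself JSON: a map from what appear to be index instance IDs (our reading; the IDs are opaque here) to per-instance error strings, identical in this case. A minimal Go sketch that unpacks the payload quoted above:

// Sketch: decode the service_error payload into its per-instance errors.
package main

import (
	"encoding/json"
	"fmt"
)

func main() {
	// Payload copied verbatim from the ns_server error above.
	payload := `{"16977870706542120525":"Error Connecting KV 127.0.0.1:8091 Err MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: ","17681762282016610388":"Error Connecting KV 127.0.0.1:8091 Err MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: "}`

	var errs map[string]string
	if err := json.Unmarshal([]byte(payload), &errs); err != nil {
		panic(err)
	}
	for id, msg := range errs {
		fmt.Printf("%s -> %s\n", id, msg)
	}
}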
From the indexer.log on 172.23.104.249, we see the following errors before the rebalance terminated:
2022-02-15T08:00:47.774-08:00 [Info] KVSender::sendRestartVbuckets Projector 172.23.108.134:9999 Topic MAINT_STREAM_TOPIC_aeb444ff59d110e8567b54a591940ba7 bucket6 bucket6
2022-02-15T08:00:47.777-08:00 [Error] KVSender::sendRestartVbuckets Unexpected Error During Restart Vbuckets Request for Projector 172.23.108.134:9999 Topic MAINT_STREAM_TOPIC_aeb444ff59d110e8567b54a591940ba7 bucket6 bucket6 . Err feed.invalidBucket.
2022-02-15T08:00:47.777-08:00 [Error] KVSender::restartVbuckets MAINT_STREAM bucket6 Error Received feed.invalidBucket from 172.23.108.134:9999
2022-02-15T08:00:47.833-08:00 [Error] GetMcConn(): MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg:
2022-02-15T08:00:47.835-08:00 [Warn] Indexer::getCurrentKVTs error=MCResponse status=KEY_ENOENT, opcode=0x89, opaque=0, msg: Retrying (2)
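To decode the MCResponse in these lines: in the Couchbase memcached binary protocol, opcode 0x89 is SELECT_BUCKET, and KEY_ENOENT on SELECT_BUCKET means the named bucket does not exist on that node, which is consistent with the feed.invalidBucket errors from the projector. A small Go sketch making the decoding explicit (the constant names below are ours, redeclared locally rather than imported from github.com/couchbase/gomemcached):

package main

import "fmt"

// Subset of memcached binary-protocol constants relevant to the log lines above.
const (
	opcodeSelectBucket = 0x89 // SELECT_BUCKET
	statusKeyEnoent    = 0x01 // KEY_ENOENT
)

// describe renders an opcode/status pair as a human-readable explanation.
func describe(opcode, status int) string {
	if opcode == opcodeSelectBucket && status == statusKeyEnoent {
		return "SELECT_BUCKET failed with KEY_ENOENT: bucket not present on this node"
	}
	return fmt.Sprintf("opcode=0x%02x status=0x%02x", opcode, status)
}

func main() {
	// Values from "MCResponse status=KEY_ENOENT, opcode=0x89" above.
	fmt.Println(describe(0x89, 0x01))
}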
Index nodes: 172.23.104.249, 172.23.105.0, 172.23.105.39, 172.23.106.54, 172.23.108.136, 172.23.108.138
Issue Links
- duplicates MB-50967: [System Test][Magma] Memcached crash along with service exit observed in longevity (Closed)