Details
Description
Steps:
- Create a 2-node cluster (data nodes only): 172.23.107.237, 172.23.107.232
- Create a couchstore bucket
- Enable migration from couchstore to magma using the API below:
curl -v -X POST -u Administrator:password http://172.23.107.232:8091/pools/default/buckets/default2 -d 'name=default2&storageBackend=magma'
- Rebalance in a node (172.23.107.126)
[root@se1701-cnt7 ~]# curl -u Administrator:password http://172.23.107.237:8091/pools/default/buckets/default2/ | jq '.nodes[].storageBackend'
"couchstore"
null
"couchstore"
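The per-node check above can be wrapped in a small helper that flags nodes still pending migration. A minimal sketch in Python, assuming the bucket JSON has the shape implied by the jq query (`.nodes[].storageBackend`); the `hostname` field per node entry is an assumption here:

```python
import json

def nodes_pending_migration(bucket_json: str):
    """Return hostnames of nodes whose storageBackend is not yet "magma".

    A null/missing storageBackend is treated as pending, matching the
    `null` entries seen in the jq output above.
    """
    bucket = json.loads(bucket_json)
    pending = []
    for node in bucket.get("nodes", []):
        # "hostname" per node entry is an assumption, not confirmed by the jq output
        if node.get("storageBackend") != "magma":
            pending.append(node.get("hostname", "<unknown>"))
    return pending

# Example mirroring the state seen after the rebalance-in step
# ("couchstore", null, "couchstore"):
sample = json.dumps({"nodes": [
    {"hostname": "172.23.107.237:8091", "storageBackend": "couchstore"},
    {"hostname": "172.23.107.126:8091", "storageBackend": None},
    {"hostname": "172.23.107.232:8091", "storageBackend": "couchstore"},
]})
print(nodes_pending_migration(sample))
```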
- Rebalance out a node (rebalanced-out node: 172.23.107.232)
[root@se1701-cnt7 ~]# curl -u Administrator:password http://172.23.107.237:8091/pools/default/buckets/default2/ | jq '.nodes[].storageBackend'
"couchstore"
null
- Hard failover + full recovery of the remaining node (172.23.121.237), then rebalance
[root@se1701-cnt7 ~]# curl -u Administrator:password http://172.23.107.237:8091/pools/default/buckets/default2/ | jq '.nodes[].storageBackend'
null
null
- Try enabling CDC on the migrated bucket (storageBackend=magma)
curl localhost:8091/pools/default/buckets/default2 -u Administrator:password -X POST -d historyRetentionCollectionDefault=true
Cannot update bucket while storage mode is being migrated.
curl localhost:8091/pools/default/buckets/default2 -u Administrator:password -X POST -d historyRetentionSeconds=13600
Cannot update bucket while storage mode is being migrated.
I also tried updating the replica count, but got the same error.
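Since every bucket update fails with the same message while migration is in progress, a caller could treat it as a transient error and retry. A minimal sketch, assuming detection is just a string match on the error text above; `post_bucket_update` is a hypothetical stand-in for the curl POST:

```python
import time

MIGRATION_ERROR = "Cannot update bucket while storage mode is being migrated."

def is_migration_blocked(response_text: str) -> bool:
    # The server returns this exact message while the storage-mode
    # migration is still in progress (see the curl output above).
    return MIGRATION_ERROR in response_text

def retry_until_unblocked(post_bucket_update, attempts=5, delay=1.0):
    """Retry a bucket update while the migration-in-progress error persists.

    `post_bucket_update` is a hypothetical callable performing the POST
    (e.g. the curl call above) and returning the response body.
    """
    for _ in range(attempts):
        body = post_bucket_update()
        if not is_migration_blocked(body):
            return body
        time.sleep(delay)
    raise TimeoutError(f"bucket still migrating after {attempts} attempts")
```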
Note: When I tried hard failover + full recovery of both nodes (one at a time), I did not see the above issue.
System is still in the same state: http://172.23.107.237:8091/ui/index.html