Details
Description
source:
172.23.105.158
172.23.105.156
172.23.105.157
172.23.105.22
destination:
172.23.105.160
172.23.105.159
172.23.105.206
172.23.105.207
steps:
1) load data for about 6 hours on the source cluster
2) setup replication from cluster A to cluster B (one replication per bucket) - PASS
3) rebalance in at cluster A - PASS
rebalance in at cluster B - PASS
4) click Failover, Graceful Failover (rebalance) for a node in cluster A, add back (Delta Recovery) and rebalance - PASS
5) click Failover, Hard Failover for a node in cluster A, add back (Full Recovery) and rebalance - PASS
6) remove a node in cluster A, stop the rebalance, cancel the removal, and rebalance - PASS
7) rebalance out 1 node on cluster A - PASS
8) rebalance out 1 node on cluster B - PASS
9) rebalance in 2 nodes on cluster A - PASS
10) autofailover 1 node via reboot on cluster A - PASS
11) rebalance in 1 node on cluster B, stop rebalance, remove the node and rebalance - PASS
12) rebalance in 1 node on cluster B - PASS
13) autofailover 1 node via reboot on cluster B - PASS
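For reference, the per-bucket replication setup in step 2 can be scripted with couchbase-cli. This is only a sketch: the credentials and the remote-reference name "clusterB" are placeholder assumptions, not from the report; the node IPs are the ones listed above.

```shell
# Sketch of the "bucket per bucket" XDCR setup from step 2.
# Credentials and the cluster-reference name "clusterB" are hypothetical.

SRC=172.23.105.156:8091
DST=172.23.105.159:8091
AUTH="-u Administrator -p password"   # assumed credentials

# Create a remote cluster reference on cluster A pointing at cluster B.
couchbase-cli xdcr-setup -c "$SRC" $AUTH --create \
    --xdcr-cluster-name clusterB \
    --xdcr-hostname "$DST" \
    --xdcr-username Administrator --xdcr-password password

# One replication per bucket, source bucket to same-named destination bucket.
for bucket in AbRegNums MsgsCalls RevAB UserInfo; do
    couchbase-cli xdcr-replicate -c "$SRC" $AUTH --create \
        --xdcr-cluster-name clusterB \
        --xdcr-from-bucket "$bucket" \
        --xdcr-to-bucket "$bucket"
done
```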
All these steps were performed during a continuous data load. Afterwards I found that the destination cluster contains only about ~70% of the source cluster's items. I then stopped all activities and the data load.
not a DGM (disk-greater-than-memory) scenario
both clusters are still alive
source: http://172.23.105.156:8091/
Bucket     Nodes  Item Count  Ops/sec  Disk Fetches/sec  RAM Used / Quota  Data / Disk Usage
AbRegNums  4      416733      0        0                 1009MB / 1.95GB   888MB / 1.19GB
MsgsCalls  4      5697        0        0                 117MB / 1.17GB    117MB / 132MB
RevAB      4      9482971     0        0                 2.24GB / 17.5GB   1.74GB / 2.1GB
UserInfo   4      75770       0        0                 109MB / 1.17GB    184MB / 199MB
destination: http://172.23.105.159:8091/
Bucket     Nodes  Item Count  Ops/sec  Disk Fetches/sec  RAM Used / Quota  Data / Disk Usage
AbRegNums  3      372374      0        0                 1.02GB / 1.46GB   793MB / 997MB
MsgsCalls  3      4145        0        0                 83.7MB / 900MB    51.2MB / 59.1MB
RevAB      3      6612345     0        0                 1.58GB / 13.1GB   1.21GB / 1.62GB
UserInfo   3      66273       0        0                 98.6MB / 900MB    153MB / 158MB
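A quick ratio check of the item counts above quantifies the replication gap; the bucket names and counts are taken verbatim from the stats:

```python
# Destination vs. source item counts per bucket, from the stats above.
counts = {
    "AbRegNums": (416733, 372374),
    "MsgsCalls": (5697, 4145),
    "RevAB":     (9482971, 6612345),
    "UserInfo":  (75770, 66273),
}

for bucket, (src, dst) in counts.items():
    pct = 100.0 * dst / src
    print(f"{bucket}: {pct:.1f}% of source items on destination")
```

RevAB, by far the largest bucket, sits at about 69.7% of the source item count, which matches the "~70%" observation; the smaller buckets range from roughly 73% to 89%.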