Details
Description
7-node cluster, 1 bucket, 5 ddocs, 5 views, 200K items.
Fail over 2 of the nodes and perform a rebalance. The rebalance finishes successfully.
Right after the rebalance, one node (ns_1@10.2.2.109) reported that all the other nodes went down, and all the other nodes reported that this node went down:
2012-08-09 08:02:56.309 ns_orchestrator:1:info:message(ns_1@10.2.2.60) - Rebalance completed successfully.
2012-08-09 08:04:46.786 mb_master:0:info:message(ns_1@10.2.2.109) - Haven't heard from a higher priority node or a master, so I'm taking over.
2012-08-09 08:05:40.427 ns_node_disco:5:warning:node down(ns_1@10.2.2.63) - Node 'ns_1@10.2.2.63' saw that node 'ns_1@10.2.2.109' went down.
2012-08-09 08:05:40.435 ns_node_disco:5:warning:node down(ns_1@10.2.2.60) - Node 'ns_1@10.2.2.60' saw that node 'ns_1@10.2.2.109' went down.
2012-08-09 08:05:41.155 ns_node_disco:5:warning:node down(ns_1@10.2.2.64) - Node 'ns_1@10.2.2.64' saw that node 'ns_1@10.2.2.109' went down.
2012-08-09 08:05:42.392 ns_node_disco:5:warning:node down(ns_1@10.2.2.109) - Node 'ns_1@10.2.2.109' saw that node 'ns_1@10.2.2.64' went down.
2012-08-09 08:05:42.393 ns_node_disco:5:warning:node down(ns_1@10.2.2.109) - Node 'ns_1@10.2.2.109' saw that node 'ns_1@10.2.2.60' went down.
2012-08-09 08:05:42.395 ns_node_disco:5:warning:node down(ns_1@10.2.2.109) - Node 'ns_1@10.2.2.109' saw that node 'ns_1@10.2.2.65' went down.
2012-08-09 08:05:42.396 ns_node_disco:5:warning:node down(ns_1@10.2.2.109) - Node 'ns_1@10.2.2.109' saw that node 'ns_1@10.2.2.63' went down.
2012-08-09 08:05:42.442 ns_node_disco:5:warning:node down(ns_1@10.2.2.65) - Node 'ns_1@10.2.2.65' saw that node 'ns_1@10.2.2.109' went down.
I only have diags from 2 of the nodes.