Details
- Type: Bug
- Resolution: Fixed
- Priority: Blocker
- Fix Version: 2.0-developer-preview-4
- Security Level: Public
- Labels: None
Description
Created a 10-node cluster. Created a view with the following map/reduce definition:
{"map":"function (doc) {\n  emit(doc._id, null);\n}","reduce":"_count"}
Uploaded 100k JSON items using mcsoda and queried the view with stale=false; the result was correct. Then started removing nodes from the cluster one by one while running view queries. After the second node was removed, the view started returning more than 100k rows. I tracked all of the duplicated rows down to a single node, and on that node they all come from three vbuckets: 215, 216, and 217. There was a period of time when these vbuckets were reported by the set views as both passive and replica partitions:
Set view `default`, main group `_design/dev_test`, partition states updated
active partitions before: [73,74,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,101,102,103,240,241,242]
active partitions after: [73,74,78,79,80,81,82,83,84,85,86,87,88,89,90,91,92,93,94,95,96,97,98,101,102,103,240,241,242]
passive partitions before: [215,216,217]
passive partitions after: [215,216,217]
cleanup partitions before: []
cleanup partitions after: []
replica partitions before: [6,7,8,32,33,34,58,59,60,113,114,115,127,139,140,141,155,164,165,188,189,190,208,211,214,215,216,217,233,236,239,244,249]
replica partitions after: [6,7,8,32,33,34,58,59,60,113,114,115,127,139,140,141,155,164,165,188,189,190,208,211,214,215,216,217,233,236,239,244,249]
replicas on transfer before: [215,216,217]
replicas on transfer after: [215,216,217]
The sequence of calls performed by ns_server seems to be correct. I'm attaching full logs and a diag from this node.
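For anyone triaging similar reports, duplicates like these can be grouped by vbucket directly from the view query output. The sketch below is illustrative only: the CRC32-based key-to-vbucket mapping mirrors the scheme Couchbase clients use, and the 256-vbucket count is an assumption (the log above only shows partition ids up to 249); neither value is taken from this ticket.

```python
import zlib
from collections import Counter, defaultdict

NUM_VBUCKETS = 256  # assumption; adjust to the cluster's actual vbucket count


def key_to_vbucket(key: str, num_vbuckets: int = NUM_VBUCKETS) -> int:
    # Couchbase-style mapping: CRC32 of the key, upper 15 bits of the
    # high word, modulo the vbucket count (assumed to match the server).
    return ((zlib.crc32(key.encode("utf-8")) >> 16) & 0x7FFF) % num_vbuckets


def duplicate_rows_by_vbucket(rows):
    # rows: view result rows, each a dict with an "id" field (the doc id).
    # Returns {vbucket_id: [doc ids emitted more than once]}.
    counts = Counter(row["id"] for row in rows)
    dups = defaultdict(list)
    for key, n in counts.items():
        if n > 1:
            dups[key_to_vbucket(key)].append(key)
    return dict(dups)


# Example: "doc-1" appears twice, so it is reported under its vbucket.
rows = [{"id": "doc-1"}, {"id": "doc-2"}, {"id": "doc-1"}]
print(duplicate_rows_by_vbucket(rows))
```

If every duplicated id lands in the same handful of vbuckets (here 215, 216, 217), that points at a partition-state problem on the node owning them rather than at the indexer itself.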