This is a corner case touching the dark side of bucket identity management in XDCR (probably a generic issue in Couchbase Server).
This happens only when
1) you delete a bucket during an ongoing inbound replication, without deleting the replication first, and immediately create another bucket with the same name
2) after re-creating the bucket with the same name, you load some data on the source to wake up some vb replicators
IMHO, the expected behavior is that no replication should happen after you re-create the bucket: even though it has the same name, it is considered a completely different bucket, so no replication should resume.
Today XDCR does not check the UUID to make sure the remote cluster is still the old one when a replicator is initialized. Instead, it simply tries to fetch the remote cluster info and start replicating. In short, we do not maintain the identity of the remote cluster across the whole XDCR replication, although we do maintain it within a single vb replicator's run. In the test case filed by Deepkaran, data loading at step 7 woke up a few vb replicators at C2, and they resumed replication without checking whether the bucket identity had changed on the other side.
Some thoughts on fixing the issue:
First of all, today we should recommend or require users to delete a bucket only AFTER they have deleted all XDCR replications targeting that bucket. It is not clear to me how to identify an inbound XDCR replication for a bucket, since the incoming traffic from XDCR is just a stream of setMeta/getMeta/deleteWithMeta operations.
Second, we should probably maintain the remote bucket identity for the whole XDCR replication, instead of per vb replicator. Say, we can store the UUID of the remote bucket when the XDCR replication is created, and each time a vb replicator is initialized, check that the remote bucket UUID has not changed.
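The idea above could look roughly like the following sketch. This is purely illustrative Python (the real code would live in the Erlang XDCR/remote_cluster_info modules); all names here (RemoteCluster, get_bucket_uuid, XdcrReplication, init_vb_replicator) are assumptions for the sketch, not actual APIs.

```python
# Hedged sketch of the proposed fix: capture the remote bucket's UUID once,
# when the XDCR replication is created, and re-verify it every time a vb
# replicator is initialized. All class/method names are hypothetical.

class BucketUUIDMismatch(Exception):
    """Remote bucket was deleted and re-created under the same name."""

class RemoteCluster:
    """Stand-in for remote_cluster_info: maps bucket name -> current UUID."""
    def __init__(self, buckets):
        self.buckets = dict(buckets)

    def get_bucket_uuid(self, name):
        return self.buckets[name]

class XdcrReplication:
    def __init__(self, remote, bucket_name):
        self.remote = remote
        self.bucket_name = bucket_name
        # UUID captured once, at replication-creation time.
        self.expected_uuid = remote.get_bucket_uuid(bucket_name)

    def init_vb_replicator(self, vb):
        # Identity check each time a vb replicator wakes up / initializes.
        current = self.remote.get_bucket_uuid(self.bucket_name)
        if current != self.expected_uuid:
            # Same name, different bucket: refuse to resume replication.
            raise BucketUUIDMismatch(
                "bucket %r UUID changed: %s -> %s"
                % (self.bucket_name, self.expected_uuid, current))
        return (vb, self.bucket_name)  # placeholder for a real vb replicator
```

With this in place, re-creating the remote bucket (which assigns it a new UUID) makes every subsequent vb replicator initialization fail loudly instead of silently resuming replication into the wrong bucket.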
This fix may involve changes in both XDCR and the remote_cluster_info module. At this time, it does not look like a blocker to me, and I would like to defer the fix to 2.0.1 given the limited time.