From: Anil Kumar
Subject: Re: Remote cluster deletion
Date: January 20, 2015 at 2:11:20 PM PST
To: Xiaomei Zhang
Cc: Aruna Piravi, Yu Sui, Cihan Biyikoglu, John Liang

Hi Xiaomei,

Sure, please check my responses inline.

Thanks!
Anil Kumar

From: Xiaomei Zhang
Date: Monday, January 19, 2015 at 5:13 PM
To: Anil Kumar
Cc: Aruna Piravi, Yu Sui, Cihan Biyikoglu, John Liang
Subject: Re: Remote cluster deletion

Anil,

Thanks for your response. I have a few further questions.

[Anil] - As Aruna pointed out in ticket MB-9500, the flow should be: do not show the 'Delete' option or allow deleting a 'Cluster Reference' unless it is empty (no replications).

If I understand you correctly, this means the admin has to pause or cancel all the replications that refer to this remote cluster reference before they can "Delete" the remote cluster reference. Is that correct?

[Anil] - What I meant by that is that we don't give users an option to delete the cluster reference unless and until it is empty (no replications). On the UI we don't show the "Delete" button, whereas on the REST API/CLI we fail the request with an error message along the lines of "Cluster reference cannot be deleted at this point since one or more replications to the remote cluster are still ongoing." Hope that clarifies.
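To make the flow described above concrete, here is a minimal Go sketch of the kind of guard a REST/CLI handler could apply before deleting a cluster reference. The function, store interfaces, and error text below are illustrative assumptions for discussion, not the actual ns_server or Go-XDCR API.

```go
package xdcr

import (
	"errors"
	"fmt"
)

// ErrClusterRefInUse is the kind of error the REST API/CLI would return when
// a delete is attempted while replications still reference the cluster.
var ErrClusterRefInUse = errors.New(
	"cluster reference cannot be deleted: one or more replications to the remote cluster are still ongoing")

// ReplicationStore is a hypothetical lookup of replications by cluster reference name.
type ReplicationStore interface {
	ReplicationsForCluster(refName string) ([]string, error)
}

// ClusterRefStore is a hypothetical store of remote cluster references keyed by name.
type ClusterRefStore interface {
	DeleteClusterRef(refName string) error
}

// DeleteClusterReference implements the guard: the reference may only be
// deleted once it is "empty", i.e. no replications point at it.
func DeleteClusterReference(refs ClusterRefStore, reps ReplicationStore, refName string) error {
	running, err := reps.ReplicationsForCluster(refName)
	if err != nil {
		return fmt.Errorf("looking up replications for %q: %w", refName, err)
	}
	if len(running) > 0 {
		// Same condition the UI handles by simply hiding the Delete button.
		return fmt.Errorf("%w (%d replication(s) found)", ErrClusterRefInUse, len(running))
	}
	return refs.DeleteClusterRef(refName)
}
```

The UI would consult the same condition to decide whether to show the "Delete" button at all.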
users can edit the cluster reference to change the referenced hostname/IP.

If the user changes the hostname or IP, what would be the correct behavior for the currently running replication? Do you expect the replication to the old hostname/IP (please note that it can still be valid) to stop, and a replication to the new hostname/IP to start implicitly?

[Anil] - The ability to change the IP/hostname is only there to keep the cluster reference healthy in case the user wants to create more replication streams to different buckets. It should not affect existing, ongoing replications. Hope that clarifies.

In the past we found out that's not achievable due to an infrastructure limitation in erlang-XDCR because of a race condition. It would be great to find out if this can be achieved in Go-XDCR. Let us know what you think.

It would help if we knew what kind of race condition erlang-XDCR has.

[Anil] - Certainly. All I got from Alk was that it is an infrastructure limitation, i.e. a race condition, but he did mention that Aliaksey A can explain more about the race conditions. Let me know if you want to reach out to Aliaksey A directly, or I can call a quick meeting to discuss.

Thanks,
-Xiaomei

On Jan 19, 2015, at 4:51 PM, Anil Kumar wrote:

Thanks Aruna.

@Xiaomei – Please check my comments inline for the questions.

The functional questions are:

1. What would be the behavior or flow to delete a remote cluster reference? What would be the appropriate behavior for the existing replications that are based on this remote cluster? Should they still be running after the remote cluster reference is deleted?

[Anil] - As Aruna pointed out in ticket MB-9500, the flow should be: do not show the 'Delete' option or allow deleting a 'Cluster Reference' unless it is empty (no replications); users can edit the cluster reference to change the referenced hostname/IP. In the past we found out that's not achievable due to an infrastructure limitation in erlang-XDCR because of a race condition. It would be great to find out if this can be achieved in Go-XDCR. Let us know what you think.

2. Is the remote cluster reference name the key for a remote cluster reference? Meaning, no remote cluster references with identical names should be allowed in the system?

[Anil] - Yes, the cluster reference name is the key for a remote cluster. We should not allow identical names for the same remote cluster (which is not the case currently).

Please let me know if you have any questions.

Thanks!
Anil Kumar

From: Aruna Piravi
Date: Monday, January 19, 2015 at 3:29 PM
To: Anil Kumar
Cc: Xiaomei Zhang, Yu Sui
Subject: Re: Remote cluster deletion

Hi Anil/Xiaomei,

Please see https://issues.couchbase.com/browse/MB-9500, which is open along the same lines Xiaomei has proposed (deleting all replications before we delete a cluster reference). It also contains Anil's decision on the same.

Thanks
Aruna

From: Anil Kumar
Date: Mon, 19 Jan 2015 12:00:07 -0800
To: Aruna Piravi
Subject: FW: Remote cluster deletion

From: Anil Kumar
Date: Thursday, January 15, 2015 at 6:43 PM
To: Xiaomei Zhang, Yu Sui
Cc: Cihan Biyikoglu
Subject: Re: Remote cluster deletion

Hi Xiaomei, Yu,

Just wanted to let you know that I'm on it. I'm going through the different scenarios (testing how they behave) and have talked to Yu about the scenarios he described.

Thanks!
Anil Kumar

From: Xiaomei Zhang
Date: Thursday, January 15, 2015 at 11:38 AM
To: Yu Sui
Cc: Anil Kumar, Cihan Biyikoglu
Subject: Re: Remote cluster deletion

Anil,

The functional questions are:

1. What would be the behavior or flow to delete a remote cluster reference? What would be the appropriate behavior for the existing replications that are based on this remote cluster? Should they still be running after the remote cluster reference is deleted?
2. Is the remote cluster reference name the key for a remote cluster reference? Meaning, no remote cluster references with identical names should be allowed in the system?

The current erlang XDCR has some inconsistencies in this area. We are not sure whether these inconsistencies have a reason or are bugs. We don't want to break existing customers if anyone relies on those behaviors. That is why Yu is checking with you.

Thanks,
-Xiaomei

On Jan 15, 2015, at 11:12 AM, Yu Sui wrote:

Anil,

In existing XDCR, when a remote cluster is deleted, the cluster object is not removed but rather has its "deleted" attribute set to "true", presumably to allow existing replications referencing the remote cluster to continue running. This can lead to confusing behavior when the remote cluster is recreated. I feel that an alternative, more explicit and more consistent approach may be in order, and I wish to get your opinion on that.

I. Existing behavior

Things are good in the following scenario. This is probably what the "deleted" attribute/feature is for.

1. Create a remote cluster with name c1 and hostname h1.
2. Create a replication for remote cluster c1. It shows that it is referencing c1 on the UI.
3. Delete c1.
   1. The remote cluster still exists with "deleted" set to true. It is no longer visible in the remote cluster list on the UI, though, and can no longer be picked when creating new replications.
   2. The replication shows that it is referencing h1 instead of c1 on the UI.
4. The existing replication on c1 can still run, even after a server restart.

Things get confusing after we re-create the remote cluster afterward.
Scenario 2:

5. Create a remote cluster with name c1 and hostname h2 (i.e., a different cluster with the same name).
6. A second cluster is created with the name c1. If we use the remote cluster REST API to query for existing clusters, we see two clusters with the same name and different hostnames: one with "deleted" set to true and one with "deleted" set to false.
7. Delete the second cluster. Now we have two clusters with the same name, c1, and different hostnames, both having "deleted" set to true. This could be confusing to untrained eyes.
8. The existing replication can still run, since it is referencing h1 instead of c1, and there is only one cluster with hostname h1.

Scenario 3:

5. Create a remote cluster with name c2 and hostname h1 (i.e., the same cluster with a different name).
6. A new cluster is created and the old cluster is GONE. Apparently the hostname is the actual unique key for remote clusters, and the new cluster is considered a replacement for the old, even though they have different names.
7. The existing replication can still run. It shows that it is referencing c2 now.

These behaviors are confusing to me. The flipping between cluster name and cluster hostname on the UI exposes our internal implementation to customers. I am not sure the behavior in Scenario 3 is what customers expect or want. I don't think we have any documentation on these behaviors, which makes matters worse.

II. New proposal

In my opinion, the following approach is simpler, more explicit, easier to understand, and ensures more consistent behavior:

1. Deleting a cluster really means deleting it. It is gone forever. It won't interfere with new remote clusters in any way.
2. Before a cluster can be deleted, all existing replications referencing the cluster have to be deleted. It does require a bit more admin work, but it is a small price to pay.

What do you think?

Thanks,
Yu
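To make the existing behavior Yu walks through in section I concrete, the following is a small, purely illustrative Go sketch (not actual Couchbase code) of a registry in which the hostname acts as the de-facto unique key and deletion merely sets a flag. With this model, the duplicate names in Scenario 2 and the silent replacement in Scenario 3 fall out naturally. All type and function names are hypothetical.

```go
package main

import "fmt"

// RemoteCluster mirrors the fields Yu describes: a user-visible name, a
// hostname, and a "deleted" flag instead of true removal. Illustrative only.
type RemoteCluster struct {
	Name     string
	Hostname string
	Deleted  bool
}

// Registry is keyed by hostname, which is what Scenario 3 suggests the
// de-facto unique key is in the existing implementation.
type Registry struct {
	byHostname map[string]*RemoteCluster
}

func NewRegistry() *Registry {
	return &Registry{byHostname: make(map[string]*RemoteCluster)}
}

// Create upserts by hostname: a new reference with an existing hostname
// silently replaces the old one, even if the name differs (Scenario 3).
func (r *Registry) Create(name, hostname string) {
	r.byHostname[hostname] = &RemoteCluster{Name: name, Hostname: hostname}
}

// Delete only flips the flag; the record stays behind (Scenarios 1 and 2).
func (r *Registry) Delete(hostname string) {
	if c, ok := r.byHostname[hostname]; ok {
		c.Deleted = true
	}
}

func (r *Registry) Dump() {
	for _, c := range r.byHostname {
		fmt.Printf("name=%s hostname=%s deleted=%v\n", c.Name, c.Hostname, c.Deleted)
	}
}

func main() {
	r := NewRegistry()

	// Scenario 2: delete c1/h1, recreate the name c1 against h2, delete again.
	r.Create("c1", "h1")
	r.Delete("h1")
	r.Create("c1", "h2")
	r.Delete("h2")
	r.Dump() // two entries named c1, both with deleted=true

	// Scenario 3: recreating hostname h1 under a new name replaces the old
	// record outright, so the original c1 entry is gone.
	r.Create("c2", "h1")
	r.Dump() // the h1 entry is now named c2
}
```

Yu's proposal amounts to keying the registry by name, making Delete actually remove the entry, and rejecting the delete while any replication still references the cluster.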
From: Anil Kumar
Subject: Re: Remote cluster deletion
Date: January 20, 2015 at 4:03:06 PM PST
To: Xiaomei Zhang
Cc: Aruna Piravi, Yu Sui, Cihan Biyikoglu, John Liang

Thanks Xiaomei for the quick discussion.

You're right: I realized the second functional requirement is incomplete; it didn't address the changes to the SSL feature.

I went back to the original spec I wrote for the XDCR SSL feature here, and I had wanted SSL to be enabled on a per-bucket basis. But we decided to do it at the cluster reference level, since it doesn't make sense to have one secured and one unsecured replication stream to the same remote cluster.

Right now, the way it is designed, the user can at any point edit the cluster reference's SSL setting, and the new setting is dynamically applied to all the underlying replication streams without the user needing to pause/resume or drop/recreate the replications.

Anil Kumar
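As a rough illustration of the "dynamic apply" behavior Anil describes for the SSL setting, here is a hedged Go sketch of a settings manager that fans an edited cluster-reference setting out to the replication streams that reference it. All type and method names are hypothetical assumptions; the real Go-XDCR implementation may differ.

```go
package xdcr

import "sync"

// ReplicationStream is a hypothetical handle to a running replication that can
// pick up new connection settings without being paused or recreated.
type ReplicationStream interface {
	ClusterRefName() string
	UpdateSecuritySettings(useSSL bool, certificate []byte)
}

// ClusterRefSettings holds the per-cluster-reference security settings.
type ClusterRefSettings struct {
	UseSSL      bool
	Certificate []byte
}

// SettingsManager fans an edited cluster-reference setting out to every
// replication stream that references it. Illustrative only.
type SettingsManager struct {
	mu      sync.RWMutex
	streams []ReplicationStream
}

// Register adds a running stream so later setting edits reach it.
func (m *SettingsManager) Register(s ReplicationStream) {
	m.mu.Lock()
	defer m.mu.Unlock()
	m.streams = append(m.streams, s)
}

// EditClusterRefSSL is called when the user edits the cluster reference; the
// new setting is pushed to the underlying streams in place.
func (m *SettingsManager) EditClusterRefSSL(refName string, s ClusterRefSettings) {
	m.mu.RLock()
	defer m.mu.RUnlock()
	for _, stream := range m.streams {
		if stream.ClusterRefName() == refName {
			stream.UpdateSecuritySettings(s.UseSSL, s.Certificate)
		}
	}
}
```

The point of this design choice is that streams pick up the new security settings in place, so the user never has to pause/resume or drop/recreate replications when toggling SSL on the cluster reference.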