Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version/s: 5.5.0
- Triage: Untriaged
- Environment: Centos 64-bit
- Is this a Regression?: Unknown
Description
Steps to Reproduce:
1) Created 2 clusters; cluster1 runs the kv, eventing, index, and n1ql services.
2) On both clusters, created 3 buckets: src, dst, and meta.
3) Deployed the following function only on cluster 1 (dst_bucket is the function's bucket alias bound to dst):
function OnUpdate(doc, meta) {
    log('document', doc);
    dst_bucket[meta.id] = doc;    // copy every mutation from src into dst
}

function OnDelete(meta) {
    delete dst_bucket[meta.id];   // propagate deletions to dst
}
Took a backup on cluster 1 (10.111.170.103/10.111.170.104) using the following commands and restored it to the 2nd cluster (10.111.170.101/10.111.170.102):
[root@node3-cb500-centos6 bin]# ./cbbackupmgr config -a /opt/couchbase/bin/tmp -r repo1 --disable-gsi-indexes
Backup repository `repo1` created successfully in archive `/opt/couchbase/bin/tmp`

[root@node3-cb500-centos6 bin]# ./cbbackupmgr backup -c couchbase://10.111.170.103:8091 -u Administrator -p password -r repo1 -a /opt/couchbase/bin/tmp
Backing up to 2018-03-07T02_42_57.996666536-08_00
Copied all data in 11s (Avg. 132.11KB/Sec) 1036 items / 1.42MB
meta [==================================================] 100.00%
src  [==================================================] 100.00%
dst  [==================================================] 100.00%
Backup successfully completed

[root@node3-cb500-centos6 bin]# ./cbbackupmgr restore -c couchbase://10.111.170.101:8091 -u Administrator -p password -r repo1 -a /opt/couchbase/bin/tmp
(1/1) Restoring backup 2018-03-07T02_42_57.996666536-08_00
Copied all data in 6.02s (Avg. 224.31KB/Sec) 1036 items / 1.31MB
dst  [==================================================] 100.00%
meta [==================================================] 100.00%
src  [==================================================] 100.00%
Restore completed successfully
Data from the src/meta/dst buckets got restored to the 2nd cluster successfully, and the function got deployed fine as well. But after the restore, eventing on the destination cluster doesn't process mutations as expected. On closer inspection of the metadata bucket, I see documents that still hold values belonging to the source cluster. See the following metadata for key func1::vb::10:
{
    "assigned_worker": "worker_func1_0",
    "current_vb_owner": "10.111.170.103:8096",  --> ip address belongs to a source cluster host
    "dcp_stream_status": "running",
    "last_checkpoint_time": "2018-03-07T02:42:56-08:00",
    "last_doc_timer_feedback_seqno": 0,
    "last_processed_seq_no": 0,
    "node_uuid": "f289697f1d1734cb6b56678aa1aa9d70",
    "ownership_history": [{
        "assigned_worker": "worker_func1_0",
        "current_vb_owner": "10.111.170.103:8096",  --> same here
        "operation": "bootstrap",
        "start_seq_no": 0,
        "timestamp": "2018-03-07 02:27:15.774706101 -0800 PST"
    }, {
        "assigned_worker": "worker_func1_0",
        "current_vb_owner": "10.111.170.103:8096",
        "operation": "running",
        "start_seq_no": 0,
        "timestamp": "2018-03-07 02:27:15.899440939 -0800 PST"
    }],
    "previous_assigned_worker": "worker_func1_0",
    "previous_node_uuid": "f289697f1d1734cb6b56678aa1aa9d70",
    "previous_node_eventing_dir": "/opt/couchbase/var/lib/couchbase/data/@eventing",
    "previous_vb_owner": "10.111.170.103:8096",
    "vb_id": 10,
    "vb_uuid": 26520361166322,
    "doc_id_timer_processing_worker": "",
    "currently_processed_doc_id_timer": "2018-03-07T10:42:57Z",
    "last_processed_doc_id_timer_event": "2018-03-07T10:27:15Z",
    "next_doc_id_timer_to_process": "2018-03-07T10:42:58Z",
    "currently_processed_cron_timer": "2018-03-07T12:05:37Z",
    "last_processed_cron_timer_event": "",
    "next_cron_timer_to_process": "2018-03-07T12:06:54Z",
    "plasma_last_persisted_seq_no": 0
}
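For reference, a quick way to check how many ownership docs still point at the source cluster (a minimal sketch, assuming the Python SDK 2.x API, the bucket name and credentials from the steps above, and that every vbucket's doc follows the func1::vb::<n> key pattern seen here):

# stale_owner_scan.py: count metadata docs whose owner is still a source-cluster node
from couchbase.cluster import Cluster, PasswordAuthenticator

SOURCE_NODES = {'10.111.170.103:8096', '10.111.170.104:8096'}  # source eventing nodes

cluster = Cluster('couchbase://10.111.170.101')  # destination cluster
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('meta')

stale = []
for vb in range(1024):  # assuming one ownership doc per vbucket, keyed func1::vb::<n>
    rv = bucket.get('func1::vb::%d' % vb, quiet=True)  # quiet: no exception on missing keys
    if rv.success and rv.value.get('current_vb_owner') in SOURCE_NODES:
        stale.append(vb)

print('%d/1024 vbucket ownership docs still reference the source cluster' % len(stale))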
I see 2 possibilities:
1) When restoring the metadata bucket, rewrite the IP addresses and related details to match the destination cluster (see the sketch after this list for what that could involve).
2) Redeploy the function once it is restored on the destination cluster. But what if we have millions of mutations? We would need to process all of them again.
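To illustrate what option 1 would involve, here is a hypothetical post-restore fixup (same SDK and bucket assumptions as the scan above; OWNER_MAP is an invented source-to-destination mapping of eventing nodes, not something the ticket specifies):

# owner_fixup.py: hypothetical rewrite of restored ownership metadata (option 1)
from couchbase.cluster import Cluster, PasswordAuthenticator

# Invented mapping from source-cluster to destination-cluster eventing nodes.
OWNER_MAP = {
    '10.111.170.103:8096': '10.111.170.101:8096',
    '10.111.170.104:8096': '10.111.170.102:8096',
}

cluster = Cluster('couchbase://10.111.170.101')
cluster.authenticate(PasswordAuthenticator('Administrator', 'password'))
bucket = cluster.open_bucket('meta')

for vb in range(1024):
    key = 'func1::vb::%d' % vb
    rv = bucket.get(key, quiet=True)
    if not rv.success:
        continue
    doc = rv.value
    if doc.get('current_vb_owner') in OWNER_MAP:
        doc['current_vb_owner'] = OWNER_MAP[doc['current_vb_owner']]
        # Open question: previous_vb_owner, node_uuid, previous_node_uuid and
        # ownership_history presumably need the same treatment.
        bucket.replace(key, doc, cas=rv.cas)  # CAS guards against concurrent writers

Note that the node_uuid fields are the awkward part for a generic tool, since a backup has no knowledge of the destination cluster's topology.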
Abhishek Singh - can you please suggest what the best course of action should be?
Issue Links
- blocks: MB-28931 cbbackupmgr: Use new Eventing endpoints. (Closed)