Couchbase Server
MB-27719

[XDCR] cbrecovery throws an exception when running recovery to a cluster that has failed-over nodes


Details

    • Type: Bug
    • Resolution: Duplicate
    • Priority: Critical
    • Affects Version: 5.5.0
    • Fix Version: 5.5.0
    • Component: tools
    • Environment: Centos 7.4 64-bit
    • Triage: Triaged
    • Operating System: Centos 64-bit
    • Is this a Regression?: Yes

    Description

      Install Couchbase Server 5.5.0-1719 on 8 Centos 7.x servers.
      Create cluster A with 3 nodes.
      Create cluster B with 3 nodes.
      Create a default bucket on both clusters.
      Create bidirectional XDCR replication between the two clusters.
      Load data into the default bucket on cluster A.
      Wait for replication to complete.
      Fail over 2 nodes on cluster B.
      Run cbrecovery to recover data from cluster A to cluster B:

      /opt/couchbase/bin/cbrecovery http://172.23.121.224:8091 http://172.23.121.227:8091 -b default -B default -u Administrator -p password -U Administrator -P password -v
      

      cbrecovery stopped with an exception:

      Missing vbuckets to be recovered:[{"node": "ns_1@172.23.123.252", "vbuckets": [513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023]}]
      2018-01-24 11:50:22,712: mt cbrecovery...
      2018-01-24 11:50:22,712: mt  source : http://172.23.121.224:8091
      2018-01-24 11:50:22,712: mt  sink   : http://172.23.121.227:8091
      2018-01-24 11:50:22,712: mt  opts   : {'username': '<xxx>', 'username_destination': 'Administrator', 'verbose': 1, 'extra': {'max_retry': 10.0, 'rehash': 0.0, 'dcp_consumer_queue_length': 1000.0, 'data_only': 1.0, 'uncompress': 0.0, 'nmv_retry': 1.0, 'conflict_resolve': 0.0, 'cbb_max_mb': 100000.0, 'report': 5.0, 'mcd_compatible': 1.0, 'try_xwm': 1.0, 'backoff_cap': 10.0, 'batch_max_bytes': 400000.0, 'report_full': 2000.0, 'flow_control': 1.0, 'batch_max_size': 1000.0, 'seqno': 0.0, 'design_doc_only': 0.0, 'allow_recovery_vb_remap': 1.0, 'recv_min_bytes': 4096.0}, 'collection': None, 'ssl': False, 'threads': 4, 'key': None, 'password': '<xxx>', 'id': None, 'silent': False, 'dry_run': False, 'password_destination': 'password', 'bucket_destination': 'default', 'vbucket_list': '{"ns_1@172.23.123.252": [513]}', 'separator': '::', 'bucket_source': 'default'}
      2018-01-24 11:50:22,727: mt Starting new HTTP connection (1): 172.23.121.224
      2018-01-24 11:50:22,768: mt Starting new HTTP connection (1): 172.23.121.227
      2018-01-24 11:50:22,784: mt bucket: default
      2018-01-24 11:50:23,309: w0   source : http://172.23.121.224:8091(default@172.23.121.224:8091)
      2018-01-24 11:50:23,310: w0   sink   : http://172.23.121.227:8091(default@172.23.121.224:8091)
      2018-01-24 11:50:23,310: w0          :                total |       last |    per sec
      2018-01-24 11:50:23,511: w2   source : http://172.23.121.224:8091(default@172.23.121.226:8091)
      2018-01-24 11:50:23,512: w2   sink   : http://172.23.121.227:8091(default@172.23.121.226:8091)
      2018-01-24 11:50:23,512: w2          :                total |       last |    per sec
      2018-01-24 11:50:33,751: s1 error: recv socket.timeout
      2018-01-24 11:50:33,751: s1 MCSink exception: 
      2018-01-24 11:50:33,751: s1 error: async operation: error: MCSink exception:  on sink: http://172.23.121.227:8091(default@172.23.121.225:8091)
      2018-01-24 11:50:33,752: w1   source : http://172.23.121.224:8091(default@172.23.121.225:8091)
      2018-01-24 11:50:33,752: w1   sink   : http://172.23.121.227:8091(default@172.23.121.225:8091)
      2018-01-24 11:50:33,752: w1          :                total |       last |    per sec
      error: MCSink exception: 
      [root@s44015 ~]# 
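
The "Missing vbuckets to be recovered" line in the log above carries a JSON payload listing, per node, the vbuckets that cannot be recovered. A minimal sketch for extracting that mapping (the helper name is hypothetical; the payload shape is taken from the log):

```python
import json

def missing_vbuckets(error_line):
    """Parse the JSON payload after 'Missing vbuckets to be recovered:'
    and return a dict mapping node name -> list of vbucket ids.
    Hypothetical helper; payload shape taken from the cbrecovery log."""
    prefix = "Missing vbuckets to be recovered:"
    payload = error_line.split(prefix, 1)[1].strip()
    entries = json.loads(payload)
    return {e["node"]: e["vbuckets"] for e in entries}

# Shortened example taken from the log output above:
line = ('Missing vbuckets to be recovered:'
        '[{"node": "ns_1@172.23.123.252", "vbuckets": [513, 514, 515]}]')
print(missing_vbuckets(line))
# {'ns_1@172.23.123.252': [513, 514, 515]}
```

This makes it easy to see which node the unrecoverable vbucket ranges belong to (here, the two contiguous ranges 513-682 and 854-1023 on ns_1@172.23.123.252).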
      

      I will find which build is the last stable one.

      Attachments

        Issue Links


          Activity

            People

              Assignee: Thuan Nguyen
              Reporter: Thuan Nguyen
              Votes: 0
              Watchers: 6

              Dates

                Created:
                Updated:
                Resolved:

                Gerrit Reviews

                  There are no open Gerrit changes
