Couchbase Server / MB-8151

cbrecovery - incorrect behavior after running cbrecovery against a specific vbucket


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Affects Version: 2.1.0
    • Fix Version: 2.1.0
    • Component: tools
    • Security Level: Public
    • Labels: None
    • Environment: 2.0.2-769

    Description

      • Set up 1:3-node unidirectional replication
      • Load items until source and destination are in sync
      • Fail over 2 nodes on the destination cluster
      • A dry run shows:

      root@plum-008:~# /opt/couchbase/bin/cbrecovery -u Administrator -p password -U Administrator -P password http://localhost:8091 http://10.3.3.61:8091 -n
      2013-04-24 15:36:20,515: MainThread Missing vbuckets to be recovered:[

      {"node": "ns_1@10.3.3.61", "vbuckets": [513, 514, 515, 516, 517, 518, 519, 520, 521, 522, 523, 524, 525, 526, 527, 528, 529, 530, 531, 532, 533, 534, 535, 536, 537, 538, 539, 540, 541, 542, 543, 544, 545, 546, 547, 548, 549, 550, 551, 552, 553, 554, 555, 556, 557, 558, 559, 560, 561, 562, 563, 564, 565, 566, 567, 568, 569, 570, 571, 572, 573, 574, 575, 576, 577, 578, 579, 580, 581, 582, 583, 584, 585, 586, 587, 588, 589, 590, 591, 592, 593, 594, 595, 596, 597, 598, 599, 600, 601, 602, 603, 604, 605, 606, 607, 608, 609, 610, 611, 612, 613, 614, 615, 616, 617, 618, 619, 620, 621, 622, 623, 624, 625, 626, 627, 628, 629, 630, 631, 632, 633, 634, 635, 636, 637, 638, 639, 640, 641, 642, 643, 644, 645, 646, 647, 648, 649, 650, 651, 652, 653, 654, 655, 656, 657, 658, 659, 660, 661, 662, 663, 664, 665, 666, 667, 668, 669, 670, 671, 672, 673, 674, 675, 676, 677, 678, 679, 680, 681, 682, 854, 855, 856, 857, 858, 859, 860, 861, 862, 863, 864, 865, 866, 867, 868, 869, 870, 871, 872, 873, 874, 875, 876, 877, 878, 879, 880, 881, 882, 883, 884, 885, 886, 887, 888, 889, 890, 891, 892, 893, 894, 895, 896, 897, 898, 899, 900, 901, 902, 903, 904, 905, 906, 907, 908, 909, 910, 911, 912, 913, 914, 915, 916, 917, 918, 919, 920, 921, 922, 923, 924, 925, 926, 927, 928, 929, 930, 931, 932, 933, 934, 935, 936, 937, 938, 939, 940, 941, 942, 943, 944, 945, 946, 947, 948, 949, 950, 951, 952, 953, 954, 955, 956, 957, 958, 959, 960, 961, 962, 963, 964, 965, 966, 967, 968, 969, 970, 971, 972, 973, 974, 975, 976, 977, 978, 979, 980, 981, 982, 983, 984, 985, 986, 987, 988, 989, 990, 991, 992, 993, 994, 995, 996, 997, 998, 999, 1000, 1001, 1002, 1003, 1004, 1005, 1006, 1007, 1008, 1009, 1010, 1011, 1012, 1013, 1014, 1015, 1016, 1017, 1018, 1019, 1020, 1021, 1022, 1023]}

      ]

      • Now run cbrecovery against a specific vbucket ID, for example 623:

      Items from vbucket 623 alone are loaded, but the tool still iterates over all the remaining vbuckets, skipping each one. (See the example invocation below.)
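
      The exact invocation isn't captured above; assuming cbrecovery accepts the same vbucket-ID filter as the other pump-based tools (cbtransfer's -i/--id option), the run would look something like:

      /opt/couchbase/bin/cbrecovery -u Administrator -p password -U Administrator -P password http://localhost:8091 http://10.3.3.61:8091 -i 623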

      • A dry run after this shows:

      /opt/couchbase/bin/cbrecovery -u Administrator -p password -U Administrator -P password http://localhost:8091 http://10.3.3.61:8091 -n
      2013-04-24 15:44:01,474: MainThread Error:No recovery needed
      error: unable to access REST API: 10.3.3.61:8091/pools/default/buckets/default/controller/startRecovery; please check source URL, username (-u) and password (-p); response: 400; reason: start_recovery

      (Even though the item counts are still inconsistent, cbrecovery now reports that no recovery is needed.)

      • - - - -

      --> So when cbrecovery is run against a specific vbucket, it should ignore all the other vbuckets and, in particular, should not update the entire missing-vbucket list; only the entry for the recovered vbucket should be updated. See the sketch below.
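
      A minimal sketch in Python of that filtering (hypothetical helper and names, not the actual cbrecovery source); it restricts the missing-vbucket map shown in the dry run to the requested ID before recovery starts, so only that entry is recovered and reported back:

      # Sketch only: filter_missing_vbuckets is hypothetical, not cbrecovery code.
      # missing_vbucket_map mirrors the JSON from the dry run above, e.g.
      # [{"node": "ns_1@10.3.3.61", "vbuckets": [513, 514, ...]}]
      def filter_missing_vbuckets(missing_vbucket_map, requested_vbucket=None):
          """Keep only the requested vbucket so the remaining missing
          vbuckets are neither processed nor marked as recovered."""
          if requested_vbucket is None:
              return missing_vbucket_map  # no filter given: recover everything
          filtered = []
          for entry in missing_vbucket_map:
              if requested_vbucket in entry["vbuckets"]:
                  filtered.append({"node": entry["node"],
                                   "vbuckets": [requested_vbucket]})
          return filtered

      With a filter like this, a later dry run would still list the other missing vbuckets instead of claiming no recovery is needed.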

      --> Perhaps changing the error message shown when recovery is not required would also help: right now the tool prints "Error:No recovery needed" followed by a REST 400 from startRecovery. A friendlier exit is sketched below.
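
      For example (a sketch, assuming the emptiness check happens client-side before the startRecovery REST call):

      # Sketch: exit cleanly with a clear message instead of surfacing
      # the startRecovery REST 400 as "Error:No recovery needed".
      import sys

      def check_recovery_needed(missing_vbucket_map):
          if not missing_vbucket_map:
              print("All vbuckets are active; no recovery is needed.")
              sys.exit(0)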

      --> Also, when just ./cbrecovery is run with no arguments, we currently get only the error message: error: please provide both a cluster to recover from and a cluster to recover to. How about defaulting to "-h" in that case, displaying the usage line and all available options? A sketch follows.
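
      A sketch of that default, assuming an optparse-based CLI like the other 2.x pump tools (the option set shown is illustrative, not the full cbrecovery parser):

      import optparse
      import sys

      def main():
          parser = optparse.OptionParser(
              usage="%prog [options] source_cluster destination_cluster",
              description="Recover missing vbuckets from a remote cluster.")
          parser.add_option("-n", "--dry-run", action="store_true",
                            help="only list missing vbuckets; don't transfer data")

          # Bare ./cbrecovery: print full usage, as if -h had been passed,
          # instead of only the terse "please provide both a cluster" error.
          if len(sys.argv) < 2:
              parser.print_help()
              sys.exit(2)

          opts, args = parser.parse_args()
          if len(args) < 2:
              parser.error("please provide both a cluster to recover from "
                           "and a cluster to recover to")

      if __name__ == "__main__":
          main()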


          People

            Assignee: abhinav (Abhi Dangeti)
            Reporter: abhinav (Abhi Dangeti)
            Votes: 0
            Watchers: 4
