Couchbase Server / MB-4461

Replication cursor stuck from slave 1 to slave 2, hence high number of checkpoint items on slave 1

    Details

    • Type: Bug
    • Status: Closed
    • Priority: Critical
    • Resolution: Fixed
    • Affects Version/s: 1.7.2
    • Fix Version/s: 2.0-beta
    • Component/s: couchbase-bucket
    • Security Level: Public
    • Labels:
    • Environment:
      Ubuntu 10.04.3 LTS x86_64, Membase 1.7.2, Amazon m1.large instances, 16-node cluster.

      Description

      Two out of sixteen nodes are ejecting active items because their mem_used is above the high water mark. The other nodes are well below. Customer says that keys are of various sizes, but the larger ones should be spread out randomly across the different nodes. Number of keys on all nodes is roughly equal.

      The two problem nodes show ep_value_size much larger than a healthy node. However, looking at the sqlite data files, there's no significant difference in the size of the files on disk (as seen, for example, in */membase.log).
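      For reference, this comparison can be scripted per node roughly as follows (the node addresses are placeholders, the data path assumes a default /opt/membase install, and bucket authentication may need to be added):

      for node in 10.0.0.7 10.0.0.14 10.0.0.16; do
        echo "== $node =="
        # in-memory footprint reported by ep-engine
        /opt/membase/bin/mbstats $node:11210 all | grep -E 'mem_used|ep_value_size|ep_kv_size'
        # on-disk footprint of the sqlite data files
        ssh $node du -sh /opt/membase/var/lib/membase/data
      done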

      FYI, the rise in data size seems to have started on these two nodes after a different node, 10.254.7.150, stopped responding to REST and membase was restarted (with 'service membase-server restart').

      The mbcollect_info data for these servers are in S3. The logs are named:

      membase 16: a good node, for comparison
      membase 07 and membase 14: the trouble nodes that are ejecting items due to large memory usage
      membase 11: the node that was restarted on Saturday

      Can someone please take a look at this, and help me understand why the ep_value_size might be bloating up for these two nodes?

      Thanks,

      Tim


        Activity

        paf Alexander Petrossian (PAF) added a comment - edited

        Thanks for the great news, Chiyoung!
        One small question:
        We understand there is no way to learn the state of replication – it should "just work", and that was fixed, right?

        chiyoung Chiyoung Seo added a comment -

        Fixed in 1.8 release.

        paf Alexander Petrossian (PAF) added a comment -

        Colleagues, let me thank you for a great product!

        We're seeing something similar; was there any progress towards understanding this?
        Our statistics show that the TAP cursors are really stuck:
        $ /opt/membase/bin/mbstats 10.112.119.11:11210 tap GXH teligent|grep -E 10.112.119.12
        ..
        eq_tapq:replication_ns_1@10.112.119.12:total_backlog_size: 1879177
        eq_tapq:replication_ns_1@10.112.119.12:total_noops: 58613
        ..
        $ --few minutes passed--
        $ /opt/membase/bin/mbstats 10.112.119.11:11210 tap GXH teligent|grep -E 10.112.119.12
        ..
        eq_tapq:replication_ns_1@10.112.119.12:total_backlog_size: 1882715
        eq_tapq:replication_ns_1@10.112.119.12:total_noops: 58644
        ..

        Only these two statistical rows were changed.

        Is there any way to prod replication?
        Or any way to learn its state?
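        One rough way to confirm the cursor is stuck rather than merely slow (same host, bucket, and credentials as in the sample above) is to poll those counters and watch whether anything besides the backlog and noop counts ever moves:

        # sample the replication queue towards 10.112.119.12 once a minute;
        # a backlog that only grows while the rest of the per-connection
        # counters stay flat suggests the cursor is not draining at all
        while true; do
          date
          /opt/membase/bin/mbstats 10.112.119.11:11210 tap GXH teligent | grep -E '10.112.119.12'
          sleep 60
        done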

        perry Perry Krug added a comment -

        One question I had regarding this: is the cursor expected to be stuck completely, or will it eventually clear itself out? At the customer, we are seeing everything eventually resolve itself... I just want to make sure we're looking at the same issue.

        dipti Dipti Borkar added a comment -

        Post-1.8.0 hotfix.

        TimSmith Tim Smith (Inactive) added a comment -

        For the record, this has shown up on 1.7.2, not 1.7.1 as stated in the description.

        Tim

        farshid Farshid Ghods (Inactive) added a comment -

        From Chiyoung:

        Keeping 2 checkpoints applies only to the master node. The slave node cannot create a new checkpoint for itself; instead it receives checkpoint_start (_end) messages from the master.

        In this case, the cluster has two replicas (master A -> slave B -> slave C), and the replication cursor for C on B got stuck and did not move forward.
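        One way to see this from the stats (the node addresses below are placeholders for B and C; add bucket credentials if the bucket is SASL-protected) is to compare the checkpoint stats for the same vbucket on the intermediate replica and on the end replica:

        # B can only free a checkpoint once the cursor feeding C has passed it,
        # so a large and growing num_checkpoints on B for a vbucket that looks
        # normal on C points at a stuck replication cursor on B
        for node in 10.0.0.12 10.0.0.13; do   # B, then C
          echo "== $node =="
          /opt/membase/bin/mbstats $node:11210 checkpoint | grep 'vb_113:'
        done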

        farshid Farshid Ghods (Inactive) added a comment -

        farshid-2:Downloads farshid$ egrep -ir "vb_113" membase14-checkpointstats-201111231920.txt
        vb_113:last_closed_checkpoint_id: 1323
        vb_113:num_checkpoint_items: 373590
        vb_113:num_checkpoints: 264
        vb_113:num_items_for_persistence: 4
        vb_113:num_tap_cursors: 1
        vb_113:open_checkpoint_id: 1324
        vb_113:persisted_checkpoint_id: 1323
        vb_113:state: replica

        This shows that the node did not close the checkpoints for this vbucket, hence the 264 open checkpoints held in memory.

        The replication cursor seems to be stuck on the replica node.
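        To gauge how widespread this is on the node, a scan of the same collected stats file along these lines lists every vbucket holding more open checkpoints than expected, worst offenders first (the threshold of 2 is a guess at the healthy steady state):

        awk -F': ' '/num_checkpoints:/ && $2 > 2 { print $1, $2 }' \
            membase14-checkpointstats-201111231920.txt | sort -k2 -rn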

        farshid Farshid Ghods (Inactive) added a comment -

        Tim,

        I will have a look at the diags tomorrow morning, but it would be helpful to get this information from the customer (a small collection sketch follows below):

        1- number of items per node
        2- mem_used on those nodes that are not ejecting active items
        3- current state of the cluster after it flushed out the active items
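        For items 1 and 2, a loop along these lines (the node list is a placeholder; add bucket credentials if needed) should pull the per-node figures:

        for node in 10.0.0.1 10.0.0.2 10.0.0.3; do
          echo "== $node =="
          /opt/membase/bin/mbstats $node:11210 all | grep -E 'curr_items|mem_used'
        done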


          People

          • Assignee: mikew Mike Wiederhold
          • Reporter: TimSmith Tim Smith (Inactive)
          • Votes: 0
          • Watchers: 4

