MB-61250: KVStore compaction expiry leaving temporary items in hash-table


    Description

      The following was first observed on a real cluster whilst investigating a performance issue. The bucket is 100% resident, yet kv_ep_storedval_num is much higher than kv_curr_items_tot - what are all of these extra StoredValues?

      It was later noted in the stats.log vbucket-details output that the hash-table stats explain the difference: ht_num_temp_items accounts for a huge number of temporary items.

       vb_0:ht_num_deleted_items:                  11318
       vb_0:ht_num_in_memory_items:                396839
       vb_0:ht_num_items:                          396839  <-----
       vb_0:ht_num_temp_items:                     7448340   <----
       vb_0:ht_size:                               393209 <-!!!!!
      

      This causes some further problems:

      • We clearly use more memory than needed.
      • The hash-table is sized based on ht_num_items, yet the hash-table also stores the temporaries.
        • This results in long hash-table chains, in the above case chains of ~20 items, so a worst-case lookup now traverses 20 entries instead of the optimal 1 item per hash-table bucket (a 20x latency increase in the worst case) - see the arithmetic sketch after this list.
      • When the hash-table does resize, these temporaries must also be moved, so resizing is much costlier than needed - again increasing latencies.
      • Finally, any hash-table visitor task must also visit these temporaries - again increasing latencies (e.g. the periodic expiry pager has to look at 600m items instead of 80m).
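
      For context, the ~20-item chains follow directly from the vb_0 stats quoted above. A minimal C++ sketch of the arithmetic (the variable names simply mirror the stat names):

        #include <iostream>

        int main() {
            // Values copied from the vb_0 vbucket-details stats above.
            const double ht_num_items = 396839;       // what the table is sized for
            const double ht_num_temp_items = 7448340; // temporaries also chained into the table
            const double ht_size = 393209;            // number of hash-table buckets

            // Intended load factor vs what the chains actually hold.
            const double intended = ht_num_items / ht_size;                     // ~1 item per bucket
            const double actual = (ht_num_items + ht_num_temp_items) / ht_size; // ~20 items per bucket

            std::cout << "intended chain length: " << intended << "\n"
                      << "actual chain length:   " << actual << "\n";
            return 0;
        }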

      The cause is suspected to be expiry driven from compaction, combined with a detail of magma: magma compaction can see "old" versions of a key, meaning we may see multiple expiry callbacks per key.

      My theory is along these lines:

      • A key, call it k1, is expired or deleted and its StoredValue is removed from the hash-table (e.g. by the pager or an earlier compaction expiry).
      • Magma compaction later encounters an old version of k1 which carries an expiry time, so the compaction expiry callback fires for a key that is no longer in the hash-table.
      • To verify the expiry against the latest on-disk state, a temporary StoredValue for k1 is added to the hash-table and a bg-fetch is scheduled (completed in completeCompactionExpiryBgFetch).
      • The temporary item is not removed once the bg-fetch completes, so each such key leaves a temporary StoredValue behind and these accumulate in the hash-table.
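
      A minimal, self-contained sketch of that suspected flow - this is a toy model, not KV-Engine code; the types and functions are invented for illustration, and only completeCompactionExpiryBgFetch borrows the name of the real function:

        #include <iostream>
        #include <string>
        #include <unordered_map>

        // Toy stand-in for the hash-table: each key maps to a real or temporary StoredValue.
        enum class State { Alive, Temp };

        struct ToyHashTable {
            std::unordered_map<std::string, State> items;

            size_t numTempItems() const {
                size_t n = 0;
                for (const auto& kv : items) {
                    if (kv.second == State::Temp) {
                        ++n;
                    }
                }
                return n;
            }
        };

        // Compaction sees an on-disk (possibly old) version of `key` carrying an expiry
        // time. The key is no longer in the hash-table, so a temporary item is added and
        // a bg-fetch is scheduled to verify the expiry.
        void compactionExpiryCallback(ToyHashTable& ht, const std::string& key) {
            if (ht.items.find(key) == ht.items.end()) {
                ht.items.emplace(key, State::Temp); // temp item for the pending bg-fetch
            }
        }

        // Suspected bug: the bg-fetch completes and there is nothing to expire (the key
        // was already deleted/expired), but the temporary item is left in the hash-table.
        void completeCompactionExpiryBgFetch(ToyHashTable& ht, const std::string& key) {
            (void)ht;
            (void)key;
            // A fix would erase the now-useless temp item, e.g. ht.items.erase(key);
        }

        int main() {
            ToyHashTable ht;
            // Many keys were expired/deleted earlier and are absent from the hash-table;
            // compaction later fires an expiry callback for each of them.
            for (int i = 0; i < 1000; ++i) {
                const std::string key = "k" + std::to_string(i);
                compactionExpiryCallback(ht, key);
                completeCompactionExpiryBgFetch(ht, key);
            }
            // Every callback leaves a temporary StoredValue behind.
            std::cout << "ht_num_temp_items: " << ht.numTempItems() << "\n"; // prints 1000
            return 0;
        }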

      I believe we can see this pattern in the cbcollect data as follows.

      The following three charts plot the deltas of these stats (so we can better see how the data points change together); a minimal sketch of the delta computation follows the list:

      • Top chart - kv_curr_temp_items - at about 13:29 we see a ~100k increase in temporaries.
      • Middle chart - compaction bg_fetches completing (the stat I highlighted in completeCompactionExpiryBgFetch).
      • Lower chart - both kv_curr_temp_items and kv_curr_items_tot - this chart shows where temporaries increased but curr_items did not; across the cbcollect there are various points where we clearly accumulate temporaries.
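
      A minimal sketch of that delta computation (the sample values are made up purely for illustration and are not from the cbcollect):

        #include <iostream>
        #include <vector>

        // Successive differences of a sampled stat, e.g. kv_curr_temp_items taken at a
        // fixed interval, so step changes stand out when compared against other series.
        std::vector<long> deltas(const std::vector<long>& samples) {
            std::vector<long> out;
            for (size_t i = 1; i < samples.size(); ++i) {
                out.push_back(samples[i] - samples[i - 1]);
            }
            return out;
        }

        int main() {
            // Illustrative samples only.
            const std::vector<long> temp_items = {7300000, 7300500, 7401200, 7401300};
            for (long d : deltas(temp_items)) {
                std::cout << d << "\n"; // 500, 100700, 100 - the ~100k jump stands out
            }
            return 0;
        }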

      Less clear is when the initial expiry or delete occurs, i.e. the operation which removes k1 from the hash-table and leads to the later bg-fetch, but we do observe a mix of pager-based expiry and compaction expiry in the minutes before the increase in temporaries.

      Next is to reproduce and fix.
