Couchbase Server / MB-50988

Rescheduled Compaction tasks do not obey the concurrency limit


Details

    • Type: Bug
    • Resolution: Fixed
    • Priority: Critical
    • Fix Version/s: 7.1.0
    • Affects Version/s: 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.1.0
    • Component/s: couchbase-bucket
    • Labels: None
    • Triage: Untriaged
    • Story Points: 1
    • Is this a Regression?: Unknown
    • Sprint: KV 2022-Feb, KV March-22

    Description

      Summary

      Identified during MB-49512: if a compaction task for a given vBucket is already pending and is rescheduled (e.g. because compaction is scheduled for a collection purge), then the compaction concurrency limit is not enforced.

      An initial attempt was made to address this (https://review.couchbase.org/c/kv_engine/+/170082), however it exposed and magnified existing flaws in how we schedule Compaction tasks - when trying to limit concurrency, we can end up snoozing an already-scheduled Compaction task forever. This results in compaction never running for the affected vBuckets, which is particularly problematic when cleaning up dropped collections - see MB-50941.

      That patch has been reverted to avoid compaction never finishing, as that is worse than exceeding the concurrent compaction limit. However, the revert re-opens the original concurrency-limit bug, which is now tracked via this MB.

      Details

      Requirements (see the sketch after this list):

      1. Compact a single vBucket with a given set of parameters (purge_ts, purge_seqno, …).
      2. Apply an initial delay before compaction starts, to allow cleanup of multiple dropped collections to be coalesced (compacting immediately would only know to drop the items from the first collection, but we often see multiple collections dropped in quick succession). The collections drop delay defaults to 5s.
      3. Merge requests to compact the same vBucket if one is already pending - again to handle the collection-drop use-case. Note this also includes the delay field - i.e. if a second compaction is scheduled for the same vBucket, we reset the delay back to 5s so any further requests can also be merged.
      4. Limit the maximum number of Compaction tasks which can run concurrently per Bucket, to minimise impact on Flusher latency and avoid stealing all threads from the AuxIO pool. This also implies that when a task finishes, we should wake another one if it is currently snoozed.
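      A minimal sketch of how requirements 1-4 could compose. All names here (CompactionScheduler, CompactionConfig, tryStart, etc.) are hypothetical simplifications for illustration, not kv_engine's actual classes or API:

      #include <algorithm>
      #include <chrono>
      #include <cstddef>
      #include <cstdint>
      #include <map>
      #include <mutex>

      using Vbid = uint16_t;

      struct CompactionConfig {
          uint64_t purge_ts = 0;
          uint64_t purge_seqno = 0;
          // Requirement 3: merging two requests keeps the most aggressive
          // purge point of the pair.
          void merge(const CompactionConfig& other) {
              purge_ts = std::max(purge_ts, other.purge_ts);
              purge_seqno = std::max(purge_seqno, other.purge_seqno);
          }
      };

      class CompactionScheduler {
      public:
          explicit CompactionScheduler(std::size_t limit)
              : concurrencyLimit(limit) {}

          // Requirements 2 & 3: schedule (or re-schedule) compaction of one
          // vBucket. An existing pending request is merged, and the delay is
          // reset to 5s so further collection drops can coalesce into it.
          void schedule(Vbid vb, const CompactionConfig& config) {
              std::lock_guard<std::mutex> lh(lock);
              auto [it, isNew] = pending.try_emplace(vb, config);
              if (!isNew) {
                  it->second.merge(config);
              }
              // A real scheduler would use runAt to decide when the task fires.
              runAt[vb] = std::chrono::steady_clock::now() + collectionsDropDelay;
          }

          // Requirement 4: a task may only transition to running if the
          // per-bucket limit has not been reached; otherwise the caller must
          // snooze the task and rely on being woken later.
          bool tryStart(Vbid vb) {
              std::lock_guard<std::mutex> lh(lock);
              if (running >= concurrencyLimit) {
                  return false;
              }
              pending.erase(vb);
              ++running;
              return true;
          }

          // Requirement 4 (second half): when a task finishes, wake one
          // snoozed task so the pending queue keeps draining.
          void finished() {
              std::lock_guard<std::mutex> lh(lock);
              --running;
              // Real code would wake() the next snoozed Compaction task here.
          }

      private:
          const std::chrono::seconds collectionsDropDelay{5};
          const std::size_t concurrencyLimit;
          std::size_t running = 0;
          std::mutex lock;
          std::map<Vbid, CompactionConfig> pending;
          std::map<Vbid, std::chrono::steady_clock::time_point> runAt;
      };

      The invariant requirement 4 depends on is that every task which fails tryStart() is eventually woken by a finishing task; Problem (2) below describes a path where that pairing is lost.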

      Problem(s):

      1. Prior to https://review.couchbase.org/c/kv_engine/+/170082 (MB-49512: Obey concurrent compaction limit when rescheduling), we did not obey the concurrency limit when re-scheduling Compaction tasks - existing compaction tasks would simply be scheduled with the specified delay.
        • This could result in more compactions running than desired, consuming too many AuxIO threads and potentially taking IO resource from Flushers. The saturation of AuxIO threads contributed to the linked bug - bucket delete was very slow.
        • Note this was technically racy - we could have an existing Compaction task already running, but the re-schedule specified say a 5s delay. That would result in the running Compaction task being snooze()'d for 5s, which would mean the next run (if needed) would be delayed by 5s. However this was essentially benign, given the delay was only short.
      2. After fixing (1), we "should" limit concurrency; however we are still racy, and the race above could now result in a task being snoozed forever (due to hitting the concurrency limit) - see the sketch below.
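      A sketch of the snooze-forever interaction in (2), reusing the hypothetical names from the sketch above (again, illustrative only, not the actual kv_engine code):

      #include <chrono>
      #include <cstddef>

      // Hypothetical stand-in for an ExecutorPool-style task: snooze(d)
      // re-arms the task to wake after d, and an "infinite" snooze only
      // ends if some other actor explicitly wakes the task.
      struct CompactionTask {
          void snooze(std::chrono::duration<double> d) { wakeAfter = d; }
          std::chrono::duration<double> wakeAfter{0};
      };

      // After the MB-49512 patch, re-scheduling checked the concurrency
      // limit, but without knowing whether `task` was itself one of the
      // currently-running tasks.
      void reschedule(CompactionTask& task,
                      std::chrono::seconds delay,
                      std::size_t running,
                      std::size_t limit) {
          if (running >= limit) {
              // Over the limit: snooze "forever" and rely on a finishing
              // task to wake us. But if `task` is already running (the race
              // from (1)), it counts against its own limit check; it will
              // complete its current run, go to sleep, and nothing ever
              // wakes it - compaction never runs again for that vBucket
              // (MB-50941).
              task.snooze(std::chrono::duration<double>::max());
          } else {
              // Under the limit: run after the requested delay (typically
              // the 5s collections-drop delay).
              task.snooze(delay);
          }
      }

      Reverting the patch restores the pre-MB-49512 behaviour of always snoozing for the requested delay, trading the forever-sleep for the original over-concurrency.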

            People

              Balakumaran.Gopal Balakumaran Gopal
              drigby Dave Rigby (Inactive)
              Votes:
              0 Vote for this issue
              Watchers:
              4 Start watching this issue

Dates

    • Created:
    • Updated:
    • Resolved:
