Couchbase Server / MB-37350

Optimise indexing of incremental workloads for insert heavy scenarios


Details

    Description

      For insert-heavy workloads, if the KV engine does not indicate whether a mutation is an insert or an update (from the consumer's perspective), the indexer has to do a back-index disk fetch, resulting in a significant slow-down.

      The goal of this improvement is to study whether an optimisation can be done on the indexer side, such as using a Counting Bloom filter or a Cuckoo filter, so that the indexer can distinguish between an insert and an update mutation. This information can help avoid the back-index disk fetch. However, the optimisation comes at the cost of a larger memory footprint and more CPU cycles, so the performance impact has to be studied.

      Tagging storage as well because this improvement can be done either at the GSI layer (e.g. at MutationStreamReader) or at the storage layer (e.g. slice level).
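      To make the idea concrete, below is a minimal sketch (not the actual indexer implementation) of how a counting Bloom filter could classify mutations: a key the filter has definitely never seen must be an insert, so the back-index fetch can be skipped, while a possible match (which may be a false positive) falls back to the fetch. All names (countingBloom, classify, etc.) are hypothetical; counters are used instead of plain bits so entries could later be decremented on deletion.

      ```go
      package main

      import (
          "fmt"
          "hash/fnv"
      )

      // countingBloom is a hypothetical sketch of a counting Bloom filter.
      // Counters (rather than bits) allow decrementing when a key is deleted.
      type countingBloom struct {
          counters []uint8
          k        int // number of hash functions
      }

      func newCountingBloom(size, k int) *countingBloom {
          return &countingBloom{counters: make([]uint8, size), k: k}
      }

      // indexes derives k slot indexes from one 64-bit FNV hash
      // via double hashing: h1 + i*h2.
      func (cb *countingBloom) indexes(key string) []int {
          h := fnv.New64a()
          h.Write([]byte(key))
          sum := h.Sum64()
          h1 := uint32(sum)
          h2 := uint32(sum>>32) | 1 // force odd so the stride cycles
          idx := make([]int, cb.k)
          for i := 0; i < cb.k; i++ {
              idx[i] = int((h1 + uint32(i)*h2) % uint32(len(cb.counters)))
          }
          return idx
      }

      func (cb *countingBloom) Add(key string) {
          for _, i := range cb.indexes(key) {
              if cb.counters[i] < 255 { // saturate to avoid overflow
                  cb.counters[i]++
              }
          }
      }

      // MayContain reports whether key was possibly added before.
      // A false result means "definitely never seen": no false negatives.
      func (cb *countingBloom) MayContain(key string) bool {
          for _, i := range cb.indexes(key) {
              if cb.counters[i] == 0 {
                  return false
              }
          }
          return true
      }

      // classify decides insert vs update for an incoming mutation key.
      // "insert" is definite and skips the back-index disk fetch;
      // "maybe-update" may be a false positive, so the fetch still runs.
      func classify(cb *countingBloom, key string) string {
          if !cb.MayContain(key) {
              cb.Add(key)
              return "insert"
          }
          return "maybe-update"
      }

      func main() {
          cb := newCountingBloom(1<<16, 4)
          fmt.Println(classify(cb, "doc-1")) // first sighting: insert
          fmt.Println(classify(cb, "doc-1")) // seen before: maybe-update
          fmt.Println(classify(cb, "doc-2")) // fresh key: insert
      }
      ```

      The trade-off the ticket raises is visible here: the filter costs memory (the counters array) and CPU (k hash probes per mutation), and false positives still incur the disk fetch, so only the definite-insert path saves work.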





            People

              akhil.mundroy Akhil Mundroy
              varun.velamuri Varun Velamuri


