Couchbase Server / MB-51242

cbc-pillowfight should have an option to set the time period over which the number of operations is limited


Details

    • Type: Improvement
    • Resolution: Unresolved
    • Priority: Major
    • Affects Version/s: 7.1.0
    • Component/s: sdkdev
    • 1

    Description

      What is the suggested improvement?
      'cbc-pillowfight' should have an option to set the time period over which the --rate-limit flag limits the number of operations. Right now the period is fixed at 1 second, and I cannot think of a workaround that achieves the desired behaviour of, for example, limiting the workload to 30000 sets per 10 seconds.
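      For illustration only, an invocation with such an option might look like the following. The --rate-limit-period flag is hypothetical (it does not exist today and the name is made up here); the other flags are existing cbc-pillowfight options:

      # --rate-limit-period is a hypothetical new flag; everything else exists today
      cbc-pillowfight -U couchbase://localhost -u Administrator -P password \
          --rate-limit 30000 --rate-limit-period 10

      This would cap the workload at 30000 operations per 10-second window instead of 30000 operations every second.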

      Context
      The tools team is using 'cbc-pillowfight' to generate mutations for a set of documents to test PiTR backups. PiTR has a setting (granularity) which controls how often snapshots of the data are taken, so in our test we want exactly one mutation per document per granularity period (we want to avoid de-duplication inside the snapshots and to have fine-grained control over how many actual mutations happen over time).

      Since we can only limit the number of operations per second, we need to either account for de-duplication (if granularity is set to 3 seconds and we manage to update all documents once every second, each snapshot contains 3 mutations per document and, as a result of de-duplication, only the latest one is kept) or update only a subset of the documents every second (which is difficult to orchestrate on our end).

      Workaround
      If there is a workaround with the existing flags, please let me know. As an example, what we want is:

      Do N mutations for the same set of N documents every M seconds (where N and M are configurable variables)
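      Expressed as a rough shell sketch, the behaviour we are after is roughly the loop below. This is purely illustrative rather than a workaround we are happy with: it assumes that a single --num-cycles 1 pass with -B equal to the item count touches every document about once, and it pays connection setup cost on every period.

      N=15000   # number of documents (and mutations per period)
      M=10      # period in seconds
      while true; do
          start=$(date +%s)
          cbc-pillowfight -U localhost -u Administrator -P asdasd \
              -B "$N" -I "$N" --num-cycles 1 -m 128 -M 128 -r 100 -R
          elapsed=$(( $(date +%s) - start ))
          sleep $(( M > elapsed ? M - elapsed : 0 ))
      done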
      

      As a practical toy example, we currently run

      cbc-pillowfight -U localhost -u Administrator -P asdasd -B 15000 -I 15000 --num-cycles 60 --rate-limit 15000 -m 128 -M 128 -r 100 -R
      

      to get 15000 mutations for 15000 documents for 30 seconds with granularity set to 3 seconds (so we are doing 2 sets for every granularity period instead of the desired 1).
      The reason this is an issue is that sustaining 15000 sets every second is not possible on some of the testing machines we are using.
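      With the requested option (again using the made-up --rate-limit-period name purely for illustration), the same workload could instead be capped per granularity window, for example:

      # --rate-limit-period is hypothetical; the rest matches the command above
      cbc-pillowfight -U localhost -u Administrator -P asdasd -B 15000 -I 15000 \
          --num-cycles 60 --rate-limit 15000 --rate-limit-period 3 -m 128 -M 128 -r 100 -R

      That would allow 15000 operations per 3-second window (an average of 5000 operations/second rather than a required 15000 every second), while still giving roughly one mutation per document per snapshot.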

      Attachments


        Activity

          People

            Assignee: Sergey Avseyev (avsej)
            Reporter: Maksimiljans Januska (maks.januska)
            Votes: 0
            Watchers: 1


              Gerrit Reviews

                There are no open Gerrit changes
