Couchbase Server · MB-44539

Investigate bucket/collection differences in compaction tests


Details

    • KV-Engine 2021-March

    Description

      In our compaction time tests we see differences when comparing the default collection (bucket) case to 1 scope 1 collection, 1 scope 1000 collections, and 1000 scopes 1000 collections.

      Showfast page for all the tests: 

      http://showfast.sc.couchbase.com/#/timeline/Linux/kv/compact/all

      Note: the default collection (bucket) test uses Python SDK 2 and the rest of the tests use Python SDK 3. SDK 3 makes a higher number of connections per client.

       

      default vs. 1 scope 1 collection:

      graph comparison - http://cbmonitor.sc.couchbase.com/reports/html/?snapshot=athena_700-4502_compact_e699&label=default&snapshot=athena_700-4502_compact_cd05&label=1s1c

      1 - mem_used and ep_meta_data_memory in the 1 collection case start higher than default and trend lower, whereas default trends higher throughout the test

      2 - vb_active_resident_items_ratio is 25% higher in 1 collection case

      3 - vb_replica_resident_items_ratio starts higher and trends lower in 1 collection case, whereas default trends higher

      4 - avg_disk_commit_time, avg_disk_update_time, vb_avg_total_queue_age are higher for all percentiles in 1 collection case

      5 - couch_docs_actual_disk_size and couch_docs_fragmentation start 25% higher in 1 collection case

      6 - memcached_rss is 15% higher in 1 collection case

      7 - Cached (memory in pagecache) is 5% lower in 1 collection case
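The percentage deltas quoted above (and in the comparisons below) can be recomputed from exported stat series with a small helper; a minimal sketch, using illustrative numbers rather than the measured values from these runs:

```python
def pct_diff(baseline: float, variant: float) -> float:
    """Percent difference of a variant run's stat relative to the baseline (default bucket) run."""
    return (variant - baseline) / baseline * 100.0

# Illustrative values only, not the actual cbmonitor data.
default_run = {"vb_active_resident_items_ratio": 20.0, "memcached_rss": 10.0}
one_collection = {"vb_active_resident_items_ratio": 25.0, "memcached_rss": 11.5}

for stat in default_run:
    delta = pct_diff(default_run[stat], one_collection[stat])
    print(f"{stat}: {delta:+.0f}%")  # e.g. vb_active_resident_items_ratio: +25%
```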

       

      default vs 1 scope 1000 collections:

      graph comparison - http://cbmonitor.sc.couchbase.com/reports/html/?snapshot=athena_700-4502_compact_e699&label=default&snapshot=athena_700-4502_compact_a7b7&label=1s1000c

      1 - vb_active_resident_items_ratio is 100% higher in 1 scope 1000 collection case

      2 - vb_replica_resident_items_ratio is less than 1% (near 0%) in 1 scope 1000 collection case, whereas default is around 5%

      3 - couch_docs_actual_disk_size is 40% higher in 1 scope 1000 collection case

      4 - couch_docs_fragmentation is 50% higher in 1 scope 1000 collection case

      5 - beam.smp_rss is 250% higher in 1 scope 1000 collection case

      6 - beam.smp_cpu is 600% higher in 1 scope 1000 collection case

      7 - memcached_cpu is 25% higher in 1 scope 1000 collection case

       

      default vs 1000 scopes 1000 collections:

      graph comparison - http://cbmonitor.sc.couchbase.com/reports/html/?snapshot=athena_700-4502_compact_e699&label=default&snapshot=athena_700-4502_compact_0315&label=1000s1000c

       

      1 scope 1000 collections vs. 1000 scopes 1000 collections:

      graph comparison - http://cbmonitor.sc.couchbase.com/reports/html/?snapshot=athena_700-4502_compact_a7b7&label=1s1000c&snapshot=athena_700-4502_compact_0315&label=1000s1000c

      1 - avg_disk_update_time is higher for all percentiles in 1000 scope 1000 collection case.

      2 - beam.smp_rss is 10% higher in 1000 scope 1000 collection case

      3 - beam.smp_cpu is 70% higher in 1000 scope 1000 collection case

       

       

       

      Attachments

        1. default_4_runs_active_rr.png (52 kB)
        2. default_4_runs_memused.png (58 kB)
        3. default_4_runs_replica_rr.png (57 kB)
        4. default_6_runs_active_rr.png (53 kB)
        5. default_6_runs_mem_used.png (57 kB)
        6. default_6_runs_replica_rr.png (57 kB)
        7. default_all_nodes_memory.png (261 kB)
        8. s1c1_4_runs_active_rr.png (52 kB)
        9. s1c1_4_runs_active_rr-correct-graph.png (54 kB)
        10. s1c1_4_runs_mem_used.png (57 kB)
        11. s1c1_4_runs_mem_used-correct-graph.png (61 kB)
        12. s1c1_4_runs_replica_rr.png (56 kB)
        13. s1c1_4_runs_replica_rr-correct-graph.png (59 kB)
        14. s1c1_all_nodes_memory.png (239 kB)
        15. s1c1_vs_defaul_replica_rr.png (43 kB)
        16. s1c1_vs_default_5_runs_active_rr.png (48 kB)
        17. s1c1_vs_default_5_runs_mem_used.png (55 kB)
        18. s1c1_vs_default_5_runs_replica_rr.png (57 kB)
        19. s1c1_vs_default_active_rr.png (47 kB)
        20. s1c1_vs_default_mem_used.png (47 kB)
        21. Screenshot 2021-02-24 at 16.38.30.png (76 kB)
        22. Screenshot 2021-02-24 at 16.40.50.png (129 kB)
        23. Screenshot 2021-02-24 at 16.43.17.png (163 kB)



            People

              Assignee: korrigan.clark Korrigan Clark (Inactive)
              Reporter: korrigan.clark Korrigan Clark (Inactive)
              Votes: 0
              Watchers: 6

