Details
- Bug
- Resolution: Not a Bug
- Critical
- Cheshire-Cat
- Enterprise Edition 7.0.0 build 4532 ‧ IPv4
- Untriaged
- Centos 64-bit
- 1
- No
Description
Script to Repro
guides/gradlew --refresh-dependencies testrunner -P jython=/opt/jython/bin/jython -P 'args=-i /tmp/durability_volume.ini -t volumetests.Collections.volume.test_volume_taf,nodes_init=6,bucket_spec=volume_templates.buckets_scalable_stats_for_volume_test,iterations=1,rerun=False,get-cbcollect-info=True,skip_validations=False,services_for_rebalance_in=kv:index,services_init=kv-n1ql-n1ql-kv:index-kv:index-kv:index,number_of_indexes=500,scrape_interval=5,quota_percent=80'
We were running the scalable stats volume test, which has:
- 30 buckets
- ~1000 collections
- ~1000 scopes
- ~1500 indexes
- 15 XDCR relationships
and runs with a scrape interval of 1s.
From the config we can see that storage.tsdb.retention.size is set to 1024MB. However, I have noticed the stats data on disk growing to as much as 3.2GB.
couchba+ 74264 74258 34 Feb25 ? 02:35:08 /opt/couchbase/bin/prometheus --config.file /opt/couchbase/var/lib/couchbase/config/prometheus.yml --web.enable-admin-api --web.enable-lifecycle --storage.tsdb.retention.size 1024MB --storage.tsdb.retention.time 365d --web.listen-address 0.0.0.0:9123 --storage.tsdb.max-block-duration 25h --storage.tsdb.path /opt/couchbase/var/lib/couchbase/stats_data --log.level debug --query.max-samples 200000 --storage.tsdb.no-lockfile
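To cross-check the on-disk usage against the configured retention.size, the stats_data directory from the process listing above can be summed directly. This is an illustrative sketch, not part of the test run; the path is taken from the prometheus command line above.

```python
import os

def dir_size_bytes(path: str) -> int:
    """Recursively sum the size of all regular files under `path`."""
    total = 0
    for root, _dirs, files in os.walk(path):
        for name in files:
            fp = os.path.join(root, name)
            # Skip anything that is not a regular file (e.g. broken symlinks).
            if os.path.isfile(fp):
                total += os.path.getsize(fp)
    return total

if __name__ == "__main__":
    # --storage.tsdb.path from the process listing above
    stats_dir = "/opt/couchbase/var/lib/couchbase/stats_data"
    size_gib = dir_size_bytes(stats_dir) / (1024 ** 3)
    print(f"{stats_dir}: {size_gib:.2f} GiB")
```

Note that retention.size is only enforced when Prometheus runs its periodic block cleanup, so the directory can temporarily exceed the configured limit between cleanup cycles.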
This looks like https://github.com/prometheus/prometheus/issues/5771, but that bug appears to have been fixed a few years ago.
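Since this prometheus instance listens on 0.0.0.0:9123, its TSDB status endpoint can also be queried to see what the server itself reports. This is a hedged sketch: the /api/v1/status/tsdb endpoint and the headStats.numSeries field come from upstream Prometheus documentation and may not be present in the exact version bundled here.

```python
import json
import urllib.request

def fetch_tsdb_status(base_url: str) -> dict:
    """Fetch the TSDB status payload from a Prometheus server."""
    with urllib.request.urlopen(f"{base_url}/api/v1/status/tsdb") as resp:
        return json.load(resp)

def head_series_count(status: dict) -> int:
    """Extract the in-memory head series count from the status payload
    (field layout assumed from recent upstream Prometheus docs)."""
    return int(status["data"]["headStats"]["numSeries"])

if __name__ == "__main__":
    # Port taken from --web.listen-address in the process listing above.
    status = fetch_tsdb_status("http://localhost:9123")
    print("head series:", head_series_count(status))
```

A very high head series count would be consistent with the cardinality implied by ~1000 collections and ~1500 indexes above, which in turn drives TSDB growth.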