During investigation of slow background fetches in a customer environment, a correlation was found with compaction runs. However, our log files currently give limited insight into what compaction is doing - for example, all we see is the start time and end time:
It would assist in assessing the impact of compaction if additional details were logged, for example:
- Size of the input file
- Fragmentation ratio of the input file (or equivalently, the size of useful data in the input file).
- Size of the output file
- Number of documents copied into the new file
- Number of documents discarded due to expiration during compaction
- Number of tombstones purged during compaction
- Sequence number up to which items were purged.
If this information is already recorded by couchstore then we just need to add it to our logging in kv_engine / couch_compact; if not, further work may be required in couchstore to expose it.
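As a rough sketch of the proposed logging, the fields above could be gathered into a stats structure and rendered as a single completion message. All names here (`CompactionStats`, `formatCompactionLog`, the log field names) are hypothetical - nothing in kv_engine or couchstore currently provides them - and the fragmentation ratio is computed as 1 - (useful data / file size):

```cpp
#include <cstdint>
#include <sstream>
#include <string>

// Hypothetical container for the per-compaction statistics proposed above.
struct CompactionStats {
    uint64_t inputFileSize;    // bytes on disk before compaction
    uint64_t inputLiveSize;    // bytes of useful (live) data in the input file
    uint64_t outputFileSize;   // bytes on disk after compaction
    uint64_t docsCopied;       // documents copied into the new file
    uint64_t docsExpired;      // documents discarded as expired during compaction
    uint64_t tombstonesPurged; // tombstones purged during compaction
    uint64_t purgeSeqno;       // sequence number purged up to
};

// Render the stats as one log line suitable for the compaction-complete
// message. Fragmentation is derived as 1 - (live size / total size).
std::string formatCompactionLog(const CompactionStats& s) {
    const double frag = (s.inputFileSize == 0)
        ? 0.0
        : 1.0 - static_cast<double>(s.inputLiveSize) /
                static_cast<double>(s.inputFileSize);
    std::ostringstream os;
    os << "Compaction complete:"
       << " input_size=" << s.inputFileSize
       << " input_live_size=" << s.inputLiveSize
       << " fragmentation=" << frag
       << " output_size=" << s.outputFileSize
       << " docs_copied=" << s.docsCopied
       << " docs_expired=" << s.docsExpired
       << " tombstones_purged=" << s.tombstonesPurged
       << " purge_seqno=" << s.purgeSeqno;
    return os.str();
}
```

A single structured line like this keeps the existing one-line-per-event log style while making it easy to grep for, say, high fragmentation or large purge counts across compaction runs.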