Details
- Type: Bug
- Resolution: Fixed
- Priority: Major
- Affects Version: 6.5.0
- Triage: Untriaged
- Is this a Regression?: Unknown
Description
The StreamsMap in the DcpProducer is a custom AtomicUnorderedMap implementation. This is essentially a wrapper around a std::unordered_map that guards every operation with a cb::RWLock. This RWLock is a single point of contention for every DCP item we send (we acquire the read lock in DcpProducer::getNextItem()) and for every front-end operation that results in a new seqno (set/replace/delete etc. via DcpProducer::notifySeqnoAvailable(...)). This introduces a cache-contention issue, as the RWLock implementation must write to the readers field of the underlying read-write lock to acquire it, and again to release it, so even purely concurrent readers repeatedly dirty the same cache line.
https://issues.couchbase.com/browse/MB-32107?focusedCommentId=316320&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-316320
https://issues.couchbase.com/browse/MB-32107?focusedCommentId=317274&page=com.atlassian.jira.plugin.system.issuetabpanels%3Acomment-tabpanel#comment-317274
This could be fixed in a couple of different ways: by creating a sparse map and using the StreamContainer lock, which would bloat the size of the object, or by creating a sharded map with multiple RWLocks, which would be non-trivial to implement. Folly has a ConcurrentHashMap class that shards the map internally and uses hazard pointers to make reads completely lock-free. The sharding alone should significantly reduce cache contention on a producer.
As folly would be useful in many other places in kv-engine, and there is no trivial alternative solution to this performance issue, we should fix this by using folly's ConcurrentHashMap.
Issue Links
- has to be done after: MB-30040 Introduce Facebook Folly library (Closed)