Details
- Bug
- Resolution: Unresolved
- Major
- 7.6.0, 7.0.0, 7.0.1, 7.0.2, 7.0.3, 7.0.4, 7.1.4, 7.0.5, 7.1.0, 7.1.1, 7.1.2, 7.2.0, 7.1.3, 7.2.1, 7.1.5, 7.2.4, 7.0.6, 7.2.2, 7.1.6, 7.2.3, 7.2.5, 7.6.1
- Untriaged
- 0
- Unknown
- March-June 24
Description
Running many DCP stream requests which all required backfill showed observable performance issues: the command itself (currently a frontend op) exceeded the 500ms slow-threshold (even logging up to 1s).
Reviewing the code, while it is not clear where the time went, it is clear that the std::mutex BackfillManager::lock is held across calls into ExecutorPool.
E.g. a simple example is:

    void BackfillManager::wakeUpTask() {
        std::lock_guard<std::mutex> lh(lock);
        if (managerTask) {
            ExecutorPool::get()->wake(managerTask->getId());
        }
    }
But others exist, for example in schedule, which is called heavily from the DCP stream-request path.
There is no evidence that the ExecutorPool code was slow to return, but best practice is to minimise the locking scope.
For example, the following is what we need (note std::unique_lock rather than std::lock_guard, since the lock must be released early via unlock()):

    void BackfillManager::wakeUpTask() {
        std::unique_lock<std::mutex> lh(lock);
        if (!managerTask) {
            return;
        }
        const auto id = managerTask->getId();
        lh.unlock();
        ExecutorPool::get()->wake(id);
    }
Attachments
Gerrit Reviews
For Gerrit Dashboard: MB-61627

| # | Subject | Branch | Project | Status | CR | V |
|---|---|---|---|---|---|---|
| 209208,3 | MB-61627: Reduce locking scope in BackfillManager | neo | kv_engine | Status: NEW | +2 | -1 |