This issue is simply how performance lands with the fair-scheduling fix of MB-18453.
As you may know, ep-engine has a multi-threaded tasking model. A fixed number of threads are created and assigned to one of four task types (reader/writer/nonio/auxio), and tasks can then be created and scheduled to run on them.
We’ve updated the tasking-model documentation in the ep-engine README.md, which has more detail.
With MB-18453 we addressed a problem where the scheduler’s “wake” function scheduled the task straight into the “readyQueue”. The “readyQueue” is the queue that running threads get work from (they pop it); however, the “readyQueue” is ordered by a task’s priority, so when a high-priority task is enqueued it goes ahead of any lower-priority tasks already waiting.
This was the trigger of the now well-known and commonly seen “NONIO task waiting” problem that has caused many rebalances to fail. In those instances, typically two DCP-associated tasks (ConnNotifier and Processor) get woken (via the broken wake function) as traffic arrives on a node. The DCP tasks are high-priority (Processor has the highest priority available) and jump ahead of other tasks. Every mutation landing on the node via DCP could cause these tasks to jump to the front of the queue, with the result that some low-priority tasks, critical to rebalance, are held in the queue for a long time. In fact they’ve been seen to be held up for hours.
The fix added in MB-18453 was to never enqueue tasks directly into the “readyQueue”; we always enqueue them into the “futureQueue”, which is ordered by the time the task should execute. The worker threads that drain the “readyQueue” only re-fill it when it is empty; that is when they look at the “futureQueue” and move over all tasks that are due to execute. Thus we never get into the starvation problem, as every task gets a fair go.
But…
In this fairer world DCP sometimes has to wait its turn, and thus the 95th percentile of observe latency has gone up; this performance change is more noticeable on low-core-count systems. On systems with more cores, if, say, the NONIO tasks are very busy (because DCP is running hard), we are able to drain the queues faster because the NONIO threads can all be on-CPU concurrently, and hence the impact of the fairer scheduling is less obvious.
I hope this makes sense, any questions welcome.
The following playbook is used to initialize test machines (typically running Ubuntu 16.04):
https://github.com/couchbase/perfrunner/blob/master/playbooks/clients.yml
Tools such as htop are also installed for debugging purposes.
"make build" creates a virtual environment and installs all required Python packages.
I believe the same process should work for "perfSanity" as well.