Description
When the orchestrator is being ejected from the cluster, the output of "GET /pools/default" can still contain that node for some time, even after GET /tasks already reports rebalance status=notRunning.
Scenario:
1. Create a 2-node cluster
2. Say node 1 is the orchestrator; rebalance it out by calling /controller/rebalance
3. Poll GET /tasks until the rebalance status becomes "notRunning"
4. Call GET /pools/default; the output can still contain the ejected node
This is not a big deal for a human, but when done programmatically it can break the internal logic of a script, effectively forcing users to implement two polls instead of one: first /tasks and then /pools/default, which is pretty annoying.
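To illustrate, here is a minimal sketch of the double-poll workaround in Python. The base URL, the absence of auth handling, and the exact JSON field names are assumptions for illustration: /tasks entries are assumed to carry "type" and "status" fields, and /pools/default a "nodes" list whose entries have a "hostname".

```python
import json
import time
import urllib.request

BASE_URL = "http://10.0.0.1:8091"  # assumed cluster address; auth omitted for brevity

def rebalance_finished(tasks):
    """True if GET /tasks reports no rebalance task still running."""
    return all(t.get("status") == "notRunning"
               for t in tasks if t.get("type") == "rebalance")

def node_ejected(pool, hostname):
    """True if `hostname` is no longer listed in GET /pools/default."""
    return hostname not in (n.get("hostname") for n in pool.get("nodes", []))

def wait_for_ejection(hostname, interval=1.0, timeout=300.0):
    """Poll /tasks, then /pools/default, until the node is really gone."""
    def fetch(path):
        with urllib.request.urlopen(BASE_URL + path) as resp:
            return json.load(resp)

    deadline = time.monotonic() + timeout
    while time.monotonic() < deadline:
        # Both conditions must hold: rebalance reported done AND node absent.
        if rebalance_finished(fetch("/tasks")) and \
           node_ejected(fetch("/pools/default"), hostname):
            return
        time.sleep(interval)
    raise TimeoutError("node %s still present after rebalance" % hostname)
```

If /pools/default were guaranteed to be consistent with /tasks, only the first poll would be needed; the second loop exists solely because of the lag described above.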
I am not sure what the main purpose of the /tasks API is, though. Please feel free to close this ticket if this is not a bug and there is another (consistent) way to determine rebalance completion programmatically.