The [ADDED] logging below was put into a source build to highlight this issue, in file eventing/supervisor/super_supervisor.go.

Normal pause:

2020-02-06T17:13:29.719-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 begin pausing process
2020-02-06T17:13:30.740-08:00 [Error] Producer::handleV8Consumer [ts111:0] Accept failed in main loop, err: accept unix /tmp/127.0.0.1:8091_0_1873727481.sock: use of closed network connection
2020-02-06T17:13:30.740-08:00 [Error] Producer::handleV8Consumer [ts111:0] Accept failed in feedback loop, err: accept unix /tmp/f_127.0.0.1:8091_0_1873727481.sock: use of closed network connection
2020-02-06T17:13:30.740-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 pausing done
2020-02-06T17:13:40.740-08:00 [Error] Supervisor::New worker_ts111_0: Service consumer_client => app: ts111 workerName: worker_ts111_0 tcpPort: /tmp/127.0.0.1:8091_0_1873727481.sock ospid: 0 failed to terminate in a timely manner

Normal resume (it is actually done via a deploy; note the 10 second delay between the final 2 messages above):

2020-02-06T17:13:57.028-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 begin deployment process
2020-02-06T17:14:12.616-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 deployment done

Pause then quick resume (the resume arrives within the 10 second message gap seen in the normal case at the top):

2020-02-06T17:15:08.982-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 begin pausing process
2020-02-06T17:15:11.003-08:00 [Error] Producer::handleV8Consumer [ts111:0] Accept failed in main loop, err: accept unix /tmp/127.0.0.1:8091_0_1873727481.sock: use of closed network connection
2020-02-06T17:15:11.003-08:00 [Error] Producer::handleV8Consumer [ts111:0] Accept failed in feedback loop, err: accept unix /tmp/f_127.0.0.1:8091_0_1873727481.sock: use of closed network connection
2020-02-06T17:15:14.484-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 begin deployment process
2020-02-06T17:15:21.003-08:00 [Error] Supervisor::New worker_ts111_0: Service consumer_client => app: ts111 workerName: worker_ts111_0 tcpPort: /tmp/127.0.0.1:8091_0_1873727481.sock ospid: 0 failed to terminate in a timely manner
2020-02-06T17:16:14.489-08:00 [Error] Supervisor::New ts111: Service consumer => app: ts111 name: worker_ts111_0 tcpPort: /tmp/127.0.0.1:8091_0_1873727481.sock ospid: 0 dcpEventProcessed: v8EventProcessed: failed to terminate in a timely manner
2020-02-06T17:16:24.489-08:00 [Error] Supervisor::New super_supervisor: Service Producer => function: ts111 tcpPort: failed to terminate in a timely manner
2020-02-06T17:16:30.157-08:00 [ADDED] JAS SuperSupervisor::SettingsChangeCallback [1] Function: ts111 deployment done

It seems this race can be solved by delaying the composite_status change from "pausing" to "paused" until after the "Supervisor::New worker_ts111_0: Service consumer_client => app: ..." message. The issue appears to be that func SettingsChangeCallback does not know about the cleanup/housekeeping that is still going on; the cleanup is not actually complete until we see messages like:

2020-02-06T17:13:40.741-08:00 [Info] Supervisor::Stop Stopping supervision tree, context: worker_ts111_0
2020-02-06T17:13:40.741-08:00 [Info] Consumer::Stop [worker_ts111_0:/tmp/127.0.0.1:8091_0_1873727481.sock:11423] Requested to stop supervisor for Eventing.Consumer. Exiting Consumer::Stop

Until then, the composite_status (or CompositeStatus) should stay in the "pausing" state.