Details
- Type: Bug
- Resolution: Won't Fix
- Priority: Major
- Affects Version: 4.6.2
- Triage: Triaged
- Regression: Unknown
Description
In a couple of customer cases, old DCP connections were not closed when replication was restarted, causing unnecessary memory consumption and performance degradation.
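The core of the fix tracked by the linked tickets is twofold: close the old TCP connection before (re)dialing, and put a write timeout on the socket so a dead peer cannot wedge the writer. The sketch below illustrates that pattern in plain Go; the `replicator` type, `restart` method, and the 30-second deadline are illustrative assumptions, not goxdcr's actual API, and `net.Pipe` stands in for a real memcached connection.

```go
package main

import (
	"fmt"
	"net"
	"time"
)

// replicator is a hypothetical stand-in for a DCP nozzle: it owns at most
// one upstream connection at a time.
type replicator struct {
	conn net.Conn
}

// restart closes any existing connection before dialing a new one, so a
// restarted replication cannot leak its old DCP connection.
func (r *replicator) restart(dial func() (net.Conn, error)) error {
	if r.conn != nil {
		r.conn.Close() // close the old connection first; error ignored on teardown
		r.conn = nil
	}
	c, err := dial()
	if err != nil {
		return err
	}
	// A write deadline keeps an unresponsive peer from blocking writes
	// forever, mirroring "Add timeout to tcp write operations" above.
	// 30s is an arbitrary illustrative value.
	c.SetWriteDeadline(time.Now().Add(30 * time.Second))
	r.conn = c
	return nil
}

func main() {
	// Demonstrate with an in-memory pipe instead of a real DCP server.
	old, _ := net.Pipe()
	r := &replicator{conn: old}
	err := r.restart(func() (net.Conn, error) {
		c, _ := net.Pipe()
		return c, nil
	})
	fmt.Println("restart ok:", err == nil)
	// The old connection is now closed: writes to it fail immediately
	// instead of holding memory and goroutines alive.
	_, werr := old.Write([]byte("x"))
	fmt.Println("old conn closed:", werr != nil)
}
```

The order matters: closing before dialing guarantees that even if the new dial fails, the stale connection is not left open.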
Issue Links
- depends on
  - MB-26808 [4.6.5 CLONE MB-26664] - close tcp connection first and do not close dcp streams when dcp nozzle stops (Closed)
  - MB-26810 [4.6.5 CLONE MB-26643] - send noop through mc.TransmitResponse() to allow write timeout (Closed)
  - MB-26811 [4.6.5 CLONE MB-26648] - Add exit path to doStreamClose for the case that downstream clients have exited (Closed)
  - MB-26812 [4.6.5 CLONE MB-26641] - Add timeout to tcp write operations in gomemcached (Closed)
  - MB-26813 [4.6.5 CLONE MB-26647] - Make stuckness check in dcp nozzle more aggressive (Closed)
  - MB-26819 [4.6.5 CLONE MB-26645] - Always stop parts when replication stops (Closed)
  - MB-26821 [4.6.5 CLONE MB-26642] - Enable tcp write timeout in dcp connections (Closed)
  - MB-26823 [4.6.5 CLONE MB-26644] - use separate locks on feed.vbstreams and feed.closed (Closed)
- is triggering
  - MB-27277 [5.0 CLONE] old dcp connection is not closed when replication restarts (Closed)