Details
Type: Bug
Resolution: Fixed
Priority: Critical
Versions: 1.6.5, 1.7.2
Security Level: Public
Description
Can't find the exact bug for this (other bugs also mention a moxi memory leak, and might be the same, but they don't mention haproxy).
Reproduced this customer-reported issue, and there's a quick config workaround that can slow the leak.
More info:
After spinning up a 20-node cluster with haproxy, valgrind, and a special debug build of moxi, using a configuration similar to XXX's, I was able to reproduce a significant memory leak in moxi. It occurs during topology changes, or whenever moxi merely thinks there's been a cluster topology change. Other customers probably never noticed, since topology changes are usually infrequent.
Additionally, XXX's use of haproxy in a roundrobin load-balancing configuration significantly exacerbated the bug/leak in moxi. (I recall Tim had another report of a moxi memory leak from another customer. Perhaps they're also using haproxy?)
Here's XXX's haproxy configuration...
-----------------
global
    log 127.0.0.1 local2
    chroot /var/lib/haproxy
    pidfile /var/run/haproxy.pid
    maxconn 4000
    user haproxy
    group haproxy
    daemon

defaults
    mode http
    log global
    option dontlognull
    option httpclose
    option httplog
    option forwardfor
    timeout connect 10000
    timeout client 300000
    timeout server 300000
    maxconn 60000
    retries 3
    stats enable
    stats uri /haproxy-status
    stats refresh 5s

frontend moxi *:8092
    default_backend moxi

backend moxi
    balance roundrobin
    server node1 10.80.68.152:8091 check
    server node2 10.80.68.178:8091 check
    server node3 10.80.68.146:8091 check
    server node4 10.80.68.166:8091 check
    server node5 10.80.68.154:8091 check
    server node6 10.80.68.158:8091 check
    server node7 10.80.68.156:8091 check
    server node8 10.80.68.160:8091 check
    server node9 10.80.68.162:8091 check
    server node10 10.80.68.144:8091 check
    server node11 10.80.68.170:8091 check
    server node12 10.80.68.174:8091 check
    server node13 10.80.68.164:8091 check
    server node14 10.80.68.168:8091 check
    server node15 10.80.68.150:8091 check
    server node16 10.80.68.148:8091 check
    server node17 10.80.68.176:8091 check
    server node18 10.80.68.172:8091 check
-----------------
The workaround to reduce the leak includes...
= change from haproxy's 'balance roundrobin' to some other load-balancing choice.
For example, when I used 'balance source' instead of 'balance roundrobin' in my haproxy configuration, the leak went away. (Caveat: it went away until I did an actual, real topology change.)
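As a sketch, the change is a single line in the backend section; everything else in XXX's configuration stays the same...
-----------------
backend moxi
    balance source
    # server node1 ... node18 lines unchanged from above
-----------------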
The underlying issue is that moxi does a simple string comparison to decide whether the topology has changed, and every node in a cluster gives a slightly different answer as to the topology. When moxi thinks the topology has changed, it tears down its data structures and dynamically reconfigures, and there's a leak somewhere in that path.
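To illustrate the failure mode, here's a minimal, self-contained C sketch. The JSON strings are hypothetical stand-ins (not moxi's actual code, and not real ns_server REST output); the point is only that a byte-wise comparison reports a "change" when two nodes serialize the same topology differently...
-----------------
#include <stdio.h>
#include <string.h>

int main(void) {
    /* Two REST responses describing the SAME topology, as serialized
       by two different cluster nodes (node ordering differs).
       Hypothetical strings for illustration. */
    const char *from_node1 = "{\"nodes\":[\"node1\",\"node2\"]}";
    const char *from_node2 = "{\"nodes\":[\"node2\",\"node1\"]}";

    /* A byte-wise comparison calls these "different", so moxi would
       tear down and rebuild its data structures -- the code path
       where the leak lives. */
    if (strcmp(from_node1, from_node2) != 0)
        printf("config changed -> dynamic reconfiguration\n");
    return 0;
}
-----------------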
Normally, moxi expects its HTTP/REST connection to be very long-lived. However, with haproxy in the middle, haproxy might decide to time out an HTTP connection that's still open but hasn't been doing anything (e.g., the HTTP/REST connection is idle because there's been no topology change). This leads to the second haproxy config workaround suggestion...
= increase haproxy's timeouts
XXX is currently using 5-minute timeouts (in milliseconds)...
timeout client 300000
timeout server 300000
So, every 5 minutes, haproxy times out the connection and closes it. moxi sees the closed HTTP/REST connection and reconnects. haproxy then chooses the next server node on its list (since haproxy is in 'balance roundrobin' configuration). That next server node returns a slightly different topology answer, and moxi (because it's doing a simple string comparison) inadvertently thinks the topology configuration has changed (when it actually hasn't), exposing the leak.
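For example (a sketch, assuming hour-long idle timeouts are acceptable for the other traffic flowing through this proxy), raising the client/server timeouts makes the reconnect-and-reconfigure cycle 12x less frequent...
-----------------
defaults
    timeout client 3600000   # 1 hour, up from 5 minutes (300000)
    timeout server 3600000
-----------------
Note this only slows the leak rather than fixing it; a real topology change will still trigger the leaky reconfiguration path.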
This was with haproxy 1.4.20.
For Gerrit Dashboard: MB-4896

# | Subject | Branch | Project | Status | CR | V
---|---|---|---|---|---|---
13939,1 | MB-4896 - Fix memory leak during dynamic reconfiguration. | master | moxi | MERGED | +2 | +1