Details
- Type: Bug
- Resolution: Duplicate
- Priority: Critical
- Affects Version: 5.5.0
- Triage: Untriaged
- Is this a Regression?: Yes
Description
Ubuntu longevity, build 5.5.0-2065: rebalance failures were observed. On investigation, the following memcached crash was found in diag.log:
2018-03-06T04:02:28.005-08:00, ns_log:0:info:message(ns_1@172.23.105.62) - Service 'memcached' exited with status 134. Restarting. Messages:
2018-03-06T04:02:27.745289Z CRITICAL /opt/couchbase/bin/../lib/libstdc++.so.6() [0x7f49f923a000+0x8ee81]
2018-03-06T04:02:27.745315Z CRITICAL /opt/couchbase/bin/../lib/libstdc++.so.6() [0x7f49f923a000+0x8fbbf]
2018-03-06T04:02:27.745323Z CRITICAL /opt/couchbase/bin/memcached() [0x400000+0x6cf1e]
2018-03-06T04:02:27.745327Z CRITICAL /opt/couchbase/bin/memcached() [0x400000+0x50971]
2018-03-06T04:02:27.745335Z CRITICAL /opt/couchbase/bin/../lib/libevent_core.so.2.1.8() [0x7f49fa056000+0x195ec]
2018-03-06T04:02:27.745349Z CRITICAL /opt/couchbase/bin/../lib/libevent_core.so.2.1.8(event_base_loop+0x46f) [0x7f49fa056000+0x1ca3f]
2018-03-06T04:02:27.745354Z CRITICAL /opt/couchbase/bin/memcached() [0x400000+0x4efd4]
2018-03-06T04:02:27.745365Z CRITICAL /opt/couchbase/bin/../lib/libplatform_so.so.0.1.0() [0x7f49faf52000+0x88a7]
2018-03-06T04:02:27.745373Z CRITICAL /lib/x86_64-linux-gnu/libpthread.so.0() [0x7f49fa910000+0x8182]
2018-03-06T04:02:27.745406Z CRITICAL /lib/x86_64-linux-gnu/libc.so.6(clone+0x6d) [0x7f49f8959000+0xfa47d]
2018-03-06T04:02:28.017-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Control connection to memcached on 'ns_1@172.23.105.62' disconnected: {badmatch,{error,closed}}
2018-03-06T04:02:28.213-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Control connection to memcached on 'ns_1@172.23.105.62' disconnected: lost_connection
2018-03-06T04:02:32.624-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "HISTORY" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.659-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "STOCK" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.881-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "CUSTOMER" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.905-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "NEW_ORDER" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.910-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "ORDERS" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.911-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "WAREHOUSE" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.938-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "DISTRICT" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:32.955-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "ITEM" loaded on node 'ns_1@172.23.105.62' in 3 seconds.
2018-03-06T04:02:33.150-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "ORDER_LINE" loaded on node 'ns_1@172.23.105.62' in 4 seconds.
2018-03-06T04:02:37.416-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Bucket "default" loaded on node 'ns_1@172.23.105.62' in 8 seconds.
2018-03-06T04:02:39.423-08:00, ns_rebalancer:2:info:message(ns_1@172.23.106.14) - Bad replicators after rebalance:
Missing = [{'ns_1@172.23.104.41','ns_1@172.23.105.62',0},
           {'ns_1@172.23.104.41','ns_1@172.23.105.62',1},
           {'ns_1@172.23.104.41','ns_1@172.23.105.62',2},
           {'ns_1@172.23.104.41','ns_1@172.23.105.62',3},
           {'ns_1@172.23.104.41','ns_1@172.23.105.62',4},
           ...
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',916},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',917},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',918},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',919},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',920},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',921},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',922},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',923},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',924},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',925},
           {'ns_1@172.23.99.253','ns_1@172.23.105.62',926}]
Extras = []
2018-03-06T04:02:39.439-08:00, ns_orchestrator:0:critical:message(ns_1@172.23.106.14) - Rebalance exited with reason {child_died,bad_replicas}
2018-03-06T04:02:39.730-08:00, ns_memcached:0:info:message(ns_1@172.23.105.83) - Shutting down bucket "HISTORY" on 'ns_1@172.23.105.83' for deletion
2018-03-06T04:02:44.566-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Control connection to memcached on 'ns_1@172.23.105.62' disconnected: lost_connection (repeated 6 times)
2018-03-06T04:02:44.566-08:00, ns_memcached:0:info:message(ns_1@172.23.105.62) - Control connection to memcached on 'ns_1@172.23.105.62' disconnected: {error,closed} (repeated 1 times)
2018-03-06T04:04:55.242-08:00, ns_cluster:5:info:message(ns_1@172.23.106.14) - Failed to add node 172.23.105.83:8091 to cluster. Prepare join failed. Node is already part of cluster.
2018-03-06T04:05:07.045-08:00, ns_rebalancer:0:info:message(ns_1@172.23.106.14) - Starting failing over ['ns_1@172.23.106.213']
This was not observed in 5.5.0-1979.
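Each CRITICAL frame in the crash above is a raw module / load-base / offset triple, with a symbol name only where one is exported (e.g. event_base_loop, clone). As a minimal sketch (not part of the ticket; the regex encodes an assumption about this log's frame format), the pieces can be extracted for later symbolization against matching debug symbols:

```python
import re

# Matches frames like:
#   .../memcached() [0x400000+0x6cf1e]
#   .../libevent_core.so.2.1.8(event_base_loop+0x46f) [0x7f49fa056000+0x1ca3f]
FRAME_RE = re.compile(r'CRITICAL (\S+?)\((.*?)\) \[(0x[0-9a-f]+)\+(0x[0-9a-f]+)\]')

def parse_frame(line):
    """Return (module, symbol_or_empty, load_base, offset), or None if no frame."""
    m = FRAME_RE.search(line)
    if m is None:
        return None
    module, symbol, base, offset = m.groups()
    return module, symbol, int(base, 16), int(offset, 16)

line = ("2018-03-06T04:02:27.745349Z CRITICAL /opt/couchbase/bin/../lib/"
        "libevent_core.so.2.1.8(event_base_loop+0x46f) [0x7f49fa056000+0x1ca3f]")
print(parse_frame(line))
```

The offsets are relative to the module's load base, so (given the matching unstripped binaries) each frame can then be fed to a symbolizer such as addr2line.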
Issue Links
- duplicates MB-28453: memcached exits with status 134 and rebalance failures in centos longevity (Closed)