Details
- Type: Bug
- Resolution: Unresolved
- Priority: Major
Description
I can reproduce this in the Python integration tests. This is a 3-node cbdyncluster where all 3 nodes run kv and one node runs the rest of the services. The JSON I get back (which comes from a call to sub_respping_value) looks like this:
```
{'config_rev': 85,
 'id': '0x1c19160',
 'sdk': 'libcouchbase/3.0.0_beta3_28_g6bfb732d2c PYCBC/3.0.0b3.dev28+g6bfb732',
 'services': {
   'fts': [{'id': '0x1c69f10', 'latency_us': 1745, 'local': '172.23.120.155:50501', 'remote': '172.23.111.136:8094', 'status': 'ok'}],
   'kv': [{'id': '0x1c18e50', 'latency_us': 914, 'local': '172.23.120.155:45417', 'remote': '172.23.111.136:11210', 'scope': 'default', 'status': 'ok'},
          {'id': '0x1c6f060', 'latency_us': 15721, 'local': '172.23.120.155:38952', 'remote': '172.23.111.135:11210', 'scope': 'default', 'status': 'timeout'},
          {'latency_us': 15749, 'remote': '172.23.111.134:11210', 'scope': 'default', 'status': 'timeout'}],
   'n1ql': [{'id': '0x1c51440', 'latency_us': 14951, 'local': '172.23.120.155:55357', 'remote': '172.23.111.136:8093', 'status': 'ok'},
            {'id': '0x1c6b490', 'latency_us': 27227, 'local': '172.23.120.155:44537', 'remote': '172.23.111.136:8095', 'status': 'ok'}],
   'views': [{'id': '0x1c3c670', 'latency_us': 53900, 'local': '172.23.120.155:50331', 'remote': '172.23.111.134:8092', 'status': 'ok'},
             {'id': '0x1c689a0', 'latency_us': 58340, 'local': '172.23.120.155:45773', 'remote': '172.23.111.136:8092', 'status': 'ok'},
             {'id': '0x1c4fe90', 'latency_us': 59872, 'local': '172.23.120.155:58665', 'remote': '172.23.111.135:8092', 'status': 'ok'}]},
 'version': 1}
```
You can see the timeouts above. This is with the default timeouts, and the call returns quickly. AFAIK you can't even set a timeout for ping: there is no lcb_cmdping_timeout method.
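To make the failure easy to spot, here is a minimal sketch that walks a ping report shaped like the dict above and collects every endpoint that did not report 'ok'. The field names match the JSON in this report; `failing_endpoints` is an illustrative helper, not SDK API.

```python
def failing_endpoints(report):
    """Return (service, remote, status) for each endpoint not reporting 'ok'."""
    failures = []
    for service, endpoints in report.get('services', {}).items():
        for ep in endpoints:
            if ep.get('status') != 'ok':
                # 'remote' may be absent on some timeout entries, as in the
                # third kv endpoint above, so use .get() rather than indexing.
                failures.append((service, ep.get('remote'), ep.get('status')))
    return failures

# Trimmed-down version of the kv section from the report above:
report = {'services': {'kv': [
    {'remote': '172.23.111.136:11210', 'status': 'ok'},
    {'remote': '172.23.111.135:11210', 'status': 'timeout'}]}}
print(failing_endpoints(report))  # [('kv', '172.23.111.135:11210', 'timeout')]
```

Run against the full report above, this would flag the two kv timeouts and nothing else.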
This is a 6.5 cluster with developer preview enabled. Oddly, I cannot reproduce this locally on a vagrant cluster, so I'm reporting it without fully understanding why it only shows up in one environment. Perhaps I need to configure the vagrant cluster the same way, with kv on every node and everything else on just one node? I have not tried that yet.
Note that this makes a wait_until_ready call effectively impossible to implement.
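To illustrate why: a wait_until_ready built on ping would have to look roughly like the sketch below, polling until every endpoint reports 'ok' or a deadline passes. The `do_ping` callable and the function itself are hypothetical, not the SDK's API. If ping spuriously reports timeouts on a healthy cluster, this loop never converges.

```python
import time

def wait_until_ready(do_ping, timeout_s=10.0, interval_s=0.5):
    """Poll do_ping() until every endpoint reports 'ok' or the deadline passes.

    do_ping: callable returning a ping report dict like the one in this issue.
    Returns True if the cluster became ready, False on deadline expiry.
    """
    deadline = time.monotonic() + timeout_s
    while True:
        report = do_ping()
        statuses = [ep.get('status')
                    for eps in report.get('services', {}).values()
                    for ep in eps]
        if statuses and all(s == 'ok' for s in statuses):
            return True
        if time.monotonic() >= deadline:
            return False
        time.sleep(interval_s)

# Usage with a stub that becomes healthy on the second ping:
reports = iter([{'services': {'kv': [{'status': 'timeout'}]}},
                {'services': {'kv': [{'status': 'ok'}]}}])
print(wait_until_ready(lambda: next(reports), timeout_s=5, interval_s=0.01))  # True
```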