Details
Bug
Resolution: Fixed
Critical
5.0.0
None
Untriaged
Centos 64-bit
Unknown
Description
The ability to upsert an xattr key under a given name depends on which keys are already set in the xattrs.
Here are some tests that show it:
a) upsert 'integer' - OK
b) upsert 'start_end_extra', then 'integer' - OK
c) upsert 'integer_extra', then 'integer' - FAILED
def test_upsert_order(self):
    k = 'xattr'

    self.client.upsert(k, {})
    rv = self.client.mutate_in(k, SD.upsert('integer', 2, xattr=True))
    self.assertTrue(rv.success)

    self.client.delete(k)
    self.client.upsert(k, {})
    rv = self.client.mutate_in(k, SD.upsert('start_end_extra', 1, xattr=True))
    self.assertTrue(rv.success)
    rv = self.client.mutate_in(k, SD.upsert('integer', 2, xattr=True))
    self.assertTrue(rv.success)

    self.client.delete(k)
    self.client.upsert(k, {})
    rv = self.client.mutate_in(k, SD.upsert('integer_extra', 1, xattr=True))
    self.assertTrue(rv.success)
    rv = self.client.mutate_in(k, SD.upsert('integer', 2, xattr=True))  # FAILED
    self.assertTrue(rv.success)
Alternatively, upsert a set of different xattr keys into one document - the failures do not seem to depend on the names:
self.client.upsert(k, {})
ok = True
for key in ('start', 'integer', "in", "int", "double",
            "for", "try", "as", "while", "else", "end"):
    try:
        self.log.info("using key %s" % key)
        rv = self.client.mutate_in(k, SD.upsert(key, 1,
                                                xattr=True))
        self.assertTrue(rv.success)
        rv = self.client.lookup_in(k, SD.get(key, xattr=True))
        self.assertTrue(rv.exists(key))
        self.assertEqual(1, rv[key])
        self.log.info("successfully set xattr with key %s" % key)
    except Exception as e:
        ok = False
        self.log.info("unable to set xattr with key %s" % key)
        print e
self.assertTrue(ok, "unable to set xattr with some name. See logs above")  # FAILED
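Note that the loop above mixes language-keyword-looking names with ordinary ones, yet the keys that actually failed in the run below ('in', 'int', 'while') do not line up with Python's keyword list, which supports the point that the failures are not simply about reserved names. A quick stdlib check (illustrative only, not part of the test suite):

```python
import keyword

# Keys exercised by the loop in the test above.
tested = ['start', 'integer', 'in', 'int', 'double',
          'for', 'try', 'as', 'while', 'else', 'end']

# Keys that failed in the logged run below.
failed = {'in', 'int', 'while'}

# Which of the tested keys are actual Python keywords.
kw = sorted(k for k in tested if keyword.iskeyword(k))
assert kw == ['as', 'else', 'for', 'in', 'try', 'while']

# The failing set is not the keyword set: 'int' fails but is not a
# keyword, while 'for', 'try', 'as', 'else' are keywords and succeed.
assert 'int' in failed and not keyword.iskeyword('int')
assert keyword.iskeyword('for') and 'for' not in failed
```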
Logs:
./testrunner -i andrei.ini use_sdk_client=True,xattr=True,GROUP1=P1 -t subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest.test_check_spec_words,skip_cleanup=True

Test Input params:
{'cluster_name': 'andrei', 'conf_file': 'subdoc/py-subdoc-xattr-sdk.conf', 'num_nodes': 1, 'skip_cleanup': 'True', 'use_sdk_client': 'True', 'ini': 'andrei.ini', 'case_number': 1, 'GROUP1': 'P1', 'logs_folder': '/home/andrei/couchbase_src/couchbase/testrunner/logs/testrunner-17-Feb-06_19-32-47/test_1', 'xattr': 'True', 'spec': 'py-subdoc-xattr-sdk'}

Run before suite setup for subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest.test_check_spec_words
test_check_spec_words (subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest) ... 2017-02-06 19:32:47 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 172.23.106.88 with username:root password:couchbase ssh_key:
2017-02-06 19:32:50 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 172.23.106.88
2017-02-06 19:33:02 | INFO | MainProcess | test_thread | [rest_client.get_nodes_version] Node version in cluster 5.0.0-1710-enterprise
2017-02-06 19:33:03 | INFO | MainProcess | test_thread | [rest_client.get_nodes_versions] Node versions in cluster [u'5.0.0-1710-enterprise']
2017-02-06 19:33:03 | INFO | MainProcess | test_thread | [basetestcase.setUp] ============== basetestcase setup was started for test #1 test_check_spec_words==============
2017-02-06 19:33:05 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets [u'default'] on 172.23.106.88
2017-02-06 19:33:05 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket default ...
2017-02-06 19:33:07 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : default from 172.23.106.88
2017-02-06 19:33:07 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
2017-02-06 19:33:08 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 172.23.106.88 existing buckets : []
2017-02-06 19:33:10 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 172.23.106.88:8091
2017-02-06 19:33:10 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 172.23.106.88:8091 is running
2017-02-06 19:33:10 | WARNING | MainProcess | test_thread | [basetestcase.tearDown] CLEANUP WAS SKIPPED
Cluster instance shutdown with force
2017-02-06 19:33:11 | INFO | MainProcess | test_thread | [basetestcase.setUp] initializing cluster
2017-02-06 19:33:13 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:172.23.106.88 port:8091 ssh_username:root, nodes/self: {'ip': u'127.0.0.1', 'availableStorage': [], 'rest_username': '', 'id': u'ns_1@127.0.0.1', 'uptime': u'599808', 'mcdMemoryReserved': 3104, 'hostname': u'172.23.106.88:8091', 'storage': [<membase.api.rest_client.NodeDataStorage object at 0x7f787d638790>], 'moxi': 11211, 'port': u'8091', 'version': u'5.0.0-1710-enterprise', 'memcached': 11210, 'status': u'healthy', 'clusterCompatibility': 327680, 'curr_items': 0, 'services': [u'kv'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 3281203200, 'memoryTotal': 4069212160, 'memoryQuota': 2069, 'mcdMemoryAllocated': 3104, 'os': u'x86_64-unknown-linux-gnu', 'ports': []}
2017-02-06 19:33:13 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=2069
2017-02-06 19:33:13 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=forestdb
2017-02-06 19:33:14 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 172.23.106.88:8091:username=Administrator&password=password&port=8091
2017-02-06 19:33:15 | INFO | MainProcess | test_thread | [basetestcase.setUp] done initializing cluster
2017-02-06 19:33:17 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://172.23.106.88:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
2017-02-06 19:33:18 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.44 seconds to create bucket default
2017-02-06 19:33:18 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 0
2017-02-06 19:33:18 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 1
2017-02-06 19:33:18 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 2
2017-02-06 19:33:18 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 3
2017-02-06 19:33:18 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 4
2017-02-06 19:33:18 | WARNING | MainProcess | Cluster_Thread | [task.check] vbucket map not ready after try 5
2017-02-06 19:33:18 | INFO | MainProcess | test_thread | [basetestcase.setUp] ============== basetestcase setup was finished for test #1 test_check_spec_words ==============
0ms [I0] {25570} [INFO] (instance - L:401) Version=2.7.1_1_g8f2091b, Changeset=8f2091b56b89cda111d5359893d6903df9455229
0ms [I0] {25570} [INFO] (instance - L:402) Effective connection string: couchbase://172.23.106.88/default. Bucket=default
0ms [I0] {25570} [DEBUG] (instance - L:77) Adding host 172.23.106.88:8091 to initial HTTP bootstrap list
0ms [I0] {25570} [DEBUG] (instance - L:77) Adding host 172.23.106.88:11210 to initial CCCP bootstrap list
423ms [I0] {25570} [INFO] (instance - L:135) DNS SRV lookup failed: DNS/Hostname lookup failed
423ms [I0] {25570} [DEBUG] (confmon - L:83) Preparing providers (this may be called multiple times)
423ms [I0] {25570} [DEBUG] (confmon - L:90) Provider CCCP is ENABLED
423ms [I0] {25570} [DEBUG] (confmon - L:90) Provider HTTP is ENABLED
423ms [I0] {25570} [TRACE] (confmon - L:252) Start refresh requested
423ms [I0] {25570} [TRACE] (confmon - L:239) Current provider is CCCP
423ms [I0] {25570} [INFO] (cccp - L:144) Requesting connection to node 172.23.106.88:11210 for CCCP configuration
423ms [I0] {25570} [DEBUG] (lcbio_mgr - L:416) <172.23.106.88:11210> (HE=0x7f78740211e0) Creating new connection because none are available in the pool
423ms [I0] {25570} [DEBUG] (lcbio_mgr - L:321) <172.23.106.88:11210> (HE=0x7f78740211e0) Starting connection on I=0x7f7874038200
423ms [I0] {25570} [INFO] (connection - L:450) <172.23.106.88:11210> (SOCK=0x7f78740086c0) Starting. Timeout=2000000us
423ms [I0] {25570} [TRACE] (connection - L:267) <172.23.106.88:11210> (SOCK=0x7f78740086c0) Got event handler for new connection
423ms [I0] {25570} [TRACE] (connection - L:314) <172.23.106.88:11210> (SOCK=0x7f78740086c0) Scheduling asynchronous watch for socket.
642ms [I0] {25570} [TRACE] (connection - L:267) <172.23.106.88:11210> (SOCK=0x7f78740086c0) Got event handler for new connection
642ms [I0] {25570} [INFO] (connection - L:116) <172.23.106.88:11210> (SOCK=0x7f78740086c0) Connected
642ms [I0] {25570} [DEBUG] (connection - L:123) <172.23.106.88:11210> (SOCK=0x7f78740086c0) Successfuly set TCP_NODELAY
642ms [I0] {25570} [DEBUG] (lcbio_mgr - L:271) <172.23.106.88:11210> (HE=0x7f78740211e0) Received result for I=0x7f7874038200,C=0x7f78740086c0; E=0x0
642ms [I0] {25570} [DEBUG] (lcbio_mgr - L:223) <172.23.106.88:11210> (HE=0x7f78740211e0) Assigning R=0x7f7874049aa0 SOCKET=0x7f78740086c0
642ms [I0] {25570} [DEBUG] (ioctx - L:101) <172.23.106.88:11210> (CTX=0x7f787404fc70,unknown) Pairing with SOCK=0x7f78740086c0
1292ms [I0] {25570} [DEBUG] (negotiation - L:378) <172.23.106.88:11210> (SASLREQ=0x7f787404f560) Found feature 0x3 (TCP NODELAY)
1292ms [I0] {25570} [DEBUG] (ioctx - L:151) <172.23.106.88:11210> (CTX=0x7f787404fc70,sasl) Destroying. PND=0,ENT=1,SORC=1
1292ms [I0] {25570} [DEBUG] (ioctx - L:101) <172.23.106.88:11210> (CTX=0x7f787400cf30,unknown) Pairing with SOCK=0x7f78740086c0
1512ms [I0] {25570} [DEBUG] (ioctx - L:151) <172.23.106.88:11210> (CTX=0x7f787400cf30,bc_cccp) Destroying. PND=0,ENT=1,SORC=1
1512ms [I0] {25570} [INFO] (lcbio_mgr - L:491) <172.23.106.88:11210> (HE=0x7f78740211e0) Placing socket back into the pool. I=0x7f7874038200,C=0x7f78740086c0
1512ms [I0] {25570} [INFO] (confmon - L:153) Setting new configuration. Received via CCCP
1512ms [I0] {25570} [DEBUG] (bootstrap - L:56) Instance configured!
1512ms [I0] {25570} [DEBUG] (confmon - L:83) Preparing providers (this may be called multiple times)
1512ms [I0] {25570} [DEBUG] (confmon - L:90) Provider CCCP is ENABLED
1512ms [I0] {25570} [INFO] (lcbio_mgr - L:407) <172.23.106.88:11210> (HE=0x7f78740211e0) Found ready connection in pool. Reusing socket and not creating new connection
1512ms [I0] {25570} [DEBUG] (lcbio_mgr - L:223) <172.23.106.88:11210> (HE=0x7f78740211e0) Assigning R=0x7f7874004a80 SOCKET=0x7f78740086c0
1512ms [I0] {25570} [DEBUG] (ioctx - L:101) <172.23.106.88:11210> (CTX=0x7f787403bcc0,unknown) Pairing with SOCK=0x7f78740086c0
1512ms [I0] {25570} [DEBUG] (server - L:499) <172.23.106.88:11210> (SRV=0x7f78740220d0,IX=0) Setting initial timeout=2499ms
2017-02-06 19:33:21 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key start
2017-02-06 19:33:21 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key start
2017-02-06 19:33:21 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key integer
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key integer
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key in
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] unable to set xattr with key in
<RC=0x45[Existing document is not valid JSON], Subcommand failure, Results=1, C Source=(src/callbacks.c,394), OBJ=Spec<DICT_UPSERT, 'in', 262144, 1>>
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key int
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] unable to set xattr with key int
<RC=0x45[Existing document is not valid JSON], Subcommand failure, Results=1, C Source=(src/callbacks.c,394), OBJ=Spec<DICT_UPSERT, 'int', 262144, 1>>
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key double
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key double
2017-02-06 19:33:22 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key for
2017-02-06 19:33:23 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key for
2017-02-06 19:33:23 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key try
4013ms [I0] {25570} [DEBUG] (server - L:422) <172.23.106.88:11210> (SRV=0x7f78740220d0,IX=0) Scheduling next timeout for 2395 ms
2017-02-06 19:33:23 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key try
2017-02-06 19:33:23 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key as
2017-02-06 19:33:24 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key as
2017-02-06 19:33:24 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key while
2017-02-06 19:33:24 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] unable to set xattr with key while
False is not true
2017-02-06 19:33:24 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key else
2017-02-06 19:33:25 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key else
2017-02-06 19:33:25 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] using key end
2017-02-06 19:33:25 | INFO | MainProcess | test_thread | [subdoc_xattr_sdk.test_check_spec_words] successfully set xattr with key end
2017-02-06 19:33:25 | WARNING | MainProcess | test_thread | [basetestcase.tearDown] CLEANUP WAS SKIPPED
Cluster instance shutdown with force
FAIL
======================================================================
FAIL: test_check_spec_words (subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/subdoc/subdoc_xattr_sdk.py", line 510, in test_check_spec_words
    self.assertTrue(ok, "unable to set xattr with some name. See logs above")
AssertionError: unable to set xattr with some name. See logs above

----------------------------------------------------------------------
Ran 1 test in 38.837s

FAILED (failures=1)
test_upsert_order (subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest) ... summary so far suite subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest , pass 0 , fail 1
failures so far...
subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest.test_check_spec_words
testrunner logs, diags and results are available under /home/andrei/couchbase_src/couchbase/testrunner/logs/testrunner-17-Feb-06_19-32-47/test_1
Logs will be stored at /home/andrei/couchbase_src/couchbase/testrunner/logs/testrunner-17-Feb-06_19-32-47/test_2
./testrunner -i andrei.ini use_sdk_client=True,xattr=True,GROUP1=P1 -t subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest.test_upsert_order,skip_cleanup=True

Test Input params:
{'cluster_name': 'andrei', 'conf_file': 'subdoc/py-subdoc-xattr-sdk.conf', 'num_nodes': 1, 'skip_cleanup': 'True', 'use_sdk_client': 'True', 'ini': 'andrei.ini', 'case_number': 2, 'GROUP1': 'P1', 'logs_folder': '/home/andrei/couchbase_src/couchbase/testrunner/logs/testrunner-17-Feb-06_19-32-47/test_2', 'xattr': 'True', 'spec': 'py-subdoc-xattr-sdk'}

[2017-02-06 19:33:26,473] - [remote_util:186] INFO - connecting to 172.23.106.88 with username:root password:couchbase ssh_key:
[2017-02-06 19:33:29,100] - [remote_util:220] INFO - Connected to 172.23.106.88
[2017-02-06 19:33:41,166] - [rest_client:1768] INFO - Node version in cluster 5.0.0-1710-enterprise
[2017-02-06 19:33:42,044] - [rest_client:1777] INFO - Node versions in cluster [u'5.0.0-1710-enterprise']
[2017-02-06 19:33:42,044] - [basetestcase:167] INFO - ============== basetestcase setup was started for test #2 test_upsert_order==============
[2017-02-06 19:33:46,209] - [bucket_helper:138] INFO - deleting existing buckets [u'default'] on 172.23.106.88
[2017-02-06 19:33:46,210] - [bucket_helper:140] INFO - remove bucket default ...
[2017-02-06 19:33:48,203] - [bucket_helper:154] INFO - deleted bucket : default from 172.23.106.88
[2017-02-06 19:33:48,204] - [bucket_helper:230] INFO - waiting for bucket deletion to complete....
[2017-02-06 19:33:48,639] - [rest_client:134] INFO - node 172.23.106.88 existing buckets : []
[2017-02-06 19:33:50,831] - [cluster_helper:78] INFO - waiting for ns_server @ 172.23.106.88:8091
[2017-02-06 19:33:51,270] - [cluster_helper:80] INFO - ns_server @ 172.23.106.88:8091 is running
[2017-02-06 19:33:51,270] - [basetestcase:189] INFO - initializing cluster
[2017-02-06 19:33:52,372] - [task:117] INFO - server: ip:172.23.106.88 port:8091 ssh_username:root, nodes/self: {'ip': u'127.0.0.1', 'availableStorage': [], 'rest_username': '', 'id': u'ns_1@127.0.0.1', 'uptime': u'599848', 'mcdMemoryReserved': 3104, 'hostname': u'172.23.106.88:8091', 'storage': [<membase.api.rest_client.NodeDataStorage object at 0x7f787d638510>], 'moxi': 11211, 'port': u'8091', 'version': u'5.0.0-1710-enterprise', 'memcached': 11210, 'status': u'healthy', 'clusterCompatibility': 327680, 'curr_items': 0, 'services': [u'kv'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 3253776384, 'memoryTotal': 4069212160, 'memoryQuota': 2069, 'mcdMemoryAllocated': 3104, 'os': u'x86_64-unknown-linux-gnu', 'ports': []}
[2017-02-06 19:33:52,372] - [rest_client:891] INFO - pools/default params : memoryQuota=2069
[2017-02-06 19:33:52,815] - [rest_client:928] INFO - settings/indexes params : storageMode=forestdb
[2017-02-06 19:33:53,255] - [rest_client:807] INFO - settings/web params on 172.23.106.88:8091:username=Administrator&password=password&port=8091
[2017-02-06 19:33:55,448] - [basetestcase:209] INFO - done initializing cluster
[2017-02-06 19:33:57,764] - [rest_client:1942] INFO - http://172.23.106.88:8091/pools/default/buckets with param: bucketType=membase&evictionPolicy=valueOnly&threadsNumber=3&ramQuotaMB=2069&proxyPort=11211&authType=sasl&name=default&flushEnabled=1&replicaNumber=1&replicaIndex=1&saslPassword=None
[2017-02-06 19:33:58,207] - [rest_client:1964] INFO - 0.44 seconds to create bucket default
[2017-02-06 19:33:58,207] - [task:300] WARNING - vbucket map not ready after try 0
[2017-02-06 19:33:58,207] - [task:300] WARNING - vbucket map not ready after try 1
[2017-02-06 19:33:58,207] - [task:300] WARNING - vbucket map not ready after try 2
[2017-02-06 19:33:58,208] - [task:300] WARNING - vbucket map not ready after try 3
[2017-02-06 19:33:58,208] - [task:300] WARNING - vbucket map not ready after try 4
[2017-02-06 19:33:58,208] - [task:300] WARNING - vbucket map not ready after try 5
[2017-02-06 19:33:58,242] - [basetestcase:281] INFO - ============== basetestcase setup was finished for test #2 test_upsert_order ==============
39759ms [I1] {25641} [INFO] (instance - L:401) Version=2.7.1_1_g8f2091b, Changeset=8f2091b56b89cda111d5359893d6903df9455229
39759ms [I1] {25641} [INFO] (instance - L:402) Effective connection string: couchbase://172.23.106.88/default. Bucket=default
39759ms [I1] {25641} [DEBUG] (instance - L:77) Adding host 172.23.106.88:8091 to initial HTTP bootstrap list
39759ms [I1] {25641} [DEBUG] (instance - L:77) Adding host 172.23.106.88:11210 to initial CCCP bootstrap list
40176ms [I1] {25641} [INFO] (instance - L:135) DNS SRV lookup failed: DNS/Hostname lookup failed
40176ms [I1] {25641} [DEBUG] (confmon - L:83) Preparing providers (this may be called multiple times)
40176ms [I1] {25641} [DEBUG] (confmon - L:90) Provider CCCP is ENABLED
40176ms [I1] {25641} [DEBUG] (confmon - L:90) Provider HTTP is ENABLED
40176ms [I1] {25641} [TRACE] (confmon - L:252) Start refresh requested
40176ms [I1] {25641} [TRACE] (confmon - L:239) Current provider is CCCP
40176ms [I1] {25641} [INFO] (cccp - L:144) Requesting connection to node 172.23.106.88:11210 for CCCP configuration
40176ms [I1] {25641} [DEBUG] (lcbio_mgr - L:416) <172.23.106.88:11210> (HE=0x7f7874026d70) Creating new connection because none are available in the pool
40176ms [I1] {25641} [DEBUG] (lcbio_mgr - L:321) <172.23.106.88:11210> (HE=0x7f7874026d70) Starting connection on I=0x7f7874072fd0
40176ms [I1] {25641} [INFO] (connection - L:450) <172.23.106.88:11210> (SOCK=0x7f7874073140) Starting. Timeout=2000000us
40176ms [I1] {25641} [TRACE] (connection - L:267) <172.23.106.88:11210> (SOCK=0x7f7874073140) Got event handler for new connection
40176ms [I1] {25641} [TRACE] (connection - L:314) <172.23.106.88:11210> (SOCK=0x7f7874073140) Scheduling asynchronous watch for socket.
40393ms [I1] {25641} [TRACE] (connection - L:267) <172.23.106.88:11210> (SOCK=0x7f7874073140) Got event handler for new connection
40393ms [I1] {25641} [INFO] (connection - L:116) <172.23.106.88:11210> (SOCK=0x7f7874073140) Connected
40393ms [I1] {25641} [DEBUG] (connection - L:123) <172.23.106.88:11210> (SOCK=0x7f7874073140) Successfuly set TCP_NODELAY
40393ms [I1] {25641} [DEBUG] (lcbio_mgr - L:271) <172.23.106.88:11210> (HE=0x7f7874026d70) Received result for I=0x7f7874072fd0,C=0x7f7874073140; E=0x0
40393ms [I1] {25641} [DEBUG] (lcbio_mgr - L:223) <172.23.106.88:11210> (HE=0x7f7874026d70) Assigning R=0x7f787405ede0 SOCKET=0x7f7874073140
40393ms [I1] {25641} [DEBUG] (ioctx - L:101) <172.23.106.88:11210> (CTX=0x7f787405ef20,unknown) Pairing with SOCK=0x7f7874073140
41043ms [I1] {25641} [DEBUG] (negotiation - L:378) <172.23.106.88:11210> (SASLREQ=0x7f78740740c0) Found feature 0x3 (TCP NODELAY)
41043ms [I1] {25641} [DEBUG] (ioctx - L:151) <172.23.106.88:11210> (CTX=0x7f787405ef20,sasl) Destroying. PND=0,ENT=1,SORC=1
41043ms [I1] {25641} [DEBUG] (ioctx - L:101) <172.23.106.88:11210> (CTX=0x7f78740742a0,unknown) Pairing with SOCK=0x7f7874073140
41262ms [I1] {25641} [DEBUG] (ioctx - L:151) <172.23.106.88:11210> (CTX=0x7f78740742a0,bc_cccp) Destroying. PND=0,ENT=1,SORC=1
41262ms [I1] {25641} [INFO] (lcbio_mgr - L:491) <172.23.106.88:11210> (HE=0x7f7874026d70) Placing socket back into the pool. I=0x7f7874072fd0,C=0x7f7874073140
41263ms [I1] {25641} [INFO] (confmon - L:153) Setting new configuration. Received via CCCP
41263ms [I1] {25641} [DEBUG] (bootstrap - L:56) Instance configured!
41263ms [I1] {25641} [DEBUG] (confmon - L:83) Preparing providers (this may be called multiple times)
41263ms [I1] {25641} [DEBUG] (confmon - L:90) Provider CCCP is ENABLED
41263ms [I1] {25641} [INFO] (lcbio_mgr - L:407) <172.23.106.88:11210> (HE=0x7f7874026d70) Found ready connection in pool. Reusing socket and not creating new connection
41263ms [I1] {25641} [DEBUG] (lcbio_mgr - L:223) <172.23.106.88:11210> (HE=0x7f7874026d70) Assigning R=0x7f78740730f0 SOCKET=0x7f7874073140
41263ms [I1] {25641} [DEBUG] (ioctx - L:101) <172.23.106.88:11210> (CTX=0x7f787405eef0,unknown) Pairing with SOCK=0x7f7874073140
41263ms [I1] {25641} [DEBUG] (server - L:499) <172.23.106.88:11210> (SRV=0x7f78740b05b0,IX=0) Setting initial timeout=2499ms
ERROR
[2017-02-06 19:34:02,923] - [basetestcase:304] WARNING - CLEANUP WAS SKIPPED
Cluster instance shutdown with force

======================================================================
ERROR: test_upsert_order (subdoc.subdoc_xattr_sdk.SubdocXattrSdkTest)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/subdoc/subdoc_xattr_sdk.py", line 542, in test_upsert_order
    rv = self.client.mutate_in(k, SD.upsert('integer', 2, xattr=True))
  File "/usr/local/lib/python2.7/dist-packages/couchbase-2.2.0.dev2+g18f32fd-py2.7-linux-x86_64.egg/couchbase/bucket.py", line 783, in mutate_in
    return super(Bucket, self).mutate_in(key, specs, **kwargs)
_DocumentNotJsonError_0x45 (generated, catch DocumentNotJsonError): <RC=0x45[Existing document is not valid JSON], Subcommand failure, Results=1, C Source=(src/callbacks.c,394), OBJ=Spec<DICT_UPSERT, 'integer', 262144, 2>>
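Side note on the error itself: every failing call reports RC=0x45 "Existing document is not valid JSON", yet each test stores the document as an empty JSON object via self.client.upsert(k, {}) immediately before the failing mutation. A trivial stdlib check (illustrative only, not part of the test suite) confirms that such a body is well-formed JSON, which suggests the error code is misleading here:

```python
import json

# self.client.upsert(k, {}) stores the body as the JSON text "{}".
body = "{}"

# Parsing succeeds, so the stored document body itself is valid JSON.
assert json.loads(body) == {}

# A body produced by a plain (non-xattr) upsert of 'integer' would also
# be valid JSON, so the failing specs cannot be blamed on the body.
assert json.loads('{"integer": 2}') == {"integer": 2}
```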