vikass-MacBook-Pro:testrunner vikaschaudhary$ ./testrunner -i vikas-nodes.ini -t eventing.eventing_recovery.EventingRecovery.test_killing_eventing_consumer_when_eventing_is_processing_mutations,nodes_init=4,services_init=kv-eventing-index-n1ql,dataset=default,groups=simple,reset_services=True,skip_cleanup=True,doc-per-day=2
Global Test input params: {'cluster_name': 'vikas-nodes', 'ini': 'vikas-nodes.ini', 'num_nodes': 4}
Logs will be stored at /Users/vikaschaudhary/workspace/testrunner/logs/testrunner-19-Apr-04_11-22-09/test_1
./testrunner -i vikas-nodes.ini -p -t eventing.eventing_recovery.EventingRecovery.test_killing_eventing_consumer_when_eventing_is_processing_mutations,nodes_init=4,services_init=kv-eventing-index-n1ql,dataset=default,groups=simple,reset_services=True,skip_cleanup=True,doc-per-day=2
Test Input params: {'cluster_name': 'vikas-nodes', 'doc-per-day': '2', 'logs_folder': '/Users/vikaschaudhary/workspace/testrunner/logs/testrunner-19-Apr-04_11-22-09/test_1', 'reset_services': 'True', 'dataset': 'default', 'skip_cleanup': 'True', 'services_init': 'kv-eventing-index-n1ql', 'ini': 'vikas-nodes.ini', 'groups': 'simple', 'case_number': 1, 'num_nodes': 4, 'nodes_init': '4'}
Run before suite setup for eventing.eventing_recovery.EventingRecovery.test_killing_eventing_consumer_when_eventing_is_processing_mutations
test_killing_eventing_consumer_when_eventing_is_processing_mutations (eventing.eventing_recovery.EventingRecovery) ...
2019-04-04 11:22:09 | INFO | MainProcess | test_thread | [eventing_base.setUp] Starting Test: test_killing_eventing_consumer_when_eventing_is_processing_mutations
2019-04-04 11:22:09 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root
/usr/local/lib/python2.7/site-packages/paramiko/rsakey.py:119: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead. algorithm=hashes.SHA1(),
/usr/local/lib/python2.7/site-packages/paramiko/rsakey.py:99: CryptographyDeprecationWarning: signer and verifier have been deprecated. Please use sign and verify instead. algorithm=hashes.SHA1(),
2019-04-04 11:22:09 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.101
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [rest_client.get_nodes_version] Node version in cluster 6.5.0-2830-enterprise
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [rest_client.get_nodes_versions] Node versions in cluster [u'6.5.0-2830-enterprise', u'6.5.0-2830-enterprise']
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [basetestcase.setUp] ============== basetestcase setup was started for test #1 test_killing_eventing_consumer_when_eventing_is_processing_mutations==============
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleting existing buckets [u'dst_bucket', u'metadata', u'src_bucket'] on 10.143.190.101
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket dst_bucket ...
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : dst_bucket from 10.143.190.101
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete....
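(Note: the bucket cleanup being logged here is essentially a DELETE per bucket followed by polling until the bucket list is empty. A minimal sketch of that flow, using the standard /pools/default/buckets REST endpoints and the Administrator credentials from the log; the helper name and timeout are my own, not testrunner's:

# Sketch of delete_all_buckets_or_assert / wait_for_bucket_deletion from the log above.
import time
import requests

def delete_all_buckets(host, user="Administrator", password="password", timeout=120):
    base = "http://%s:8091/pools/default/buckets" % host
    auth = (user, password)
    for bucket in requests.get(base, auth=auth).json():
        requests.delete("%s/%s" % (base, bucket["name"]), auth=auth)
    deadline = time.time() + timeout
    while time.time() < deadline:
        if not requests.get(base, auth=auth).json():   # no buckets left
            return
        time.sleep(2)                                  # same 2 s pause the helper uses
    raise AssertionError("buckets still present on %s after %ss" % (host, timeout))

End of note; the transcript continues below.)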
2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 10.143.190.101 existing buckets : [u'metadata', u'src_bucket'] 2019-04-04 11:22:10 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket metadata ... 2019-04-04 11:22:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : metadata from 10.143.190.101 2019-04-04 11:22:11 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete.... 2019-04-04 11:22:11 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 10.143.190.101 existing buckets : [u'src_bucket'] 2019-04-04 11:22:11 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] remove bucket src_bucket ... 2019-04-04 11:22:12 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] deleted bucket : src_bucket from 10.143.190.101 2019-04-04 11:22:12 | INFO | MainProcess | test_thread | [bucket_helper.wait_for_bucket_deletion] waiting for bucket deletion to complete.... 2019-04-04 11:22:12 | INFO | MainProcess | test_thread | [rest_client.bucket_exists] node 10.143.190.101 existing buckets : [] 2019-04-04 11:22:12 | INFO | MainProcess | test_thread | [bucket_helper.delete_all_buckets_or_assert] sleep 2 seconds to make sure all buckets ([u'dst_bucket', u'metadata', u'src_bucket']) were deleted completely. 2019-04-04 11:22:14 | INFO | MainProcess | test_thread | [cluster_helper.cleanup_cluster] rebalancing all nodes in order to remove nodes 2019-04-04 11:22:14 | INFO | MainProcess | test_thread | [rest_client.rebalance] rebalance params : {'password': 'password', 'ejectedNodes': u'ns_1@10.143.190.102', 'user': 'Administrator', 'knownNodes': u'ns_1@10.143.190.101,ns_1@10.143.190.102'} 2019-04-04 11:22:14 | INFO | MainProcess | test_thread | [rest_client.rebalance] rebalance operation started 2019-04-04 11:22:14 | INFO | MainProcess | test_thread | [rest_client._rebalance_status_and_progress] rebalance percentage : 0.00 % 2019-04-04 11:22:24 | INFO | MainProcess | test_thread | [rest_client._rebalance_status_and_progress] rebalance percentage : 50.00 % 2019-04-04 11:22:44 | INFO | MainProcess | test_thread | [rest_client.monitorRebalance] rebalance progress took 30.04 seconds 2019-04-04 11:22:44 | INFO | MainProcess | test_thread | [rest_client.monitorRebalance] sleep for 10 seconds after rebalance... 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.cleanup_cluster] removed all the nodes from cluster associated with ip:10.143.190.101 port:8091 ssh_username:root ? 
[(u'ns_1@10.143.190.102', 8091)] 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 10.143.190.101:8091 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 10.143.190.101:8091 is running 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 10.143.190.102:8091 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 10.143.190.102:8091 is running 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 10.143.190.103:8091 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 10.143.190.103:8091 is running 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] waiting for ns_server @ 10.143.190.104:8091 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [cluster_helper.wait_for_ns_servers_or_assert] ns_server @ 10.143.190.104:8091 is running 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] cannot find service node eventing in cluster 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics ------- 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.101:8091 => {'swap_mem_used': 15732736, 'cpu_utilization': 0, 'mem_free': 1400176640, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'kv']} 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics --- 2019-04-04 11:22:54 | WARNING | MainProcess | test_thread | [basetestcase.tearDown] CLEANUP WAS SKIPPED Cluster instance shutdown with force 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [basetestcase.setUp] initializing cluster 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root 2019-04-04 11:22:54 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.101 2019-04-04 11:22:55 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root 2019-04-04 11:22:55 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.101 2019-04-04 11:22:56 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.101 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:22:56 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server 2019-04-04 11:22:56 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.101: systemctl stop couchbase-server.service 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.101: rm -rf /opt/couchbase/var/lib/couchbase/data/* 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command 
executed successfully 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.101: rm -rf /opt/couchbase/var/lib/couchbase/config/* 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.101 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.101 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.101: systemctl start couchbase-server.service 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:22:59 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started 2019-04-04 11:23:00 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root 2019-04-04 11:23:00 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.102 2019-04-04 11:23:00 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root 2019-04-04 11:23:01 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.102 2019-04-04 11:23:01 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.102 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:23:01 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server 2019-04-04 11:23:01 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: systemctl stop couchbase-server.service 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: rm -rf /opt/couchbase/var/lib/couchbase/data/* 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: rm -rf /opt/couchbase/var/lib/couchbase/config/* 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root 2019-04-04 11:23:04 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.102 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.102 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server 2019-04-04 11:23:05 | INFO 
| MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: systemctl start couchbase-server.service 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.103 with username:root 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.103 2019-04-04 11:23:05 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.103 with username:root 2019-04-04 11:23:06 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.103 2019-04-04 11:23:06 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.103 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:23:06 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server 2019-04-04 11:23:06 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.103: systemctl stop couchbase-server.service 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.103: rm -rf /opt/couchbase/var/lib/couchbase/data/* 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.103: rm -rf /opt/couchbase/var/lib/couchbase/config/* 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.103 with username:root 2019-04-04 11:23:09 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.103 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.103 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.103: systemctl start couchbase-server.service 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.104 with username:root 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.104 2019-04-04 11:23:10 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.104 with username:root 2019-04-04 11:23:11 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.104 2019-04-04 11:23:11 | INFO | 
MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.104 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:23:11 | INFO | MainProcess | test_thread | [remote_util.stop_couchbase] Running systemd command on this server 2019-04-04 11:23:11 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.104: systemctl stop couchbase-server.service 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [basetestcase.stop_server] Couchbase stopped 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.104: rm -rf /opt/couchbase/var/lib/couchbase/data/* 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.104: rm -rf /opt/couchbase/var/lib/couchbase/config/* 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.104 with username:root 2019-04-04 11:23:14 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.104 2019-04-04 11:23:15 | INFO | MainProcess | test_thread | [remote_util.is_couchbase_installed] 10.143.190.104 **** The version file /opt/couchbase/ VERSION.txt exists 2019-04-04 11:23:15 | INFO | MainProcess | test_thread | [remote_util.start_couchbase] Running systemd command on this server 2019-04-04 11:23:15 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.104: systemctl start couchbase-server.service 2019-04-04 11:23:15 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:15 | INFO | MainProcess | test_thread | [basetestcase.start_server] Couchbase started 2019-04-04 11:23:15 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 10 secs. ... 
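(Note: the per-node reset logged above for 10.143.190.101-104 is just three shell commands run over SSH before the service is started again. A rough equivalent with paramiko, using the exact commands from the log; the wrapper itself is hypothetical:

# Stop / wipe / start cycle that basetestcase runs on every node when reset_services=True.
import paramiko

RESET_COMMANDS = [
    "systemctl stop couchbase-server.service",
    "rm -rf /opt/couchbase/var/lib/couchbase/data/*",
    "rm -rf /opt/couchbase/var/lib/couchbase/config/*",
    "systemctl start couchbase-server.service",
]

def reset_node(ip, username="root", password=None):
    client = paramiko.SSHClient()
    client.set_missing_host_key_policy(paramiko.AutoAddPolicy())
    client.connect(ip, username=username, password=password)
    try:
        for cmd in RESET_COMMANDS:
            stdin, stdout, stderr = client.exec_command(cmd)
            stdout.channel.recv_exit_status()   # block until the command finishes
    finally:
        client.close()

for ip in ["10.143.190.101", "10.143.190.102", "10.143.190.103", "10.143.190.104"]:
    reset_node(ip)

End of note; the transcript continues below.)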
2019-04-04 11:23:25 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:10.143.190.101 port:8091 ssh_username:root, nodes/self: {'ip': u'10.143.190.101', 'availableStorage': [], 'rest_username': '', 'id': u'ns_1@10.143.190.101', 'uptime': u'18', 'mcdMemoryReserved': 1600, 'storageTotalRam': 2000, 'hostname': u'10.143.190.101:8091', 'storage': [], 'moxi': 11211, 'port': u'8091', 'version': u'6.5.0-2830-enterprise', 'memcached': 11210, 'status': u'healthy', 'clusterCompatibility': 393221, 'curr_items': 0, 'services': [u'kv'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 1684975616, 'memoryTotal': 2097688576, 'memoryQuota': 320, 'mcdMemoryAllocated': 1600, 'os': u'x86_64-unknown-linux-gnu', 'ports': []} 2019-04-04 11:23:25 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://10.143.190.101:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': '*/*', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==\n'} error: 404 reason: unknown "unknown pool" auth: Administrator:password 2019-04-04 11:23:25 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=1066 2019-04-04 11:23:25 | INFO | MainProcess | Cluster_Thread | [rest_client.init_node_services] /node/controller/setupServices params on 10.143.190.101: 8091:services=kv&password=password&hostname=10.143.190.101&user=Administrator 2019-04-04 11:23:25 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 10.143.190.101:8091:username=Administrator&password=password&port=8091 2019-04-04 11:23:25 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] Connected to 10.143.190.101 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.101: curl http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.101:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.101:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 
2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:10.143.190.102 port:8091 ssh_username:root, nodes/self: {'ip': u'10.143.190.102', 'availableStorage': [], 'rest_username': '', 'id': u'ns_1@10.143.190.102', 'uptime': u'14', 'mcdMemoryReserved': 1600, 'storageTotalRam': 2000, 'hostname': u'10.143.190.102:8091', 'storage': [], 'moxi': 11211, 'port': u'8091', 'version': u'6.5.0-2830-enterprise', 'memcached': 11210, 'status': u'healthy', 'clusterCompatibility': 393221, 'curr_items': 0, 'services': [u'kv'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 1742950400, 'memoryTotal': 2097688576, 'memoryQuota': 320, 'mcdMemoryAllocated': 1600, 'os': u'x86_64-unknown-linux-gnu', 'ports': []} 2019-04-04 11:23:26 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://10.143.190.102:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': '*/*', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==\n'} error: 404 reason: unknown "unknown pool" auth: Administrator:password 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=1066 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 10.143.190.102:8091:username=Administrator&password=password&port=8091 2019-04-04 11:23:26 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] Connected to 10.143.190.102 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: curl http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.102:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.102:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 
2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:10.143.190.103 port:8091 ssh_username:root, nodes/self: {'ip': u'127.0.0.1', 'availableStorage': [], 'rest_username': '', 'id': u'ns_1@127.0.0.1', 'uptime': u'13', 'mcdMemoryReserved': 1600, 'storageTotalRam': 2000, 'hostname': u'10.143.190.103:8091', 'storage': [], 'moxi': 11211, 'port': u'8091', 'version': u'6.5.0-2830-enterprise', 'memcached': 11210, 'status': u'healthy', 'clusterCompatibility': 393221, 'curr_items': 0, 'services': [u'kv'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 1683677184, 'memoryTotal': 2097688576, 'memoryQuota': 320, 'mcdMemoryAllocated': 1600, 'os': u'x86_64-unknown-linux-gnu', 'ports': []} 2019-04-04 11:23:27 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://10.143.190.103:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': '*/*', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==\n'} error: 404 reason: unknown "unknown pool" auth: Administrator:password 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=1066 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 10.143.190.103:8091:username=Administrator&password=password&port=8091 2019-04-04 11:23:27 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] connecting to 10.143.190.103 with username:root 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] Connected to 10.143.190.103 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.103: curl http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.103:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.103:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 
2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [task.execute] server: ip:10.143.190.104 port:8091 ssh_username:root, nodes/self: {'ip': u'127.0.0.1', 'availableStorage': [], 'rest_username': '', 'id': u'ns_1@127.0.0.1', 'uptime': u'8', 'mcdMemoryReserved': 1600, 'storageTotalRam': 2000, 'hostname': u'10.143.190.104:8091', 'storage': [], 'moxi': 11211, 'port': u'8091', 'version': u'6.5.0-2830-enterprise', 'memcached': 11210, 'status': u'healthy', 'clusterCompatibility': 393221, 'curr_items': 0, 'services': [u'kv'], 'rest_password': '', 'clusterMembership': u'active', 'memoryFree': 1669611520, 'memoryTotal': 2097688576, 'memoryQuota': 320, 'mcdMemoryAllocated': 1600, 'os': u'x86_64-unknown-linux-gnu', 'ports': []} 2019-04-04 11:23:28 | ERROR | MainProcess | Cluster_Thread | [rest_client._http_request] GET http://10.143.190.104:8091/pools/default body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': '*/*', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==\n'} error: 404 reason: unknown "unknown pool" auth: Administrator:password 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster_memoryQuota] pools/default params : memoryQuota=1066 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [rest_client.init_cluster] settings/web params on 10.143.190.104:8091:username=Administrator&password=password&port=8091 2019-04-04 11:23:28 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] connecting to 10.143.190.104 with username:root 2019-04-04 11:23:29 | INFO | MainProcess | Cluster_Thread | [remote_util.__init__] Connected to 10.143.190.104 2019-04-04 11:23:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.104: curl http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 2019-04-04 11:23:29 | INFO | MainProcess | Cluster_Thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:29 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.104:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 2019-04-04 11:23:29 | INFO | MainProcess | Cluster_Thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.104:8091: True content: [6,5] command: cluster_compat_mode:get_compat_version(). 2019-04-04 11:23:29 | INFO | MainProcess | Cluster_Thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2019-04-04 11:23:29 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add built-in 'cbadminbucket' user to node 10.143.190.101 **** 2019-04-04 11:23:29 | ERROR | MainProcess | test_thread | [rest_client._http_request] DELETE http://10.143.190.101:8091/settings/rbac/users/local/cbadminbucket body: headers: {'Content-Type': 'application/x-www-form-urlencoded', 'Accept': '*/*', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==\n'} error: 404 reason: unknown "User was not found." auth: Administrator:password 2019-04-04 11:23:29 | INFO | MainProcess | test_thread | [internal_user.delete_user] Exception while deleting user. Exception is -"User was not found." 
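(Note: the four node-init blocks above repeat the same handful of REST calls per node: set the memory quota, declare the node's services (shown here only for the first node), set the admin credentials, and pick the indexer storage mode. A condensed sketch with the endpoints and values as logged; the helper and its defaults are assumptions, not testrunner code:

import requests

def init_node(ip, services=None, user="Administrator", password="password"):
    url = "http://%s:8091" % ip
    auth = (user, password)
    # pools/default params : memoryQuota=1066  (value taken from the log)
    requests.post(url + "/pools/default", data={"memoryQuota": 1066}, auth=auth)
    if services:
        # /node/controller/setupServices params : services=kv&password=...&hostname=...&user=...
        requests.post(url + "/node/controller/setupServices",
                      data={"services": services, "hostname": ip,
                            "user": user, "password": password}, auth=auth)
    # settings/web params : username=Administrator&password=password&port=8091
    requests.post(url + "/settings/web",
                  data={"username": user, "password": password, "port": 8091}, auth=auth)
    # settings/indexes params : storageMode=plasma
    requests.post(url + "/settings/indexes", data={"storageMode": "plasma"}, auth=auth)

init_node("10.143.190.101", services="kv")

End of note; the transcript continues below.)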
2019-04-04 11:23:29 | INFO | MainProcess | test_thread | [basetestcase.add_built_in_server_user] **** add 'admin' role to 'cbadminbucket' user **** 2019-04-04 11:23:29 | INFO | MainProcess | test_thread | [basetestcase.setUp] done initializing cluster 2019-04-04 11:23:30 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 10.143.190.102:8091 to cluster 2019-04-04 11:23:30 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @10.143.190.102:8091 to this cluster @10.143.190.101:8091 2019-04-04 11:23:32 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 10.143.190.103:8091 to cluster 2019-04-04 11:23:32 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @10.143.190.103:8091 to this cluster @10.143.190.101:8091 2019-04-04 11:23:35 | INFO | MainProcess | Cluster_Thread | [task.add_nodes] adding node 10.143.190.104:8091 to cluster 2019-04-04 11:23:35 | INFO | MainProcess | Cluster_Thread | [rest_client.add_node] adding remote node @10.143.190.104:8091 to this cluster @10.143.190.101:8091 2019-04-04 11:23:37 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance params : {'password': 'password', 'ejectedNodes': '', 'user': 'Administrator', 'knownNodes': u'ns_1@10.143.190.103,ns_1@10.143.190.104,ns_1@10.143.190.101,ns_1@10.143.190.102'} 2019-04-04 11:23:37 | INFO | MainProcess | Cluster_Thread | [rest_client.rebalance] rebalance operation started 2019-04-04 11:23:37 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_status_and_progress] rebalance percentage : 0.00 % 2019-04-04 11:23:37 | INFO | MainProcess | Cluster_Thread | [task.check] Rebalance - status: running, progress: 0.00% 2019-04-04 11:23:47 | INFO | MainProcess | Cluster_Thread | [rest_client._rebalance_status_and_progress] rebalance percentage : 50.00 % 2019-04-04 11:23:47 | INFO | MainProcess | Cluster_Thread | [task.check] Rebalance - status: running, progress: 50.00% 2019-04-04 11:23:57 | INFO | MainProcess | Cluster_Thread | [task.check] Rebalance - status: none, progress: 100.00% 2019-04-04 11:23:57 | INFO | MainProcess | Cluster_Thread | [task.check] rebalancing was completed with progress: 100% in 20.1142430305 sec 2019-04-04 11:23:57 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root 2019-04-04 11:23:57 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.101 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.101: curl http://Administrator:password@localhost:8091/diag/eval -X POST -d 'ns_config:set(allow_nonlocal_eval, true).' 
2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.enable_diag_eval_on_non_local_hosts] Enabled diag/eval for non-local hosts from 10.143.190.101 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.setUp] ============== basetestcase setup was finished for test #1 test_killing_eventing_consumer_when_eventing_is_processing_mutations ============== 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics ------- 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.103:8091 => {'swap_mem_used': 512000, 'cpu_utilization': 0, 'mem_free': 1657700352, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'index']} 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.104:8091 => {'swap_mem_used': 286720, 'cpu_utilization': 2, 'mem_free': 1651175424, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'n1ql']} 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.101:8091 => {'swap_mem_used': 15732736, 'cpu_utilization': 2, 'mem_free': 1654562816, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'kv']} 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.102:8091 => {'swap_mem_used': 15368192, 'cpu_utilization': 0, 'mem_free': 1716256768, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'eventing']} 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics --- 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.101 with username:root 2019-04-04 11:23:58 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.101 2019-04-04 11:23:59 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:10.143.190.104 port:8091 ssh_username:root] 2019-04-04 11:23:59 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root] 2019-04-04 11:23:59 | INFO | MainProcess | test_thread | [rest_client.set_indexer_storage_mode] settings/indexes params : storageMode=plasma 2019-04-04 11:23:59 | INFO | MainProcess | test_thread | [eventing_base.setUp] Setting the min possible memory quota so that adding mode nodes to the cluster wouldn't be a problem. 
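(Note: the kv/eventing/index/n1ql topology printed in the cluster statistics above comes from the add-node-and-rebalance step logged just before it. A bare-bones sketch of that sequence, wrapping the documented /controller/addNode and /controller/rebalance calls and polling /pools/default/tasks for progress; helper names and the polling interval are my own:

import time
import requests

AUTH = ("Administrator", "password")
MASTER = "http://10.143.190.101:8091"

def add_node(hostname, services):
    requests.post(MASTER + "/controller/addNode",
                  data={"hostname": hostname, "user": AUTH[0],
                        "password": AUTH[1], "services": services}, auth=AUTH)

def rebalance_and_wait(known_nodes, ejected_nodes="", poll=10):
    requests.post(MASTER + "/controller/rebalance",
                  data={"knownNodes": known_nodes, "ejectedNodes": ejected_nodes}, auth=AUTH)
    while True:
        tasks = requests.get(MASTER + "/pools/default/tasks", auth=AUTH).json()
        rebalance = [t for t in tasks if t.get("type") == "rebalance"]
        if not rebalance or rebalance[0].get("status") != "running":
            return
        time.sleep(poll)

add_node("10.143.190.102", "eventing")
add_node("10.143.190.103", "index")
add_node("10.143.190.104", "n1ql")
rebalance_and_wait("ns_1@10.143.190.101,ns_1@10.143.190.102,"
                   "ns_1@10.143.190.103,ns_1@10.143.190.104")

End of note; the transcript continues below.)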
2019-04-04 11:23:59 | INFO | MainProcess | test_thread | [rest_client.set_service_memoryQuota] pools/default params : memoryQuota=330
2019-04-04 11:23:59 | INFO | MainProcess | test_thread | [rest_client.set_service_memoryQuota] pools/default params : indexMemoryQuota=256
2019-04-04 11:24:00 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of n1ql nodes in cluster: [ip:10.143.190.104 port:8091 ssh_username:root]
2019-04-04 11:24:00 | INFO | MainProcess | test_thread | [rest_client.set_service_memoryQuota] pools/default params : memoryQuota=700
2019-04-04 11:24:00 | INFO | MainProcess | test_thread | [eventing_recovery.setUp] 100
2019-04-04 11:24:00 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.143.190.101:8091/pools/default/buckets with param: bucketType=membase&threadsNumber=3&authType=none&compressionMode=passive&replicaIndex=1&name=src_bucket&evictionPolicy=valueOnly&flushEnabled=1&replicaNumber=1&ramQuotaMB=100
2019-04-04 11:24:00 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.01 seconds to create bucket src_bucket
2019-04-04 11:24:00 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : src_bucket in 10.143.190.101 to accept set ops
2019-04-04 11:24:02 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 src_bucket
2019-04-04 11:24:02 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 src_bucket
2019-04-04 11:24:04 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'src_bucket' was created with per node RAM quota: 100
2019-04-04 11:24:05 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.143.190.101:8091/pools/default/buckets with param: bucketType=membase&threadsNumber=3&authType=none&compressionMode=passive&replicaIndex=1&name=dst_bucket&evictionPolicy=valueOnly&flushEnabled=1&replicaNumber=1&ramQuotaMB=100
2019-04-04 11:24:05 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.02 seconds to create bucket dst_bucket
2019-04-04 11:24:05 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : dst_bucket in 10.143.190.101 to accept set ops
2019-04-04 11:24:07 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 dst_bucket
2019-04-04 11:24:08 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 dst_bucket
2019-04-04 11:24:09 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'dst_bucket' was created with per node RAM quota: 100
2019-04-04 11:24:10 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] http://10.143.190.101:8091/pools/default/buckets with param: bucketType=membase&threadsNumber=3&authType=none&compressionMode=passive&replicaIndex=1&name=metadata&evictionPolicy=valueOnly&flushEnabled=1&replicaNumber=1&ramQuotaMB=400
2019-04-04 11:24:10 | INFO | MainProcess | Cluster_Thread | [rest_client.create_bucket] 0.02 seconds to create bucket metadata
2019-04-04 11:24:10 | INFO | MainProcess | Cluster_Thread | [bucket_helper.wait_for_memcached] waiting for memcached bucket : metadata in 10.143.190.101 to accept set ops
2019-04-04 11:24:13 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 metadata
2019-04-04 11:24:13 | INFO | MainProcess | Cluster_Thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 metadata
2019-04-04 11:24:14 | INFO | MainProcess | Cluster_Thread | [task.check] bucket 'metadata' was created with per node RAM quota: 400
2019-04-04 11:24:14 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root]
2019-04-04 11:24:14 | INFO | MainProcess | test_thread | [eventing_base.deploy_function] Deploying the following handler code : Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations with {'buckets': [{'alias': 'dst_bucket', 'bucket_name': 'dst_bucket'}], 'source_bucket': 'src_bucket', 'metadata_bucket': 'metadata'}
2019-04-04 11:24:14 | INFO | MainProcess | test_thread | [eventing_base.deploy_function]
function OnUpdate(doc, meta) {
    var doc_id = meta.id;
    log('creating document for : ', doc);
    dst_bucket[doc_id] = {'doc_id' : doc_id}; // SET operation
}
// This is intentionally left blank
function OnDelete(meta) {
    log('deleting document', meta.id);
    delete dst_bucket[meta.id]; // DELETE operation
}
2019-04-04 11:24:16 | INFO | MainProcess | test_thread | [eventing_base.deploy_function] deploy Application : {"code":0,"info":{"status":"Stored function: 'Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations' in metakv","warnings":[" Do not use in production environments"]}}
2019-04-04 11:24:16 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs. Waiting for eventing node to come out of bootstrap state... ...
2019-04-04 11:24:46 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs. Waiting for eventing node to come out of bootstrap state... ...
2019-04-04 11:25:16 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs. Waiting for eventing node to come out of bootstrap state... ...
2019-04-04 11:25:47 | INFO | MainProcess | test_thread | [basetestcase.load] create 4032 to src_bucket documents...
2019-04-04 11:25:47 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 src_bucket
2019-04-04 11:26:00 | INFO | MainProcess | test_thread | [basetestcase.load] LOAD IS FINISHED
2019-04-04 11:26:00 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root
2019-04-04 11:26:00 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.102
2019-04-04 11:26:00 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: killall -9 eventing-consumer
2019-04-04 11:26:00 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2019-04-04 11:26:01 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 20 secs. Waiting for Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations to deployed... ...
2019-04-04 11:26:21 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 20 secs. Waiting for Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations to deployed... ...
2019-04-04 11:26:41 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs. ...
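(Note: the verify_eventing_results step that follows compares the destination bucket's item count against the number of documents loaded into src_bucket. A rough stand-alone equivalent of that check, polling the bucket's basicStats; the function name and timeout are assumptions:

import time
import requests

def wait_for_doc_count(host, bucket, expected, timeout=600,
                       auth=("Administrator", "password")):
    url = "http://%s:8091/pools/default/buckets/%s" % (host, bucket)
    deadline = time.time() + timeout
    count = None
    while time.time() < deadline:
        count = requests.get(url, auth=auth).json()["basicStats"]["itemCount"]
        if count == expected:
            return count
        time.sleep(10)
    raise AssertionError("expected %s docs in %s, last saw %s" % (expected, bucket, count))

wait_for_doc_count("10.143.190.101", "dst_bucket", 4032)

End of note; the transcript continues below.)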
2019-04-04 11:27:11 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root] 2019-04-04 11:27:11 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root] 2019-04-04 11:27:11 | INFO | MainProcess | test_thread | [eventing_base.verify_eventing_results] Final docs count... Current : 4032 Expected : 4032 2019-04-04 11:27:12 | INFO | MainProcess | test_thread | [eventing_base.verify_eventing_results] Stats for Node 10.143.190.102 is [ { "dcp_feed_boundary": "everything", "event_processing_stats": { "adhoc_timer_response_received": 9, "agg_messages_sent_to_worker": 5893, "agg_queue_memory": 0, "agg_queue_memory_cap": 132120576, "agg_queue_size": 0, "agg_queue_size_cap": 300000, "agg_timer_feedback_queue_cap": 1500, "agg_timer_feedback_queue_size": 0, "dcp_mutation": 3804, "dcp_mutation_sent_to_worker": 3804, "dcp_snapshot": 994, "dcp_stream_req_counter": 1024, "dcp_streamreq": 1024, "execution_stats": 210, "failure_stats": 210, "latency_stats": 210, "lcb_exception_stats": 210, "log_level": 3, "thr_count": 3, "thr_map": 3, "v8_init": 3, "v8_load": 3, "worker_spawn_counter": 3 }, "events_remaining": { "dcp_backlog": 0 }, "execution_stats": { "agg_queue_memory": 0, "agg_queue_size": 0, "dcp_delete_msg_counter": 0, "dcp_delete_parse_failure": 0, "dcp_mutation_msg_counter": 3804, "dcp_mutation_parse_failure": 0, "enqueued_dcp_delete_msg_counter": 0, "enqueued_dcp_mutation_msg_counter": 3804, "enqueued_timer_msg_counter": 0, "feedback_queue_size": 0, "filtered_dcp_delete_counter": 0, "filtered_dcp_mutation_counter": 0, "lcb_retry_failure": 0, "messages_parsed": 5881, "on_delete_failure": 0, "on_delete_success": 0, "on_update_failure": 0, "on_update_success": 3804, "timer_create_failure": 0, "timer_msg_counter": 0, "timer_responses_sent": 0, "timestamp": { "19136": "2019-04-04T05:57:12Z", "19137": "2019-04-04T05:57:11Z", "19138": "2019-04-04T05:57:11Z" }, "uv_try_write_failure_counter": 0 }, "failure_stats": { "app_worker_setting_events_lost": 0, "bucket_op_exception_count": 0, "checkpoint_failure_count": 0, "dcp_events_lost": 0, "debugger_events_lost": 0, "delete_events_lost": 0, "mutation_events_lost": 0, "n1ql_op_exception_count": 0, "timeout_count": 0, "timer_callback_missing_counter": 0, "timer_context_size_exceeded_counter": 0, "timer_events_lost": 0, "timestamp": { "19136": "2019-04-04T05:57:12Z", "19137": "2019-04-04T05:57:11Z", "19138": "2019-04-04T05:57:11Z" }, "v8worker_events_lost": 0 }, "function_id": 1182888502, "function_name": "Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations", "gocb_creds_request_counter": 14, "internal_vb_distribution_stats": { "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_0": "[0-341]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_1": "[342-682]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_2": "[683-1023]" }, "latency_percentile_stats": { "100": 44900, "50": 600, "80": 1100, "90": 1800, "95": 2900, "99": 6600 }, "lcb_creds_request_counter": 12, "lcb_exception_stats": {}, "metastore_stats": {}, "planner_stats": [ { "host_name": "10.143.190.102:8096", "start_vb": 0, "vb_count": 1024 } ], "vb_distribution_stats_from_metadata": { "10.143.190.102:8096": { 
"worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_0": "[0-341]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_1": "[342-682]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_2": "[683-1023]" } }, "worker_pids": { "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_0": 19138, "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_1": 19136, "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_2": 19137 } } ] 2019-04-04 11:27:13 | INFO | MainProcess | test_thread | [basetestcase.load] delete 4032 to src_bucket documents... 2019-04-04 11:27:13 | INFO | MainProcess | test_thread | [data_helper.direct_client] creating direct client 10.143.190.101:11210 src_bucket 2019-04-04 11:27:29 | INFO | MainProcess | test_thread | [basetestcase.load] LOAD IS FINISHED 2019-04-04 11:27:29 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root 2019-04-04 11:27:29 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.102 2019-04-04 11:27:30 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: killall -9 eventing-consumer 2019-04-04 11:27:30 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully 2019-04-04 11:27:30 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root] 2019-04-04 11:27:30 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root] 2019-04-04 11:27:30 | INFO | MainProcess | test_thread | [basetestcase.sleep] sleep for 30 secs. Waiting for handler code Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations to complete bucket operations... Current : 191 Expected : 0 ... 2019-04-04 11:28:00 | INFO | MainProcess | test_thread | [eventing_base.verify_eventing_results] Final docs count... 
Current : 0 Expected : 0 2019-04-04 11:28:02 | INFO | MainProcess | test_thread | [eventing_base.verify_eventing_results] Stats for Node 10.143.190.102 is [ { "dcp_feed_boundary": "everything", "event_processing_stats": { "adhoc_timer_response_received": 3, "agg_messages_sent_to_worker": 3110, "agg_queue_memory": 0, "agg_queue_memory_cap": 132120576, "agg_queue_size": 0, "agg_queue_size_cap": 300000, "agg_timer_feedback_queue_cap": 1500, "agg_timer_feedback_queue_size": 0, "dcp_deletion": 2107, "dcp_deletion_sent_to_worker": 2107, "dcp_snapshot": 522, "dcp_stream_req_counter": 550, "dcp_streamreq": 541, "execution_stats": 90, "failure_stats": 90, "is_bootstrapping": 3, "is_rebalance_ongoing": 3, "latency_stats": 90, "lcb_exception_stats": 90, "log_level": 3, "reb_vb_remaining_to_own": 486, "reb_vb_remaining_to_stream_req": 474, "thr_count": 3, "thr_map": 3, "v8_init": 3, "v8_load": 3, "worker_spawn_counter": 6 }, "events_remaining": { "dcp_backlog": 3837 }, "execution_stats": { "agg_queue_memory": 0, "agg_queue_size": 0, "dcp_delete_msg_counter": 2107, "dcp_delete_parse_failure": 0, "dcp_mutation_msg_counter": 0, "dcp_mutation_parse_failure": 0, "enqueued_dcp_delete_msg_counter": 2107, "enqueued_dcp_mutation_msg_counter": 0, "enqueued_timer_msg_counter": 0, "feedback_queue_size": 0, "filtered_dcp_delete_counter": 0, "filtered_dcp_mutation_counter": 0, "lcb_retry_failure": 0, "messages_parsed": 3098, "on_delete_failure": 0, "on_delete_success": 2107, "on_update_failure": 0, "on_update_success": 0, "timer_create_failure": 0, "timer_msg_counter": 0, "timer_responses_sent": 0, "timestamp": { "19214": "2019-04-04T05:58:01Z", "19215": "2019-04-04T05:58:01Z", "19216": "2019-04-04T05:58:01Z" }, "uv_try_write_failure_counter": 0 }, "failure_stats": { "app_worker_setting_events_lost": 0, "bucket_op_exception_count": 0, "checkpoint_failure_count": 0, "dcp_events_lost": 0, "debugger_events_lost": 0, "delete_events_lost": 0, "mutation_events_lost": 0, "n1ql_op_exception_count": 0, "timeout_count": 0, "timer_callback_missing_counter": 0, "timer_context_size_exceeded_counter": 0, "timer_events_lost": 0, "timestamp": { "19214": "2019-04-04T05:58:01Z", "19215": "2019-04-04T05:58:01Z", "19216": "2019-04-04T05:58:01Z" }, "v8worker_events_lost": 0 }, "function_id": 1182888502, "function_name": "Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations", "gocb_creds_request_counter": 20, "internal_vb_distribution_stats": { "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_0": "[0-177]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_1": "[342-521]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_2": "[683-862]" }, "latency_percentile_stats": { "100": 44900, "50": 500, "80": 1100, "90": 1900, "95": 3300, "99": 7600 }, "lcb_creds_request_counter": 18, "lcb_exception_stats": {}, "metastore_stats": {}, "planner_stats": [ { "host_name": "10.143.190.102:8096", "start_vb": 0, "vb_count": 1024 } ], "vb_distribution_stats_from_metadata": { "10.143.190.102:8096": { "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_0": "[0-49, 56-341]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_1": "[342-682]", "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_2": "[683-738, 740-1023]" } }, "worker_pids": { 
"worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_0": 19215, "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_1": 19214, "worker_Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations_2": 19216 } } ] 2019-04-04 11:28:03 | ERROR | MainProcess | test_thread | [rest_client._http_request] POST http://10.143.190.102:8096/api/v1/functions/Function_445021330_test_killing_eventing_consumer_when_eventing_is_processing_mutations/settings body: {"processing_status": false, "deployment_status": false} headers: {'Content-type': 'application/json', 'Authorization': 'Basic QWRtaW5pc3RyYXRvcjpwYXNzd29yZA==\n'} error: 406 reason: unknown {"name":"ERR_REBALANCE_ONGOING","code":36,"description":"Rebalance ongoing on some/all Eventing nodes, creating new functions, deployment or undeployment of existing functions is not allowed","attributes":null,"runtime_info":{"code":36,"info":"Rebalance ongoing on some/all Eventing nodes, creating new functions, deployment or undeployment of existing functions is not allowed"}} auth: Administrator:password ERROR 2019-04-04 11:28:03 | INFO | MainProcess | test_thread | [basetestcase.get_nodes_from_services_map] list of eventing nodes in cluster: [ip:10.143.190.102 port:8091 ssh_username:root] 2019-04-04 11:28:03 | INFO | MainProcess | test_thread | [remote_util.__init__] connecting to 10.143.190.102 with username:root 2019-04-04 11:28:03 | INFO | MainProcess | test_thread | [remote_util.__init__] Connected to 10.143.190.102 2019-04-04 11:28:03 | INFO | MainProcess | test_thread | [rest_client.diag_eval] /diag/eval status on 10.143.190.102:8091: True content: "/opt/couchbase/var/lib/couchbase/logs" command: filename:absname(element(2, application:get_env(ns_server,error_logger_mf_dir))). 
2019-04-04 11:28:03 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: zgrep "panic" "/opt/couchbase/var/lib/couchbase/logs"/eventing.log* | wc -l
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] running command.raw on 10.143.190.102: ls "/opt/couchbase/var/lib/couchbase/logs"/../crash/| wc -l
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [remote_util.execute_command_raw] command executed successfully
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [eventing_base.tearDown] Bucket dst_bucket DGM is 100
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [eventing_base.tearDown] Bucket metadata DGM is 100
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [eventing_base.tearDown] Bucket src_bucket DGM is 100
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] ------- Cluster statistics -------
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.103:8091 => {'swap_mem_used': 512000, 'cpu_utilization': 0, 'mem_free': 1628426240, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'index']}
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.104:8091 => {'swap_mem_used': 286720, 'cpu_utilization': 6, 'mem_free': 1626791936, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'n1ql']}
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.101:8091 => {'swap_mem_used': 15720448, 'cpu_utilization': 20.2247191011236, 'mem_free': 1379012608, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'kv']}
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] 10.143.190.102:8091 => {'swap_mem_used': 15302656, 'cpu_utilization': 100, 'mem_free': 1406840832, 'swap_mem_total': 536866816, 'mem_total': 2097688576, 'services': [u'eventing']}
2019-04-04 11:28:05 | INFO | MainProcess | test_thread | [basetestcase.print_cluster_stats] --- End of cluster statistics ---
2019-04-04 11:28:05 | WARNING | MainProcess | test_thread | [basetestcase.tearDown] CLEANUP WAS SKIPPED
Cluster instance shutdown with force
======================================================================
ERROR: test_killing_eventing_consumer_when_eventing_is_processing_mutations (eventing.eventing_recovery.EventingRecovery)
----------------------------------------------------------------------
Traceback (most recent call last):
  File "pytests/eventing/eventing_recovery.py", line 129, in test_killing_eventing_consumer_when_eventing_is_processing_mutations
    self.undeploy_and_delete_function(body)
  File "./pytests/eventing/eventing_base.py", line 300, in undeploy_and_delete_function
    self.undeploy_function(body)
  File "./pytests/eventing/eventing_base.py", line 313, in undeploy_function
    content = self.rest.undeploy_function(body['appname'])
  File "./lib/membase/api/rest_client.py", line 4244, in undeploy_function
    raise Exception(content)
Exception: {"name":"ERR_REBALANCE_ONGOING","code":36,"description":"Rebalance ongoing on some/all Eventing nodes, creating new functions, deployment or undeployment of existing functions is not allowed","attributes":null,"runtime_info":{"code":36,"info":"Rebalance ongoing on some/all Eventing nodes, creating new functions, deployment or undeployment of existing functions is not allowed"}}
----------------------------------------------------------------------
Ran 1 test in 356.594s
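(Note: the failure looks timing-related rather than a data problem. The stats dump above shows is_bootstrapping, is_rebalance_ongoing and reb_vb_remaining_to_own climbing after the second killall -9 eventing-consumer, so the undeploy request appears to have arrived while the Eventing service was still reclaiming vbucket ownership and was rejected with ERR_REBALANCE_ONGOING (code 36). One way a test could tolerate this is to retry the same settings call until the service stops reporting that error. A hedged sketch against the endpoint and body shown in the failed request; the helper name and retry policy are my own, not testrunner's:

import time
import requests

def undeploy_with_retry(eventing_node, app_name, retries=30, delay=10,
                        auth=("Administrator", "password")):
    url = "http://%s:8096/api/v1/functions/%s/settings" % (eventing_node, app_name)
    body = {"deployment_status": False, "processing_status": False}
    for _ in range(retries):
        resp = requests.post(url, json=body, auth=auth)
        if resp.status_code == 200:
            return                        # undeploy accepted
        if "ERR_REBALANCE_ONGOING" not in resp.text:
            raise Exception(resp.text)    # some other eventing error
        time.sleep(delay)                 # vbucket ownership still moving after the kill
    raise Exception("undeploy still blocked by eventing rebalance after %s retries" % retries)

End of note.)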